by OriolVinyalsML on 2018-07-12 (UTC).

Universal Transformers propose to augment Transformers with Recurrence in depth and Adaptive Computation Time. This model outperforms Vanilla Transformers in MT / bAbI / LA / LTE.

Paper: https://t.co/U2YAeuO6EO
Code: Soon in https://t.co/KSuQAkn5Jh pic.twitter.com/lCKfsEAswG

— Oriol Vinyals (@OriolVinyalsML) July 12, 2018
research w_code
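
The tweet above summarizes the Universal Transformer recipe: reuse one transition block recurrently over depth and let Adaptive Computation Time (ACT) decide, per position, how many refinement steps to run. Below is a minimal, hypothetical numpy sketch of that idea, not the paper's code; the names `transition`, `recur_with_act`, and `w_halt` are invented for illustration, and the transition is a stand-in for the real shared self-attention block.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(state, W):
    # Stand-in for the shared self-attention + transition block:
    # the same weights are reused at every depth step.
    return np.tanh(state @ W)

def recur_with_act(x, W, w_halt, max_steps=8, eps=0.01):
    """Apply one shared transition recurrently in depth with ACT-style halting.

    x:      (seq_len, d_model) input states
    W:      (d_model, d_model) shared transition weights (hypothetical)
    w_halt: (d_model,) weights of a per-position halting unit (hypothetical)
    """
    state = x.copy()
    halting_prob = np.zeros(x.shape[0])   # cumulative halting probability per position
    n_updates = np.zeros(x.shape[0])      # depth steps actually run per position
    weighted_state = np.zeros_like(x)     # ACT output: halting-probability-weighted mixture

    for _ in range(max_steps):
        still_running = halting_prob < 1.0 - eps
        if not still_running.any():
            break
        state = transition(state, W)                 # same block at every depth
        p = 1.0 / (1.0 + np.exp(-(state @ w_halt)))  # sigmoid halting unit
        p = np.where(still_running, p, 0.0)
        # Positions that would cross the threshold spend their remainder and stop.
        new_halted = still_running & (halting_prob + p > 1.0 - eps)
        p = np.where(new_halted, 1.0 - halting_prob, p)
        halting_prob += p
        n_updates += still_running
        weighted_state += p[:, None] * state
    return weighted_state, n_updates

seq_len, d_model = 5, 16
x = rng.normal(size=(seq_len, d_model))
W = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
w_halt = rng.normal(size=d_model)
out, steps = recur_with_act(x, W, w_halt)
print("per-position depth steps:", steps)  # steps can differ across positions
```

The point of the sketch is the contrast with a vanilla Transformer: instead of a fixed stack of distinct layers, one block is iterated, and easy positions can halt after few iterations while harder ones keep refining.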
by GoogleAI on 2018-08-15 (UTC).

Check out Universal Transformers, new research from the Google Brain team & @DeepMindAI that extends last year's Transformer (a neural network architecture based on a self-attention mechanism) to be computationally universal. https://t.co/7loIFE9msM

— Google AI (@GoogleAI) August 15, 2018
research
