by Thom_Wolf on 2018-08-29 (UTC).

A very nice paper for those interested in Transformers for NLP (and if you are not, you should be!). Gives insights on why these models improve on high-level metrics like BLEU/ppl. I like that they went the extra mile to get a real comparison to the (interesting) results of @ketran! https://t.co/9qoOWArtlD

— Thomas Wolf (@Thom_Wolf) August 29, 2018
learning nlp survey
by ml_review on 2018-08-31 (UTC).

Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures
w/ @RicoSennrich

1) self-attentional & CNNs do not outperform RNNs in subject-verb agreement
2) self-attentional outperform RNNs & CNNs on word sense disambiguation https://t.co/TIvuYMUuLe pic.twitter.com/Og01mkqs5K

— ML Review (@ml_review) August 31, 2018
nlp survey
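
In case the shorthand is unclear: the "self-attentional" models compared in the tweet above are Transformer-style encoders built on scaled dot-product self-attention, where every token attends to every other token in a single step, which is one intuition for why they do well on tasks needing sentence-wide context such as word sense disambiguation. Below is a minimal NumPy sketch of that mechanism, with illustrative shapes and weight names (not code from the paper):

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token representations for one sentence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every token scores every other token
    weights = softmax(scores, axis=-1)        # attention distribution per token
    return weights @ V                        # context-aware token representations

# Toy usage: 5 tokens with 8-dimensional embeddings (hypothetical sizes)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)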
