by yoavgo on 2019-01-06 (UTC).

I expected the Transformer-based BERT models to be bad on syntax-sensitive dependencies, compared to LSTM-based models.

So I ran a few experiments. I was mistaken: they actually perform *very well*.

More details in this tech report: https://t.co/6hV9YoOvN8 pic.twitter.com/O0YwRnp7QH

— (((ل()(ل() 'yoav)))) (@yoavgo) January 6, 2019
nlp research survey
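
The linked tech report evaluates BERT on subject-verb agreement by masking the verb and checking whether the model scores the grammatical form above the ungrammatical one, even with intervening attractor nouns. Below is a minimal sketch of that style of masked-LM probe; it assumes the Hugging Face transformers library, the bert-base-uncased checkpoint, and an illustrative stimulus sentence, none of which are taken from the report itself.

    # Minimal masked-LM agreement probe (a sketch, not the report's code).
    # Mask the verb position and compare the logits BERT assigns to the
    # grammatical vs. ungrammatical verb form.
    import torch
    from transformers import BertForMaskedLM, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

    def prefers_grammatical(sentence_with_mask, good, bad):
        """True if BERT scores `good` above `bad` at the [MASK] position."""
        inputs = tokenizer(sentence_with_mask, return_tensors="pt")
        mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
        with torch.no_grad():
            logits = model(**inputs).logits[0, mask_pos]
        good_id, bad_id = tokenizer.convert_tokens_to_ids([good, bad])
        return (logits[good_id] > logits[bad_id]).item()

    # Illustrative long-distance agreement item with an attractor noun ("cabinets"):
    print(prefers_grammatical(
        "the key to the cabinets [MASK] on the table .", good="is", bad="are"))
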
by Thom_Wolf on 2019-01-11 (UTC).

I like @yoavgo's experiments! Self-contained with open code, they are an invitation to participate!

I've just added OpenAI GPT (by @AlecRad) to the BERT repo so I conducted a few LM vs. Masked-LM Transformer experiments.

Surprise: OpenAI GPT tops BERT on some experiments [1/3] https://t.co/DmOZExyBBu

— Thomas Wolf (@Thom_Wolf) January 11, 2019
nlp
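
For the LM vs. masked-LM comparison in the second tweet, a left-to-right model like OpenAI GPT has no [MASK] token, so the natural analogue is to score the complete grammatical and ungrammatical sentences and compare their likelihoods. A hedged sketch under the same assumptions (Hugging Face transformers, its openai-gpt checkpoint, an illustrative sentence pair), not the code from the thread:

    # Causal-LM variant of the agreement probe (again, only a sketch).
    # Score both full sentences and check that the grammatical one gets
    # the lower average negative log-likelihood.
    import torch
    from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

    tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
    model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt").eval()

    def mean_nll(sentence):
        """Average per-token negative log-likelihood under the LM."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            return model(ids, labels=ids).loss.item()

    good = "the key to the cabinets is on the table ."
    bad = "the key to the cabinets are on the table ."
    print(mean_nll(good) < mean_nll(bad))  # True => GPT prefers the agreeing verb
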
