by pranavrajpurkar on 2018-10-12 (UTC).

@GoogleAI's BERT (by Jacob Devlin and others) just rocked our @stanfordnlp SQuAD1.1 benchmark for human-level performance on reading comprehension. Key idea is masked language models to enable pre-trained deep bidirectional representations. Likely big advancement for NLP! pic.twitter.com/9Z4P8f81NH

— Pranav Rajpurkar (@pranavrajpurkar) October 12, 2018
research, nlp
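
A quick note on the "masked language models" idea in the tweet above: instead of predicting the next word left-to-right, a fraction of the input tokens is hidden and the model learns to recover them from context on both sides, which is what makes the representations bidirectional. The snippet below is a minimal sketch of that masking step as described in the BERT paper (roughly 15% of tokens chosen as prediction targets; of those, 80% replaced by [MASK], 10% by a random token, 10% left unchanged). It is not the actual BERT preprocessing code, and the function name and toy vocabulary are made up for illustration.

```python
import random

# Sketch of BERT-style masked language modeling (illustrative only):
# select ~15% of tokens as prediction targets; of those, replace 80%
# with [MASK], 10% with a random token, and leave 10% unchanged.
def mask_tokens(tokens, vocab, mask_prob=0.15, seed=None):
    rng = random.Random(seed)
    masked = list(tokens)
    labels = [None] * len(tokens)   # None = not a prediction target
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok         # the model must recover this token
            r = rng.random()
            if r < 0.8:
                masked[i] = "[MASK]"
            elif r < 0.9:
                masked[i] = rng.choice(vocab)
            # else: keep the original token unchanged
    return masked, labels

tokens = "the cat sat on the mat".split()
vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
print(mask_tokens(tokens, vocab, seed=0))
```
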
by Thom_Wolf on 2018-10-12 (UTC).

BERT is super impressive!
Amazing development of the nice OpenAI GPT!

Human level already reached on the recent SWAG dataset (EMNLP'18)!
I'm wondering if we should consider the task "solved" or if we could/should update such an adversarially generated dataset? pic.twitter.com/GIJUFrJpUu

— Thomas Wolf (@Thom_Wolf) October 12, 2018
research, nlp
by seb_ruder on 2018-10-12 (UTC).

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding:
SOTA on 11 tasks. Main additions:
- Bidirectional LM pretraining w/ masking
- Next-sentence prediction aux task
- Bigger, more data
It seems LM pretraining is here to stay. https://t.co/lV8TkBXxY5

— Sebastian Ruder (@seb_ruder) October 12, 2018
nlp, research
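
Ruder's list above also mentions the next-sentence prediction auxiliary task. The sketch below shows roughly how such training pairs are built: half of the examples pair a sentence with its true successor (label 1, "IsNext"), half with a randomly drawn sentence (label 0, "NotNext"), and each pair is packed as [CLS] A [SEP] B [SEP]. This is only an illustration under those assumptions, not code from the BERT repository; the function name and toy sentences are invented, and a real pipeline would draw the random sentence from a different document.

```python
import random

# Sketch of next-sentence prediction pair construction (illustrative only).
def make_nsp_examples(sentences, seed=None):
    rng = random.Random(seed)
    examples = []
    for i in range(len(sentences) - 1):
        a = sentences[i]
        if rng.random() < 0.5:
            b, label = sentences[i + 1], 1      # true next sentence ("IsNext")
        else:
            b, label = rng.choice(sentences), 0  # random distractor ("NotNext");
            # a real pipeline samples from a different document to guarantee the label
        text = "[CLS] " + a + " [SEP] " + b + " [SEP]"
        examples.append((text, label))
    return examples

sents = ["the cat sat on the mat .",
         "it purred quietly .",
         "stocks fell sharply today ."]
for text, label in make_nsp_examples(sents, seed=0):
    print(label, text)
```
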
by seb_ruder on 2018-10-12 (UTC).

It's amazing how fast #NLProc is moving these days.
We have now reached super-human performance on SWAG, a commonsense task that will only be introduced at @emnlp2018 in November!
We need even more challenging tasks!
BERT: https://t.co/jJmVoH1632
SWAG: https://t.co/jblbPLLvj6 pic.twitter.com/n3ufh6hue2

— Sebastian Ruder (@seb_ruder) October 12, 2018
research, nlp
by Tim_Dettmers on 2018-10-12 (UTC).

This is the most important step in NLP in months — big! Make sure to read the BERT paper even if you are doing CV etc! Simple, but lots of compute. What does it mean for NLP? We do not know yet, but it will change how we do NLP and think about it for sure https://t.co/3N5LhFHsSj

— Tim Dettmers (@Tim_Dettmers) October 12, 2018
research, nlp
