@GoogleAI's BERT (by Jacob Devlin and others) just rocked our @stanfordnlp SQuAD1.1 benchmark for human-level performance on reading comprehension. Key idea is masked language models to enable pre-trained deep bidirectional representations. Likely big advancement for NLP!
— Pranav Rajpurkar (@pranavrajpurkar) October 12, 2018
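The masked language modeling the tweet refers to works by hiding a random subset of input tokens and training the model to predict the originals from both left and right context. A minimal sketch of the masking step is below; the token ids, `MASK_ID`, and vocabulary size are illustrative assumptions, while the 15% selection rate and the 80/10/10 replacement split follow the scheme described in the BERT paper.

```python
import random

MASK_ID = 103  # assumed id for the [MASK] token; real value depends on the vocabulary


def mask_tokens(token_ids, mask_prob=0.15, vocab_size=30522, seed=0):
    """BERT-style masking: select ~15% of positions for prediction; of those,
    replace 80% with [MASK], 10% with a random token, and leave 10% unchanged."""
    rng = random.Random(seed)
    inputs = list(token_ids)
    labels = [-1] * len(token_ids)  # -1 marks positions the model is not asked to predict
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok  # the model must recover the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_ID          # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token unchanged
    return inputs, labels
```

Because the model never knows which unmasked tokens it will be scored on, it must build a representation of every position that uses context from both directions, which is the "deep bidirectional" property the tweet highlights.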