GLUE now has a human performance estimate for all nine tasks! Overall score 86.9. Thanks to @meloncholist. https://t.co/GNGW55uo5M
— Sam Bowman (@sleepinyourhat) February 3, 2019
This is a super cool resource: Papers With Code now includes 950+ ML tasks, 500+ evaluation tables (including SOTA results) and 8500+ papers with code. Probably the largest collection of NLP tasks I've seen, including 140+ tasks and 100 datasets. https://t.co/lTAGE7LGZY pic.twitter.com/wfSyTplBR3
— Sebastian Ruder (@seb_ruder) February 1, 2019
There's a neat new position/analysis paper from @DeepMindAI on cross-task generalization in NLP (https://t.co/0eLCOO897Y; @redpony, @aggielaz are the only authors I can quickly find on Twitter). pic.twitter.com/UHU9K1eEq6
— Sam Bowman (@sleepinyourhat) February 1, 2019
The Evolved Transformer: They perform architecture search on Transformer's stackable cells for seq2seq tasks. "A much smaller, mobile-friendly, Evolved Transformer with only ~7M parameters outperforms the original Transformer by 0.7 BLEU on WMT14 EN-DE." https://t.co/ABtfdTGIYl pic.twitter.com/Rso7GUiDe9
— hardmaru (@hardmaru) February 1, 2019
spacy-stanfordnlp - 💥 Use the latest StanfordNLP research models directly in spaCy https://t.co/CsOXdw4qbO
— Python Trending (@pythontrending) February 1, 2019
Take your NLP to the next level with LASER, a newly open-sourced @facebookai natural language processing toolkit, which performs zero-shot cross-lingual transfer with 90+ languages, including languages where training data is extremely limited.
GitHub: https://t.co/DAKPZGzgPd pic.twitter.com/10wpiWzAOs
— DataScienceNigeria (@DataScienceNIG) February 1, 2019
"AllenNLP is a truly wonderful piece of software. ... There is a lack of appreciation for good coding standards in the data science community. AllenNLP is a nice exception to this rule." https://t.co/HFQvnnr9mv
— Joel Grus (@joelgrus) February 1, 2019
This is a cool Colaboratory notebook that shows how you can apply ML and NLP to the content of your own @feedly feeds. https://t.co/9RTmkiF539 pic.twitter.com/HqebWvl0AE
— Sebastian Ruder (@seb_ruder) January 28, 2019
Nice bit of academic sociology in the BERT slides. pic.twitter.com/4wmoibM02t
— Sam Bowman (@sleepinyourhat) January 28, 2019
LASER - A library to calculate and use multilingual sentence embeddings. https://t.co/SRICGx8Vf3 #python
— Python Weekly (@PythonWeekly) January 28, 2019
"A BERT Baseline for the Natural Questions," Alberti et al.: https://t.co/FGU9b9xpwf
— Miles Brundage (@Miles_Brundage) January 28, 2019
PyTorch implementation of RA-retrofit https://t.co/iwaBgQs4tZ #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) January 27, 2019