Amazing new application: paste the text of a programming problem into the app, and it automatically suggests techniques to solve the problem.
— Jeremy Howard (@jeremyphoward) January 24, 2019
Uses ULMFiT and fastai/@PyTorch. https://t.co/Td8mqKKwzN
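For readers curious what that recipe looks like in code, below is a minimal ULMFiT-style sketch using the fastai v1 text API: fine-tune a pretrained AWD-LSTM language model on the target corpus, then reuse its encoder for classification. The CSV names and hyperparameters are hypothetical, and fastai's exact signatures have shifted between point releases, so treat this as illustrative rather than authoritative:

```python
# Minimal ULMFiT-style sketch with the fastai v1 text API.
# NOTE: fastai's API changed across v1 point releases; the exact
# signatures here are illustrative, not authoritative.
from fastai.text import (TextLMDataBunch, TextClasDataBunch,
                         language_model_learner, text_classifier_learner,
                         AWD_LSTM)

# 1) Fine-tune a pretrained language model on the target corpus.
data_lm = TextLMDataBunch.from_csv('data/', 'problems.csv')  # hypothetical CSV
lm_learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm_learn.fit_one_cycle(1, 1e-2)
lm_learn.save_encoder('ft_enc')

# 2) Reuse the fine-tuned encoder for classification
#    (here: mapping problem statements to solution techniques).
data_clas = TextClasDataBunch.from_csv('data/', 'problems_labeled.csv',
                                       vocab=data_lm.vocab)
clf_learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clf_learn.load_encoder('ft_enc')
clf_learn.fit_one_cycle(1, 1e-2)
```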
natural-questions - https://t.co/TZKJ8QcPyE
— Python Trending (@pythontrending) January 24, 2019
Natural Questions: A new QA dataset consisting of 300,000+ naturally occurring questions (posed to Google search) with human-provided long & short answers based on Wikipedia. Looks like an exciting new benchmark!
— Sebastian Ruder (@seb_ruder) January 24, 2019
Paper: https://t.co/JzRg3E8VQg
Competition: https://t.co/avU2M0Wa9U pic.twitter.com/DIsl11cvX0
I believe this new dataset that we just released is going to be a pretty challenging and interesting one for NLP and question answering research.
— Jeff Dean (@JeffDean) January 23, 2019
"where is blood pumped after it leaves the right ventricle?"
"who did hawaii belong to before 1959 purchase?" https://t.co/uREZAyKd4H
Great post explaining what BLEU is, some of its drawbacks, alternative metrics, and some things to keep in mind for sequence-to-sequence modeling in NLP, by @rctatman https://t.co/3tXaYM7uCW
— Rachel Thomas (@math_rachel) January 23, 2019
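As a quick illustration of the metric the post discusses, NLTK's reference implementation computes BLEU as modified n-gram precision with a brevity penalty. The example sentences below are made up:

```python
# BLEU with NLTK's reference implementation (the sentences are invented).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]
hypothesis = ["the", "cat", "was", "sitting", "on", "the", "mat"]

# Smoothing avoids a hard zero when a higher-order n-gram never matches,
# which is one reason sentence-level BLEU needs care: the metric was
# designed for corpus-level evaluation.
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```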
2. A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks by @SanhEstPasMoi, @Thom_Wolf, & me: A novel hierarchical MTL model that achieves SotA on NER, entity mention detection, and relation extraction. https://t.co/SZAlnp5ZK6
— Sebastian Ruder (@seb_ruder) January 21, 2019
Say hi if you’re there! pic.twitter.com/XPjlC98HOk
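The hierarchical idea, supervising "lower" tasks such as NER at shallower encoder layers and "higher" tasks such as relation extraction at deeper ones, can be sketched in a few lines of PyTorch. This is an illustration of the general pattern, not the paper's model:

```python
# NOT the paper's architecture: a minimal sketch of hierarchical
# multi-task learning, where each task is supervised at a different
# depth of the encoder stack.
import torch.nn as nn

class HierarchicalMTL(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid=256,
                 n_ner_tags=9, n_relations=12):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lower = nn.LSTM(emb_dim, hid, batch_first=True,
                             bidirectional=True)
        self.upper = nn.LSTM(2 * hid, hid, batch_first=True,
                             bidirectional=True)
        self.ner_head = nn.Linear(2 * hid, n_ner_tags)   # per-token tags
        self.rel_head = nn.Linear(2 * hid, n_relations)  # simplified: per-token

    def forward(self, token_ids):
        x = self.embed(token_ids)
        low, _ = self.lower(x)     # NER loss is applied to these states
        high, _ = self.upper(low)  # relation loss is applied to these
        return self.ner_head(low), self.rel_head(high)
```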
Very interesting #nlp paper by @facebookai and @stanfordnlp https://t.co/RDE6N3fdC8
— Xavier🎗🤖🏃 (@xamat) January 18, 2019
Impressive that there's an accompanying article on @VentureBeat
on the same day the paper is uploaded to arXiv 😲 https://t.co/b4kALQR1oE
A chatbot that learns by chatting.
— Yann LeCun (@ylecun) January 17, 2019
Brought to you by FAIR.
"Learning from Dialogue after Deployment: Feed Yourself, Chatbot!", by Braden Hancock, Antoine... https://t.co/rbnsUWDdJf
Yes! Such good service work. This may be the last piece of core NLP black-box infrastructure that only exists in Perl. https://t.co/HOONfvAUse
— harvardnlp (@harvardnlp) January 16, 2019
If you're interested in interpretability and better understanding #NLProc models 🔎, read this excellent TACL '19 survey by @boknilev. Clearly covers important research areas.
— Sebastian Ruder (@seb_ruder) January 11, 2019
Paper: https://t.co/NPhA4UaUwC
Appendix (categorizing all methods): https://t.co/a8mFNzNd7i
Compared to RNNs, the Transformer family of architectures seemingly scales to hundreds of millions of parameters with relative ease. pic.twitter.com/Xo7Ng5b2Gj
— hardmaru (@hardmaru) January 11, 2019
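For a rough sense of the scales involved, here is a back-of-the-envelope parameter count (my own sketch, not from the tweet) comparing stacked Transformer encoder layers with stacked LSTMs of comparable width, using PyTorch's built-in modules. Both reach hundreds of millions of parameters; the tweet's point is that the Transformer remains easy to train at that size:

```python
# Rough parameter comparison at comparable width and depth.
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

d = 1024
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=16, dim_feedforward=4 * d),
    num_layers=24)
lstm = nn.LSTM(input_size=d, hidden_size=d, num_layers=24)

print(f"Transformer: {n_params(transformer) / 1e6:.0f}M params")
print(f"LSTM:        {n_params(lstm) / 1e6:.0f}M params")
```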
I like @yoavgo's experiments! Self-contained, with open code, they are an invitation to participate!
— Thomas Wolf (@Thom_Wolf) January 11, 2019
I've just added OpenAI GPT (by @AlecRad) to the BERT repo so I conducted a few LM vs. Masked-LM Transformer experiments.
Surprise: OpenAI GPT tops BERT on some experiments [1/3] https://t.co/DmOZExyBBu
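To clarify what is being compared: a causal LM (GPT) predicts each token from its left context only, while a masked LM (BERT) predicts randomly masked tokens from both sides. The sketch below is my own illustration of the two objectives, not the experiment code from the thread:

```python
# My illustration of the two training objectives, not the thread's code.
import torch

def causal_lm_targets(token_ids):
    # GPT-style: inputs are tokens[:-1], targets are tokens[1:]
    # (shift-by-one next-token prediction).
    return token_ids[:, :-1], token_ids[:, 1:]

def masked_lm_targets(token_ids, mask_id, mask_prob=0.15):
    # BERT-style (simplified: real BERT also skips special tokens and
    # sometimes keeps or randomizes masked positions): mask a random
    # subset and compute the loss only there. Targets elsewhere are set
    # to -100, PyTorch CrossEntropyLoss's default ignore_index.
    inputs = token_ids.clone()
    targets = token_ids.clone()
    mask = torch.rand(token_ids.shape) < mask_prob
    inputs[mask] = mask_id
    targets[~mask] = -100
    return inputs, targets
```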