Tired of word clouds?
Want to raise your text analysis figures to the power of science?
Try this "Word-shift graph" code by @ryanjgallag: https://t.co/N8tcpCdzHt pic.twitter.com/rBv0kRxqoA
— ComputationalStoryLab (@compstorylab) May 6, 2020
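If you want to try word shift graphs without digging through the repo, here is a minimal sketch using Gallagher's shifterator package. The word-frequency dictionaries are made-up toy data, and the class and method names follow the library's README at the time, so verify them against the current documentation.

```python
# Hedged sketch: compare word usage in two toy corpora with a word shift graph.
# Assumes the shifterator package (pip install shifterator) exposes
# ProportionShift and get_shift_graph as shown in its README.
import shifterator as sh

# Toy word-frequency dictionaries standing in for two real corpora.
type2freq_1 = {"happy": 30, "sad": 10, "cloud": 5, "data": 20}
type2freq_2 = {"happy": 12, "sad": 25, "cloud": 8, "data": 22}

shift = sh.ProportionShift(type2freq_1=type2freq_1, type2freq_2=type2freq_2)
shift.get_shift_graph(system_names=["Corpus 1", "Corpus 2"])
```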
Will your next ophthalmology paper make it into a top journal? Why not use a Keras model to find out? https://t.co/lh045AoXmu
— François Chollet (@fchollet) May 6, 2020
It would be interesting to do a feature-importance analysis here.
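The linked model's details live behind the tweet's URL; as a rough illustration of the kind of Keras text classifier involved, here is a self-contained sketch that scores paper abstracts as accept/reject. The abstracts, labels, and hyperparameters are all invented for illustration.

```python
# Illustrative sketch only: a tiny Keras text classifier of the kind the
# tweet describes. The abstracts and labels below are invented toy data.
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

abstracts = ["deep learning for retinal imaging", "a case report on cataracts"]
labels = [1, 0]  # 1 = accepted by a top journal, 0 = not (toy labels)

tokenizer = Tokenizer(num_words=10000, oov_token="<unk>")
tokenizer.fit_on_texts(abstracts)
x = pad_sequences(tokenizer.texts_to_sequences(abstracts), maxlen=200)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, tf.constant(labels), epochs=3, verbose=0)
```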
Something a little different for my livestream tomorrow: instead of live coding I'm going to be doing a paper reading! We'll be reading "A Primer in BERTology: What we know about how BERT works" by @annargrs, Olga Kovaleva & @arumshisky. Come join. 😊☕️📑 https://t.co/iHymNuwVuX
— Rachael Tatman (@rctatman) May 5, 2020
Synthesizer: Rethinking Self-Attention in Transformer Models
By @ytay017 @dara_bahri @MetzlerDonald
(1) random alignment matrices perform surprisingly well
(2) learning attention weights from (query-key) interactions is not so important
https://t.co/pGwh83gilU pic.twitter.com/4LlWGaEAIB
— ML Review (@ml_review) May 5, 2020
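To make finding (1) concrete, here is a hedged PyTorch sketch of the paper's "Random Synthesizer" idea: the seq_len-by-seq_len attention matrix is a directly learned (or even fixed) parameter rather than being computed from query-key dot products. The class and variable names are illustrative, not the authors' code.

```python
# Sketch of a "Random Synthesizer" layer: attention weights are a learned
# parameter of shape (seq_len, seq_len) instead of softmax(Q @ K.T / sqrt(d)).
import torch
import torch.nn as nn

class RandomSynthesizerAttention(nn.Module):
    def __init__(self, seq_len: int, d_model: int, trainable: bool = True):
        super().__init__()
        # Randomly initialized alignment matrix; the paper also studies
        # keeping it fixed (trainable=False).
        self.attn = nn.Parameter(torch.randn(seq_len, seq_len), requires_grad=trainable)
        self.value = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); note there is no query-key interaction.
        weights = torch.softmax(self.attn, dim=-1)  # (seq_len, seq_len)
        return weights @ self.value(x)              # broadcasts over the batch

x = torch.randn(2, 16, 64)
layer = RandomSynthesizerAttention(seq_len=16, d_model=64)
print(layer(x).shape)  # torch.Size([2, 16, 64])
```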
Jina - Cloud-native neural search framework powered by state-of-the-art AI and deep learning. https://t.co/8uprkplmWR #Python #AI #DeepLearning pic.twitter.com/wRbRAZlfCt
— Python Weekly (@PythonWeekly) May 4, 2020
Facebook has open-sourced a new chatbot that it claims is more engaging and sounds more human than Google’s. https://t.co/RVpUUutAG3
— MIT Technology Review (@techreview) May 1, 2020
"This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning.
— 👩💻 DynamicWebPaige @ 127.0.0.1 🏠 (@DynamicWebPaige) May 1, 2020
We also provide a dataset of 1.39M+ instances automatically labeled for politeness."https://t.co/sKXGOId2mu#KindTwitter❤️ pic.twitter.com/y1XA9NylP6
Much of the world’s information is stored in table form. TAPAS, a new model based on the #BERT architecture, is optimized for parsing tabular data over a wide range of structures and domains for application to question-answering tasks. Learn more below: https://t.co/U9zRxaUvik
— Google AI (@GoogleAI) April 30, 2020
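TAPAS checkpoints were later published through Hugging Face Transformers; the sketch below shows the general shape of table question answering with one of them. The checkpoint name and table are assumptions for illustration, so check the Transformers documentation before relying on the exact calls.

```python
# Hedged sketch of table QA with a TAPAS checkpoint later released on the
# Hugging Face Hub (model name assumed; verify before use).
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

model_name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS expects every table cell as a string.
table = pd.DataFrame({"City": ["Paris", "London"], "Population": ["2161000", "8982000"]})
queries = ["Which city has the larger population?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
answers, _ = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(answers)  # coordinates of the predicted answer cells
```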
Blender: large-scale, open-source, open-domain chatbot from FAIR. https://t.co/M86wy5YwMt
— Yann LeCun (@ylecun) April 30, 2020
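Blender ships through Facebook's ParlAI framework. A hedged sketch of chatting with the small 90M-parameter checkpoint follows; the function and model-zoo path are taken from ParlAI's documentation at the time, so verify them locally.

```python
# Hedged sketch: chat with the 90M-parameter Blender model via ParlAI.
# create_agent_from_model_file and the zoo path are assumptions from
# ParlAI's docs; check the current documentation before use.
from parlai.core.agents import create_agent_from_model_file

agent = create_agent_from_model_file("zoo:blender/blender_90M/model")
agent.observe({"text": "Hi! How are you today?", "episode_done": False})
print(agent.act()["text"])
```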
UPDATE: We have spent the past month “fine-tuning” our approach for Closed Book QA (CBQA, no access to external knowledge) w/ T5 and now our appendix is overflowing with interesting results and new SoTAs on open domain WebQuestions and TriviaQA! https://t.co/16Mu1MR6kW
(1/7) https://t.co/gvzixqpela
— Adam Roberts (@ada_rob) April 29, 2020
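Closed-book-QA-tuned T5 checkpoints were later released on the Hugging Face Hub; a hedged sketch of querying one through Transformers (the checkpoint name is an assumption, so verify it on the Hub):

```python
# Hedged sketch: closed-book QA with a T5 checkpoint fine-tuned on
# Natural Questions. Model name assumed; verify on the Hugging Face Hub.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/t5-small-ssm-nq"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Closed book: the model answers from its parameters, with no retrieved context.
inputs = tokenizer("When was Franklin D. Roosevelt born?", return_tensors="pt")
output_ids = model.generate(**inputs)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```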
Nice Medium post on how to serve @huggingface BERT in production with pytorch/serve by MFreidank
Hat tip @joespeez https://t.co/cL3BYUi5a4 pic.twitter.com/ryyfbyZLji
— Julien Chaumond (@julien_c) April 27, 2020
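The Medium post walks through pytorch/serve in detail; as a rough sketch of what a custom TorchServe handler for a BERT classifier can look like (the class layout follows TorchServe's BaseHandler convention, but the details are assumptions, not the post's actual code):

```python
# Hedged sketch of a custom TorchServe handler for a Hugging Face BERT
# classifier, in the spirit of the linked post (not its actual code).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler

class BertHandler(BaseHandler):
    def initialize(self, context):
        # model_dir holds the files packaged by torch-model-archiver.
        model_dir = context.system_properties.get("model_dir")
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
        self.model = AutoModelForSequenceClassification.from_pretrained(model_dir)
        self.model.eval()
        self.initialized = True

    def preprocess(self, data):
        # Each request row carries the raw text under "data" or "body".
        texts = []
        for row in data:
            text = row.get("data") or row.get("body")
            if isinstance(text, (bytes, bytearray)):
                text = text.decode("utf-8")
            texts.append(text)
        return self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    def inference(self, inputs):
        with torch.no_grad():
            return self.model(**inputs).logits

    def postprocess(self, logits):
        # One predicted class index per request in the batch.
        return logits.argmax(dim=-1).tolist()
```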
I'm a big fan of the visualization of #ICLR2020 papers by @srush_nlp. https://t.co/qwrf8pqJGX
Searching for "nlp", there are some notable clusters: the two largest clusters are about pre-training (right) and Transformers (left). pic.twitter.com/bmxqQwlkbT
— Sebastian Ruder (@seb_ruder) April 27, 2020