Here is the code for the above screen-cap: https://t.co/2DQFxELrx9
— Thomas Wolf (@Thom_Wolf) June 15, 2020
Today we describe a #MachineLearning approach combining #NaturalLanguageProcessing with #ComputerVision to automatically extract data from structured documents—invoices, receipts, etc.—with the potential to streamline many business workflows. Learn more at https://t.co/ed87XgCTbn pic.twitter.com/6uA9GTVzMT
— Google AI (@GoogleAI) June 12, 2020
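The tweet doesn't go into implementation details, so here is a deliberately minimal sketch of the *task* (not Google's method): OCR a document image, then pattern-match a couple of fields. `pytesseract`, Pillow, and the toy regexes are all assumptions for illustration; the layout-aware NLP + CV models described in the post exist precisely because this naive approach breaks down on real invoices.

```python
# A minimal, illustrative sketch of the *task* (not Google's method):
# OCR a document image, then pattern-match a couple of fields.
# pytesseract, Pillow, and the toy regexes are assumptions for illustration.
import re

import pytesseract
from PIL import Image

def extract_receipt_fields(image_path: str) -> dict:
    """OCR a receipt image and pull out a few common fields."""
    text = pytesseract.image_to_string(Image.open(image_path))

    # Toy patterns; real invoices and receipts need layout-aware models,
    # which is exactly the gap the NLP + CV approach addresses.
    total = re.search(r"TOTAL\s*[:$]?\s*([\d.,]+)", text, re.IGNORECASE)
    date = re.search(r"(\d{1,2}/\d{1,2}/\d{2,4})", text)

    return {
        "total": total.group(1) if total else None,
        "date": date.group(1) if date else None,
    }
```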
This week at the 🤗 Reading Group, we dived into "Evaluation w/ Contrast Sets" by @nlpmattg et al. It presents a new evaluation paradigm for NLP models that aims at testing the local alignment of the decision boundary w/ the "true" decision boundary. https://t.co/ST7tXLiRUH pic.twitter.com/LhnNElOLpq
— Hugging Face (@huggingface) June 12, 2020
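The core idea is easy to operationalize: each test example is paired with small, expert-written perturbations that change the gold label, and a model is scored not just on per-example accuracy but on whole-set consistency. A minimal sketch of that metric (the `model_predict` callable and the data layout are illustrative assumptions, not the paper's code):

```python
# A minimal sketch of contrast-set evaluation (after Gardner et al., 2020).
# A "contrast set" groups a test example with small, manually perturbed
# variants that change the gold label. Besides plain accuracy, we report
# "consistency": the fraction of sets the model answers entirely correctly.
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input text, gold label)

def evaluate_contrast_sets(
    model_predict: Callable[[str], str],
    contrast_sets: List[List[Example]],
) -> dict:
    correct = total = consistent = 0
    for contrast_set in contrast_sets:
        all_right = True
        for text, gold in contrast_set:
            ok = model_predict(text) == gold
            correct += ok
            total += 1
            all_right = all_right and ok
        consistent += all_right
    return {
        "accuracy": correct / total,
        "consistency": consistent / len(contrast_sets),
    }
```

Consistency is the stricter number: a model can score high accuracy by exploiting dataset artifacts while failing almost every complete contrast set.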
.@VioletNPeng wrote a paper that produced shockingly #racist and #sexist paragraphs without any cherry picking. For @OpenAI to launch this during #BlackLivesMatter is tone deaf. pic.twitter.com/6q3szp0Mm1
— Prof. Anima Anandkumar (@AnimaAnandkumar) June 11, 2020
New study of BERT fine-tuning from @asapptech: Revisiting Few-sample BERT Fine-tuning (https://t.co/NU82CPQ4WF) with @Tianyi_Zh, Felix Wu, @katiyar_arzoo, and @kilianq. A sample of the results is in the thread. pic.twitter.com/Drhv1adKnN
— Yoav Artzi (@yoavartzi) June 11, 2020
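One of the paper's practical recommendations is to re-initialize the top few transformer layers before fine-tuning (alongside using Adam with bias correction and training longer). A hedged sketch of that trick with the 🤗 Transformers API; the model name, the layer count, and the use of the private `_init_weights` hook are illustrative assumptions, not the paper's code:

```python
# A hedged sketch of one recommendation from the paper: re-initialize the
# top encoder layers (and pooler) before fine-tuning to stabilize
# few-sample training.
from transformers import BertForSequenceClassification

def reinit_top_layers(model, n: int = 2):
    """Re-initialize the weights of the top `n` encoder layers and the pooler."""
    for layer in model.bert.encoder.layer[-n:]:
        layer.apply(model._init_weights)
    model.bert.pooler.apply(model._init_weights)
    return model

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model = reinit_top_layers(model, n=2)
# ...then fine-tune as usual, with Adam bias correction enabled
# (the paper traces much fine-tuning instability to BERTAdam omitting it).
```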
A video dissection of the paper from FAIR on program translation from one language to another in an unsupervised manner: https://t.co/Kr7hZ60IQs https://t.co/aLKCIMZJEw
— Yann LeCun (@ylecun) June 9, 2020
Microsoft fires most of its human news editors, replacing them with AI. Then things get weirder and weirder (thread) https://t.co/nEnJhA6SD4
— Janelle Shane (@JanelleCShane) June 9, 2020
Presenting PEGASUS, an approach to pre-training that uses gap-sentence generation to improve the performance of fine-tuning for #NaturalLanguageUnderstanding tasks, like abstractive summarization. Read more and try the code for yourself ↓ https://t.co/bVFCKGXZMI
— Google AI (@GoogleAI) June 9, 2020
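Gap-sentence generation removes the most "important" sentences from a document and trains a seq2seq model to regenerate them from the rest, which closely mirrors the downstream summarization task. PEGASUS scores importance with ROUGE against the remainder of the document; the sketch below substitutes a crude word-overlap proxy, and everything in it is illustrative rather than the paper's actual pipeline:

```python
# A hedged sketch of PEGASUS-style gap-sentence generation (GSG):
# remove the most "important" sentences to form the target, and use the
# remaining text (with mask tokens) as the input.

def gsg_pair(sentences, gap_ratio=0.3, mask_token="<mask_1>"):
    n_gap = max(1, int(len(sentences) * gap_ratio))

    def overlap_score(i):
        # Crude stand-in for ROUGE: fraction of the sentence's words that
        # also appear elsewhere in the document.
        rest = {w for j, s in enumerate(sentences) if j != i for w in s.split()}
        words = sentences[i].split()
        return sum(w in rest for w in words) / max(len(words), 1)

    gap_ids = set(sorted(range(len(sentences)), key=overlap_score, reverse=True)[:n_gap])
    source = " ".join(mask_token if i in gap_ids else s for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(gap_ids))
    return source, target

source, target = gsg_pair([
    "PEGASUS is a pre-training method for abstractive summarization.",
    "It masks whole sentences and learns to generate them.",
    "The weather was pleasant that day.",
])
```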
New code example on https://t.co/m6mT8SrKDD: generating text with a miniature version of GPT. Trained on IMDB movie reviews. Made by @NandanApoorv https://t.co/IPziHaTuo5
— François Chollet (@fchollet) June 9, 2020
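The linked keras.io example is self-contained, but the heart of a miniature GPT fits in a few lines: token-plus-position embeddings feeding a causally masked self-attention block and a vocabulary softmax. This is a hedged sketch in that spirit, not the example's exact code; it assumes TensorFlow ≥ 2.10 for `use_causal_mask`, and all the sizes are arbitrary:

```python
# A hedged sketch of the core of a miniature GPT in Keras: token + position
# embeddings, one causally masked self-attention block, and next-token logits.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, maxlen, embed_dim, num_heads, ff_dim = 20_000, 80, 256, 4, 512

class TokenAndPositionEmbedding(layers.Layer):
    def __init__(self, maxlen, vocab_size, embed_dim):
        super().__init__()
        self.tok = layers.Embedding(vocab_size, embed_dim)
        self.pos = layers.Embedding(maxlen, embed_dim)

    def call(self, x):
        positions = tf.range(tf.shape(x)[-1])
        return self.tok(x) + self.pos(positions)

inputs = keras.Input(shape=(maxlen,), dtype="int32")
x = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)(inputs)

# One decoder block: causal self-attention, then a position-wise FFN.
attn = layers.MultiHeadAttention(num_heads, embed_dim // num_heads)(
    x, x, use_causal_mask=True
)
x = layers.LayerNormalization()(x + attn)
ffn = layers.Dense(ff_dim, activation="gelu")(x)
ffn = layers.Dense(embed_dim)(ffn)
x = layers.LayerNormalization()(x + ffn)

outputs = layers.Dense(vocab_size)(x)  # next-token logits
model = keras.Model(inputs, outputs)
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```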
Nice post about the technology behind recent major improvements to Google Translate. I love the animation showing how much translation quality has improved across so many languages over the past 13 years (the most recent improvements give +5 BLEU points across all 100+ languages!) https://t.co/GZ7oopWmFr
— Jeff Dean (@🏡) (@JeffDean) June 9, 2020
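For context on what "+5 BLEU" means: BLEU scores a system's output against reference translations by n-gram overlap, on a 0-100 scale where a few points is a large jump. A quick check with sacreBLEU (the sentences here are invented examples):

```python
# BLEU measures n-gram overlap between system output and references.
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat sat on a mat"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus-level BLEU on a 0-100 scale
```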
Training a multilingual translation system to translate programs from one programming language to another.
No supervision.
The correctness is checked by compiling and running unit tests.
From FAIR-Paris (which turned 5 years old today). https://t.co/hYbebTSh7I
— Yann LeCun (@ylecun) June 8, 2020
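That last point is the clever part: with no parallel data, functional correctness stands in for a gold translation. A hedged sketch of the evaluation loop for a C++ target; the g++ invocation, the timeout, and the harness convention are assumptions, not FAIR's actual setup:

```python
# A hedged sketch of the evaluation idea: a translated program counts as
# correct if it compiles and its unit tests pass.
import subprocess
import tempfile
from pathlib import Path

def cpp_translation_passes(translated_cpp: str, test_harness_cpp: str) -> bool:
    """Compile translated C++ with a test harness; pass iff tests exit 0."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "candidate.cpp"
        src.write_text(translated_cpp + "\n" + test_harness_cpp)
        binary = Path(tmp) / "candidate"
        compiled = subprocess.run(
            ["g++", "-std=c++17", str(src), "-o", str(binary)],
            capture_output=True,
        )
        if compiled.returncode != 0:
            return False  # translation does not even compile
        tests = subprocess.run([str(binary)], capture_output=True, timeout=10)
        return tests.returncode == 0  # unit tests agree with the source program
```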
How big should my language model be? For NLP researchers and practitioners, that question is central. We have built a tool that calculates an optimal model size and training time for your budget so you don't have to. See it in action at https://t.co/WP4xlLet18! [1/2] pic.twitter.com/Cv2V9XTLLm
— Hugging Face (@huggingface) June 8, 2020
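The calculator builds on the scaling-law literature, which relates a compute budget to a compute-optimal parameter count. A back-of-the-envelope sketch using the commonly quoted fit from Kaplan et al. (2020), N_opt ≈ 1.3e9 · C^0.73 with C in PF-days; the constant, the exponent, and the GPU throughput figure are all illustrative assumptions, not Hugging Face's exact tool:

```python
# A back-of-the-envelope sketch of compute-optimal model sizing.

def pf_days(n_gpus: int, hours: float, tflops_per_gpu: float = 30.0) -> float:
    """Convert a GPU allocation to PF-days (1e15 FLOP/s sustained for a day)."""
    flops = n_gpus * tflops_per_gpu * 1e12 * hours * 3600
    return flops / (1e15 * 24 * 3600)

def optimal_param_count(c_pf_days: float) -> float:
    """Compute-optimal parameter count for a budget of `c_pf_days` PF-days."""
    return 1.3e9 * c_pf_days ** 0.73

budget = pf_days(n_gpus=8, hours=48)
print(f"~{optimal_param_count(budget):,.0f} params for {budget:.2f} PF-days")
```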