Tweeted By @Thom_Wolf
Missed this great article on the Clever Hans effect in NLP
Many SOTA results we get with Bert-like models are due to these models "breaking" our datasets –in a bad sense– by exploiting their weaknesses. @benbenhh's piece has nice overview & advice.

Also follow @annargrs on this https://t.co/xPb5Nj87z9

— Thomas Wolf (@Thom_Wolf) January 10, 2020