BERT's success in some benchmark tests may be simply due to the exploitation of spurious statistical cues in the dataset. Without them it is no better than random. https://t.co/o3i8FtmC8z
— /MachineLearning (@slashML) July 21, 2019
Contrary to popular belief, training a gigantic model on a humongous dataset of human text will not lead to AGI. 🙈🧠
— hardmaru (@hardmaru) July 22, 2019
Probing Neural Network Comprehension of Natural Language Arguments: https://t.co/3AtozIV03O https://t.co/Y5zg0Dfgke
Here's the link to their adversarial dataset for argument comprehension. Would've been nice (and made the paper more interesting) if they also provided more interesting adversarial examples than just one (about Google 😉), perhaps in the Appendix section. https://t.co/WUhuTR0gq6
— hardmaru (@hardmaru) July 22, 2019
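The paper linked above looks for surface cues (such as the token "not") whose mere presence predicts the label, using statistics it calls productivity and coverage. The following is a minimal sketch in that spirit, not the paper's actual code; the dataset and the helper `cue_stats` are invented for illustration:

```python
# Toy sketch of spotting a spurious surface cue, loosely in the spirit of the
# "productivity" and "coverage" statistics in Niven & Kao (2019).
# The data and the exact metric definitions here are illustrative assumptions.

def cue_stats(examples, cue):
    """examples: list of (text, label) pairs; cue: a token to test.
    Returns (coverage, productivity): coverage is the fraction of examples
    containing the cue; productivity is how often the majority label agrees
    among those examples. A high-coverage, high-productivity cue lets a
    model score well without any real comprehension."""
    hits = [(text, label) for text, label in examples if cue in text.split()]
    if not hits:
        return 0.0, 0.0
    coverage = len(hits) / len(examples)
    labels = [label for _, label in hits]
    majority = max(set(labels), key=labels.count)
    productivity = labels.count(majority) / len(hits)
    return coverage, productivity

# Invented toy data: "not" happens to correlate perfectly with label 1.
data = [
    ("sparrows can not fly south", 1),
    ("the tax plan is not fair", 1),
    ("google should be regulated", 0),
    ("the tax plan is fair", 0),
]
print(cue_stats(data, "not"))  # (0.5, 1.0): half the data, always label 1
```

On a dataset like this, a classifier can hit well above chance by keying on "not" alone, which is exactly the kind of cheat the adversarial dataset is designed to remove.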
Data leakage and model cheating are hot topics this week: https://t.co/QdKNwDmTin
— Peter Skomoroch (@peteskomoroch) July 23, 2019