Exciting new dataset, and don't miss the appendix for this 🔥 https://t.co/Eeb2ZSVthP pic.twitter.com/1Df8akE1J2
— Jacob Eisenstein (@jacobeisenstein) January 2, 2021
Real-Time High-Resolution Background Matting
pdf: https://t.co/XUkgi62KJb
Use it in @runwayml: https://t.co/Qji6opRQeT
— AK (@ak92501) December 31, 2020
More progress on the SuperGLUE NLU leaderboard (https://t.co/ipzFoqJiyU), from an MSR team including @AllenLao and @JianfengGao0217, with a larger version of their DeBERTa: https://t.co/8WzBjgk16q pic.twitter.com/PLjVVAxCrO
— Prof. Sam Bowman (@sleepinyourhat) December 30, 2020
By that measure, MSR's model is somewhat better than T5 or RoBERTa, but it still falls back on stereotypes substantially more often than humans. We know that LMs pick up stereotypes from their training data, and that's not something we can easily counteract. Proceed with caution.
— Prof. Sam Bowman (@sleepinyourhat) December 30, 2020
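If you want to poke at DeBERTa yourself, it ships in Hugging Face transformers. Here is a minimal sketch using the public microsoft/deberta-base checkpoint as a stand-in (the 1.5B leaderboard model in the tweet is a much larger variant); the classification head is freshly initialized, so the logits are meaningless until you fine-tune.

```python
# Sketch: run a sentence pair through DeBERTa via Hugging Face transformers.
# "microsoft/deberta-base" is a stand-in checkpoint, not the leaderboard model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-base", num_labels=2  # new, untrained head
)

inputs = tokenizer("A man is playing guitar.", "Someone makes music.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # fine-tune before trusting these
print(logits.shape)  # torch.Size([1, 2])
```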
After reading @ChristophMolnar's Interpretable ML book this year, I am always on the lookout for new techniques and implementations. Just saw the DALEX paper today ("Responsible ML with Interactive Explainability and Fairness in Python"). Cool stuff: https://t.co/3jgZs5zoOB pic.twitter.com/YHMYQ1kfvo
— Sebastian Raschka (@rasbt) December 30, 2020
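The basic DALEX workflow is to wrap any fitted model in an Explainer and then ask it for global and local explanations. A minimal sketch on a scikit-learn model follows; the dataset and model choice are illustrative, not from the paper.

```python
# Sketch: global feature importance and a per-prediction break-down with dalex.
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Wrap the fitted model, then query explanations from the same object.
explainer = dx.Explainer(model, X, y, label="rf")
print(explainer.model_parts().result.head())            # permutation importance
print(explainer.predict_parts(X.iloc[[0]]).result.head())  # break-down for one row
```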
This is just the perfect content for the end of the year: "2020: A Year Full of Amazing AI Papers - A Review". A great list of this year's machine and deep learning breakthroughs, with links to the papers and nice, short video explanations: https://t.co/j444zLPI5L pic.twitter.com/Lvj3U2QQYt
— Sebastian Raschka (@rasbt) December 29, 2020
Efficient transformer architectures for vision.
Open source. https://t.co/MTCgPRgr6k
— Yann LeCun (@ylecun) December 24, 2020
Training data-efficient image transformers & distillation through attention
pdf: https://t.co/52o44tGC5G
abs: https://t.co/XohbkS7kq5
github: https://t.co/KpkMSwfKdp pic.twitter.com/KP24f00vT4
— AK (@ak92501) December 24, 2020
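The core trick in DeiT is a dedicated distillation token trained against a convnet teacher's hard predictions, averaged with the usual label loss. A minimal PyTorch sketch of that hard-distillation objective (function name and tensor shapes are mine, not the authors'):

```python
# Sketch of DeiT-style hard distillation: the class token is supervised by
# the true label, the distillation token by the teacher's argmax prediction,
# and the two cross-entropy terms are averaged.
import torch
import torch.nn.functional as F

def hard_distillation_loss(cls_logits, dist_logits, teacher_logits, targets):
    teacher_labels = teacher_logits.argmax(dim=1)  # hard teacher labels
    return 0.5 * F.cross_entropy(cls_logits, targets) + \
           0.5 * F.cross_entropy(dist_logits, teacher_labels)

# Toy usage with random tensors standing in for student/teacher outputs.
B, C = 8, 1000
loss = hard_distillation_loss(torch.randn(B, C), torch.randn(B, C),
                              torch.randn(B, C), torch.randint(0, C, (B,)))
```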
data augmentation is really all you need, huh? https://t.co/amc2GP81XG pic.twitter.com/MLH6Ulp3JE
— Kyunghyun Cho (@kchonyc) December 22, 2020
Hypersim: Photorealistic Synthetic Dataset for Indoor Scene Understanding
Photorealistic synthetic scenes have the advantage of giving us as many ground truth layers as we want to train an ML system. But is it enough for sim2real?
https://t.co/U4qXXKn6qF
https://t.co/rTnWT4XvEE pic.twitter.com/aRa5K6unWh
— hardmaru (@hardmaru) December 22, 2020
TabNet: Attentive Interpretable Tabular Learning
By @sercanarik @tomaspfister
Automates feature engineering for tabular models
Learns representations through unsupervised pre-training to predict masked features + supervised fine-tuning https://t.co/4xDY5O24tC pic.twitter.com/clKlkWH4sp
— ML Review (@ml_review) December 20, 2020
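The pre-train-then-fine-tune recipe the tweet describes is easy to try with the community pytorch-tabnet package (not the authors' own code). A minimal sketch, with random arrays standing in for a real tabular dataset:

```python
# Sketch: TabNet's two-stage recipe with pytorch-tabnet. Stage 1 reconstructs
# randomly masked features without labels; stage 2 warm-starts a classifier
# from the pretrained encoder. X and y here are toy stand-in data.
import numpy as np
from pytorch_tabnet.pretraining import TabNetPretrainer
from pytorch_tabnet.tab_model import TabNetClassifier

X = np.random.rand(2048, 16).astype(np.float32)
y = np.random.randint(0, 2, size=2048)

# Stage 1: unsupervised pre-training on masked-feature reconstruction.
pretrainer = TabNetPretrainer()
pretrainer.fit(X, pretraining_ratio=0.8, max_epochs=3, batch_size=256)

# Stage 2: supervised fine-tuning, initialized from the pretrained encoder.
clf = TabNetClassifier()
clf.fit(X, y, from_unsupervised=pretrainer, max_epochs=3, batch_size=256)
```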
Taming Transformers for High-Resolution Image Synthesis
pdf: https://t.co/fRwnXjKahS
abs: https://t.co/s9e42zZrrV
project page: https://t.co/aiA2PlSODq pic.twitter.com/emVvlP2vcg
— AK (@ak92501) December 18, 2020