A meta-learned transformer for tabular data: https://t.co/3rlNtJa2hc I was waiting for this to happen, and I'm pretty convinced that's where ML will go. From Frank Hutter's (imho legendary) lab!
— Andreas Mueller (@amuellerml) October 21, 2022
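The linked paper appears to be TabPFN, which meta-trains a transformer as a prior-fitted network so that classifying a new tabular dataset is a single forward pass rather than a training run. A minimal usage sketch, assuming the `tabpfn` package's scikit-learn-style interface (constructor arguments vary across releases):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No gradient descent happens here: "fit" essentially stores the training
# set, and prediction is one forward pass of the meta-learned transformer
# over the (training set, test points) context.
clf = TabPFNClassifier(device="cpu")
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```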
MovieCLIP: Visual Scene Recognition in Movies
abs: https://t.co/WaF11rNcX1
project page: https://t.co/ZTK9PYWmcW pic.twitter.com/KyPsp3DkNt
— AK (@_akhaliq) October 21, 2022
DiffEdit: Diffusion-based semantic image editing with mask guidance
abs: https://t.co/9pzsPwiU1K pic.twitter.com/elAYmnMh2w
— AK (@_akhaliq) October 21, 2022
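DiffEdit's key trick is inferring the edit mask automatically: noise the image, compare the denoiser's noise predictions conditioned on the source caption versus the edit caption, and threshold where they disagree. A rough sketch of that mask-estimation step; `unet` and the prompt embeddings are hypothetical stand-ins for a real diffusion pipeline, and the forward noising is simplified:

```python
import torch

@torch.no_grad()
def estimate_edit_mask(unet, x0, t, src_emb, tgt_emb, n_trials=10, q=0.9):
    """Sketch of DiffEdit-style mask estimation: pixels where the noise
    predictions under the two captions disagree most are the ones the
    edit should touch. `unet(x_t, t, cond)` is a hypothetical
    eps-prediction denoiser; x0 is the input image, shape (1, C, H, W)."""
    diffs = []
    for _ in range(n_trials):                  # average over several noise draws
        x_t = x0 + torch.randn_like(x0)        # simplified noising (a real scheduler
                                               # scales x0 and noise by alpha-bar terms)
        eps_src = unet(x_t, t, src_emb)
        eps_tgt = unet(x_t, t, tgt_emb)
        diffs.append((eps_src - eps_tgt).abs().mean(dim=1, keepdim=True))
    d = torch.stack(diffs).mean(dim=0)
    return (d > torch.quantile(d, q)).float()  # binarize into the edit mask
```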
RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses
abs: https://t.co/j8i1CgQrYb pic.twitter.com/qG1VqFWwEb
— AK (@_akhaliq) October 20, 2022
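The point of RankT5 is to train T5's output score with losses defined over a whole list of candidates per query instead of per-document classification. A minimal sketch of one such loss, listwise softmax cross entropy:

```python
import torch
import torch.nn.functional as F

def softmax_ranking_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Listwise softmax cross entropy, one of the ranking losses studied.
    scores: (batch, list_size) model scores for each query's candidates.
    labels: (batch, list_size) graded relevance (0 = not relevant)."""
    log_probs = F.log_softmax(scores, dim=-1)
    weights = labels / labels.sum(dim=-1, keepdim=True).clamp(min=1e-9)
    return -(weights * log_probs).sum(dim=-1).mean()

# Toy usage: one query, four candidates, only the second is relevant.
scores = torch.tensor([[1.2, 3.4, -0.5, 0.1]])
labels = torch.tensor([[0.0, 1.0, 0.0, 0.0]])
print(softmax_ranking_loss(scores, labels))
```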
A Unified View of Masked Image Modeling
abs: https://t.co/Kw4zgbpgXY pic.twitter.com/WS9RgFU4NJ
— AK (@_akhaliq) October 20, 2022
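The unified view casts masked image modeling as a student network regressing a teacher's representation of the masked patches, with methods differing mainly in the choice of teacher (raw pixels, hand-crafted features, or a pretrained encoder). A schematic sketch of that shared objective; `student` and `teacher` are hypothetical patch-level encoders:

```python
import torch
import torch.nn.functional as F

def mim_loss(student, teacher, patches, mask_ratio=0.4):
    """Generic masked-image-modeling objective: hide a subset of patches
    and regress the student's predictions onto the teacher's targets.
    patches: (batch, n_patches, dim)."""
    b, n, _ = patches.shape
    mask = torch.rand(b, n, device=patches.device) < mask_ratio  # patches to hide
    corrupted = patches.masked_fill(mask.unsqueeze(-1), 0.0)     # mask token = 0 here
    pred = student(corrupted)                                    # predict all positions
    with torch.no_grad():
        target = teacher(patches)                                # targets from clean input
    return F.mse_loss(pred[mask], target[mask])                  # loss on masked patches only
```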
Most ML folks I know have @AnthropicAI's Toy Models of Superposition paper on their reading list, but too few have read it.
It is one of the most interesting interpretability papers I've read in a while, and it can benefit anyone using deep learning.
Here are my takeaways! pic.twitter.com/XrQ3Pp6b6b
— Emmanuel Ameisen (@mlpowered) October 19, 2022
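The paper's toy setup is small enough to reproduce in a few lines: squeeze n sparse features through an m < n dimensional bottleneck with tied weights, and the trained model ends up storing several features per dimension, i.e. superposition. A minimal sketch of that model:

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """ReLU-output toy model from the superposition paper:
    x_hat = ReLU(W^T W x + b), with n_features > d_hidden so the
    model must share hidden dimensions between features."""
    def __init__(self, n_features: int = 20, d_hidden: int = 5):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_hidden, n_features) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x @ self.W.T                        # compress into the bottleneck
        return torch.relu(h @ self.W + self.b)  # reconstruct with tied weights

# Synthetic sparse data: features are mostly zero, which is exactly the
# regime where packing n features into d_hidden < n dimensions pays off.
model = ToyModel(n_features=20, d_hidden=5)
x = torch.rand(1024, 20) * (torch.rand(1024, 20) < 0.05)
loss = ((model(x) - x) ** 2).mean()  # reconstruction loss to minimize
```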
Token Merging: Your ViT But Faster
abs: https://t.co/14DPSqst90
github: https://t.co/oJMk2nSuvw pic.twitter.com/4hJKEZovEp
— AK (@_akhaliq) October 19, 2022
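Token Merging (ToMe) speeds up a ViT by merging, rather than pruning, the most similar tokens between blocks using a bipartite matching step. A deliberately simplified sketch of that merge (the real method weights merges by accumulated token size and never merges the class token):

```python
import torch
import torch.nn.functional as F

def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Simplified bipartite soft matching for one image: split tokens
    into two sets, find the r most similar cross-set pairs, and merge
    each pair by averaging. x: (n_tokens, dim)."""
    a, b = x[0::2].clone(), x[1::2].clone()  # alternate tokens into sets A and B
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T  # cosine similarity
    best_sim, best_dst = sim.max(dim=-1)     # best partner in B for each A token
    order = best_sim.argsort(descending=True)
    src, keep = order[:r], order[r:]         # merge away the r most similar A tokens
    b[best_dst[src]] = (b[best_dst[src]] + a[src]) / 2  # unweighted average merge
    return torch.cat([a[keep], b], dim=0)    # n_tokens - r tokens remain
```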
Differentially Private Diffusion Models
abs: https://t.co/IW2WU2ega5
project page: https://t.co/3gxE40jRu6 pic.twitter.com/HAVldjJDqG
— AK (@_akhaliq) October 19, 2022
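Diffusion models are typically made differentially private by training with DP-SGD: clip each example's gradient to bound its influence, then add calibrated Gaussian noise before the parameter update. A schematic, deliberately slow per-example sketch of one such step:

```python
import torch

def dp_sgd_step(model, loss_fn, batch, clip_norm=1.0, noise_mult=1.0, lr=1e-4):
    """One DP-SGD step: clip each example's gradient to clip_norm, then
    add Gaussian noise to the summed gradient. Illustrative only; real
    implementations vectorize the per-example gradients."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for example in batch:
        model.zero_grad()
        loss_fn(model, example).backward()     # gradient for one example
        total = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        scale = min(1.0, clip_norm / (total.item() + 1e-6))
        for s, p in zip(summed, params):
            s.add_(p.grad, alpha=scale)        # clipped contribution
    with torch.no_grad():
        for s, p in zip(summed, params):
            s += torch.randn_like(s) * noise_mult * clip_norm  # privacy noise
            p -= lr * s / len(batch)           # noisy average gradient step
```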
UniTune: Text-Driven Image Editing by Fine Tuning an Image Generation Model on a Single Image
abs: https://t.co/AU8m80CjQD pic.twitter.com/A7UD1fLM9B
— AK (@_akhaliq) October 19, 2022
Model Criticism for Long-Form Text Generation (https://t.co/V59Z94D9Oy w/ Yuntian Deng & @volokuleshov)
Researchers have observed that LM likelihood doesn't directly correlate with the emergence of long-form coherence. We quantify this through a model criticism framework. 1/
— Sasha Rush (@srush_nlp) October 18, 2022
You Only Live Once: Single-Life Reinforcement Learning
abs: https://t.co/PG3tqv89DA pic.twitter.com/WrwzfA2Bg8
— AK (@_akhaliq) October 18, 2022
Table-To-Text generation and pre-training with TabT5
abs: https://t.co/YyZ9hUexVx pic.twitter.com/lkjdnMo1He
— AK (@_akhaliq) October 18, 2022
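To feed tables into a T5 encoder, TabT5 first has to flatten them into token sequences. A hypothetical linearization to illustrate the idea; the paper's actual encoding and any special tokens may differ:

```python
def linearize_table(title: str, header: list[str], rows: list[list[str]]) -> str:
    """Hypothetical table flattening for a seq-to-seq model: emit the
    title, then each cell as 'column: value'."""
    parts = [f"title: {title}"]
    for row in rows:
        parts.extend(f"{col}: {val}" for col, val in zip(header, row))
    return " | ".join(parts)

print(linearize_table(
    "Medal table",
    ["country", "gold"],
    [["Norway", "16"], ["Germany", "12"]],
))
# title: Medal table | country: Norway | gold: 16 | country: Germany | gold: 12
```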