Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels
— AK (@ak92501) January 14, 2021
pdf: https://t.co/eb1qvqaO7I
abs: https://t.co/rw45sireIH
github: https://t.co/C6Po5bcKKt pic.twitter.com/KFKVBEbsfA
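The gist of the re-labeling recipe: replace each image's single global label with a dense label map from a strong teacher, then supervise every random crop with the soft labels pooled under that crop. A minimal sketch of that training signal (my own names and shapes, not the authors' code):

```python
# Rough sketch of localized multi-label supervision, assuming a precomputed
# dense label map per image (C x H x W of class scores) from a teacher model.
import torch
import torch.nn.functional as F

def crop_soft_label(label_map, crop_box):
    """Pool the dense label map over a crop region to get a soft multi-label
    vector for that specific crop."""
    x0, y0, x1, y1 = crop_box                    # crop coordinates in label-map space
    region = label_map[:, y0:y1, x0:x1]          # scores under the crop
    soft_label = region.mean(dim=(1, 2))         # average over the cropped area
    return F.softmax(soft_label, dim=0)          # normalize to a distribution

def soft_ce_loss(logits, soft_labels):
    """Cross-entropy against soft, crop-specific targets instead of one global class id."""
    return -(soft_labels * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```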
Excited to announce that our Deep Bootstrap framework for understanding generalization in deep learning has been accepted @iclr_conf! #ICLR2021 https://t.co/LhcK6VL6zP
— Hanie Sedghi (@HanieSedghi) January 13, 2021
Overview of the amazing progress in deep learning and medical computer vision. Published in @nature #digitalmedicine with the great @AndreEsteva, @katherinechou, @syeung10, @nikhil_ai, @thisismadani, @samottaghi, @yun_liu, @EricTopol, @JeffDean https://t.co/7NYRIQlt5N pic.twitter.com/4GamYwRJOd
— Richard Socher (@RichardSocher) January 12, 2021
Differentiable Vector Graphics Rasterization for Editing and Learning (SIGGRAPH Asia 2020)
— hardmaru (@hardmaru) January 12, 2021
Nice work that allows backpropagation through an image rasterizer, so we can apply the goodies that work on pixel images to vector graphics. https://t.co/s6ooUYKsqn https://t.co/9jMZY7lPdS pic.twitter.com/TBEZYI3cVQ
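The payoff of a differentiable rasterizer is that pixel-space losses can be backpropagated straight into the vector parameters. A toy illustration of the idea, using a hand-rolled soft circle rather than the paper's actual rasterizer:

```python
# Toy differentiable rasterization: render a circle as a soft coverage mask and
# fit its vector parameters (center, radius) to a target image by gradient descent.
import torch

def soft_rasterize_circle(cx, cy, r, size=64, sharpness=20.0):
    """Sigmoid of the signed distance to the circle boundary; smooth, so the
    rendered pixels are differentiable w.r.t. the shape parameters."""
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing="ij")
    dist = torch.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) - r
    return torch.sigmoid(-sharpness * dist)

target = soft_rasterize_circle(torch.tensor(40.0), torch.tensor(30.0), torch.tensor(12.0))
params = torch.tensor([20.0, 20.0, 5.0], requires_grad=True)   # cx, cy, r (initial guess)
opt = torch.optim.Adam([params], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    img = soft_rasterize_circle(params[0], params[1], params[2])
    loss = ((img - target) ** 2).mean()   # pixel loss flows back into the vector parameters
    loss.backward()
    opt.step()
```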
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
— AK (@ak92501) January 12, 2021
pdf: https://t.co/0i6fcOuy4X
abs: https://t.co/AUKgennqZy
github: https://t.co/8QD4sJ2ckE pic.twitter.com/iDPDXj4bRR
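Switch layers replace a dense feed-forward block with many experts and a router that sends each token to exactly one of them. A rough sketch of that top-1 routing (a simplified stand-in, not the released implementation, and without the capacity limits and load-balancing loss the real model uses):

```python
# Sketch of Switch-style top-1 expert routing: each token goes to a single expert,
# and the expert output is scaled by the router probability so the router gets gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFFN(nn.Module):
    def __init__(self, d_model, d_ff, num_experts):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                           # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)   # router distribution over experts
        expert_idx = probs.argmax(dim=-1)           # top-1 expert per token
        gate = probs.gather(-1, expert_idx.unsqueeze(-1))  # prob of the chosen expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):   # dispatch each token to its expert
            mask = expert_idx == i
            if mask.any():
                out[mask] = expert(x[mask])
        return gate * out                           # scale by router prob

layer = SwitchFFN(d_model=16, d_ff=32, num_experts=4)
y = layer(torch.randn(10, 16))
```

Because only one expert runs per token, parameter count grows with the number of experts while per-token compute stays roughly constant.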
Offline model-based RL for goal reaching: learn a distance "Q-like" function from offline data, and a video prediction model, then use them to accomplish visually indicated goals.
— Sergey Levine (@svlevine) January 10, 2021
w/ Stephen Tian et al. https://t.co/pmXL8fGHXv https://t.co/x9XXI7PN06
🧵> pic.twitter.com/G3t23nBWXo
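The planning loop the tweet describes can be sketched as random-shooting model-predictive control: imagine candidate action sequences with the video model and keep the one whose predicted outcome is closest to the goal image under the learned distance. The interfaces below (`video_model`, `distance_fn`) are assumptions, not the paper's API:

```python
# Schematic planner: sample action sequences, roll them out in the learned video
# model, score predicted outcomes with the learned distance to the goal image.
import torch

def plan_actions(video_model, distance_fn, current_frame, goal_frame,
                 horizon=10, num_samples=256, action_dim=4):
    """video_model(frames, actions) -> predicted final frames (assumed signature);
    distance_fn(frames, goals) -> per-sample learned distance (lower = closer)."""
    actions = torch.randn(num_samples, horizon, action_dim)           # candidate action sequences
    frames = current_frame.expand(num_samples, *current_frame.shape)  # start every rollout here
    with torch.no_grad():
        predicted = video_model(frames, actions)                      # imagined outcomes
        scores = distance_fn(predicted, goal_frame.expand_as(predicted))
    best = scores.argmin()                                            # closest predicted outcome
    return actions[best, 0]   # execute the first action, then replan (MPC-style)
```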
IIUC this "new technique" from Facebook is actually just a slight repackaging of bits of @wightmanr's brilliant timm library.
— Jeremy Howard (@jeremyphoward) January 6, 2021
Great they wrote a paper that documented how well it works, but they should at *minimum* have cited timm, and really should have made him senior author https://t.co/53hVEjGRlG
🔥New Video🔥 EVERYBODY is talking about @OpenAI's new DALL·E model 👀 It takes any piece of text and turns it into an image, absolutely crazy 😱 Watch the video to learn more 💪 https://t.co/qB3n6R9zo0 #DALLE @ilyasut @_jongwook_kim @MikhailPavlov5 @gabeeegoooh @scottgray76 pic.twitter.com/4EwRp5wShp
— Yannic Kilcher, the YouTube guy (@ykilcher) January 6, 2021
An NN takes a list of category names, and outputs (in a zero-shot manner) a visual classifier.
— Ilya Sutskever (@ilyasut) January 5, 2021
It beats RN50 on ImageNet zero-shot, while being far more robust to unusual images: https://t.co/dGdtFlbI7G pic.twitter.com/tjYdMfFcfe
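The zero-shot trick is that the text embeddings of the category names act directly as the classifier's weight matrix. A minimal sketch, with `text_encoder` and `image_encoder` standing in for the actual model:

```python
# Zero-shot classification sketch: category names -> text embeddings -> classifier weights.
import torch
import torch.nn.functional as F

def build_zero_shot_classifier(class_names, text_encoder):
    """Embed a prompt per category; the stacked, normalized embeddings are the
    weight matrix of a visual classifier that was never trained on these classes."""
    prompts = [f"a photo of a {name}" for name in class_names]
    return F.normalize(text_encoder(prompts), dim=-1)      # (num_classes, d)

def classify(image, image_encoder, class_weights):
    feat = F.normalize(image_encoder(image), dim=-1)        # (1, d) image embedding
    logits = feat @ class_weights.T                         # cosine similarity per class
    return logits.argmax(dim=-1)                            # predicted class index
```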
Impressive and surprising https://t.co/9QpeWbhQFU I use those words sparingly pic.twitter.com/X0TC6h4fh2
— Andrej Karpathy (@karpathy) January 5, 2021
Amazing to see progress in adapting language models to unseen languages, like Tibetan script. https://t.co/sM8orQshL9 https://t.co/bgduV56Wtq
— hardmaru (@hardmaru) January 4, 2021
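One common recipe for this kind of adaptation (a generic sketch, not necessarily the linked work's method) is to extend the tokenizer with the new script, resize the embedding matrix, and continue pretraining on text in that language:

```python
# Generic vocabulary-extension recipe for an unseen script, using Hugging Face
# transformers; the specific tokens below are illustrative only.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

new_tokens = ["བོད", "ཡིག"]                        # example Tibetan-script tokens
num_added = tokenizer.add_tokens(new_tokens)       # extend the vocabulary
model.resize_token_embeddings(len(tokenizer))      # grow the embedding matrix to match
# ...then continue masked-language-model pretraining on Tibetan text so the new
# embeddings (and the rest of the model) adapt to the unseen script.
```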