xFormers: Hackable and optimized Transformers building blocks, supporting a composable construction
— AK (@ak92501) October 20, 2021
github: https://t.co/HcAKAU2Mai pic.twitter.com/ZXTjcs70XK
Boosting Contrastive Self-Supervised Learning with False Negative Cancellation
— AK (@ak92501) October 19, 2021
abs: https://t.co/hrLNic6CRL
github: https://t.co/FwpPAPxAhp pic.twitter.com/BNLRIZImul
T0 outperforms GPT3 on 9 out of 11 benchmarks despite being 16x smaller https://t.co/llLydO6euk pic.twitter.com/9cOg4zoGIZ
— Mark Riedl is a Metaverse Company (@mark_riedl) October 19, 2021
HRFormer: High-Resolution Transformer for Dense Prediction
— AK (@ak92501) October 19, 2021
abs: https://t.co/WuCrhSHWU3
github: https://t.co/tNfI7Ba1Go pic.twitter.com/Fa1n1k4eNt
Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation
— AK (@ak92501) October 18, 2021
abs: https://t.co/mFJURTVAHN
show that patch-based negative augmentation consistently improves robustness of ViTs across a wide set of ImageNet-based robustness benchmarks pic.twitter.com/CUQrtfdzxe
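A minimal sketch of what a patch-based negative augmentation could look like, under the assumption that negatives are built by randomly shuffling image patches so global structure is destroyed while local patch statistics are preserved; the function name and default patch size are illustrative, not taken from the paper.

```python
import torch

def patch_shuffle_negative(images: torch.Tensor, patch_size: int = 16) -> torch.Tensor:
    """Build a 'negative' view of each image by randomly permuting its patches.

    Assumption: the negative augmentation destroys global structure while
    keeping local patch statistics; this is one plausible instantiation.
    images: (B, C, H, W) with H and W divisible by patch_size.
    """
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    # Split into non-overlapping patches: (B, num_patches, C, p, p)
    patches = (
        images.reshape(b, c, gh, patch_size, gw, patch_size)
        .permute(0, 2, 4, 1, 3, 5)
        .reshape(b, gh * gw, c, patch_size, patch_size)
    )
    # Independently permute the patch order of every image in the batch.
    perm = torch.argsort(torch.rand(b, gh * gw), dim=1)
    patches = patches[torch.arange(b).unsqueeze(1), perm]
    # Reassemble the shuffled patches back into full images.
    return (
        patches.reshape(b, gh, gw, c, patch_size, patch_size)
        .permute(0, 3, 1, 4, 2, 5)
        .reshape(b, c, h, w)
    )
```

In training, such shuffled views would be treated as negatives (images the model should not classify like the original), which is what the paper reports improves ViT robustness.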
1/ After a year of work, our paper on mRNA Degradation is finally out!
— Bojan Tunguz (@tunguz) October 16, 2021
paper: https://t.co/s63ik0c3Ey
code: https://t.co/UWIPSbOvHH pic.twitter.com/ooT2wvvuah
Introducing a minimalist and effective approach for vision language model pre-training that learns a single representation from both visual and language inputs and efficiently leverages scaled datasets to achieve state-of-the-art performance. Learn more ↓ https://t.co/U9DY2CZbqR
— Google AI (@GoogleAI) October 15, 2021
Hey! Got 40 seconds? ⏱️ Learn how we achieve photorealistic reposing and virtual try-on in the upcoming SIGGRAPH Asia paper *Pose with Style*. 🤩
— Jia-Bin Huang (@jbhuang0604) October 15, 2021
Paper: https://t.co/UtVyBn8eA3
Web: https://t.co/b55nH3SSDB
Brought to you by the amazing @BadourAlBahar! pic.twitter.com/Xf9jYFaOY5
bert2BERT: Towards Reusable Pretrained Language Models
— AK (@ak92501) October 15, 2021
abs: https://t.co/x7Six076zh pic.twitter.com/9ZJNvpjf8k
Symbolic Knowledge Distillation: from General Language Models to Commonsense Models
— AK (@ak92501) October 15, 2021
abs: https://t.co/tvnpkIUywh
symbolic knowledge distillation, a model-to-corpus-to-model pipeline for commonsense that does not require human-authored knowledge; instead, it uses machine generation pic.twitter.com/XDjvNABTUF
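As a rough illustration of the model-to-corpus-to-model idea, the hedged sketch below has a large teacher LM generate candidate commonsense completions and a simple filter keep the plausible ones, leaving student fine-tuning as a comment; the prompt, the "gpt2" stand-in teacher, and the quality_filter heuristic are placeholders, not the paper's actual pipeline.

```python
from transformers import pipeline

# Teacher: a large generative LM prompted to emit commonsense statements.
# "gpt2" is only a stand-in for whatever large model plays the teacher role.
teacher = pipeline("text-generation", model="gpt2")

PROMPT = "PersonX goes to the gym. As a result, PersonX feels"

def generate_candidates(prompt: str, n: int = 8) -> list[str]:
    """Model -> corpus: sample candidate commonsense completions from the teacher."""
    outputs = teacher(prompt, num_return_sequences=n, max_new_tokens=12, do_sample=True)
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

def quality_filter(candidates: list[str]) -> list[str]:
    """Placeholder critic: the paper trains a filter to score candidates;
    here we simply drop empty or overly long generations."""
    return [c for c in candidates if 0 < len(c.split()) <= 10]

corpus = quality_filter(generate_candidates(PROMPT))
# Corpus -> model: the filtered statements would then be used to fine-tune a
# smaller student commonsense model (fine-tuning loop omitted in this sketch).
print(corpus)
```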
Active Learning for Deep Object Detection via Probabilistic Modeling
— AK (@ak92501) October 14, 2021
abs: https://t.co/u5x9EA2tgZ
github: https://t.co/SRylg7UWOY pic.twitter.com/MjLgaDsK4E
ML can help us build a black box system, but can also help us analyze a black box system. @LeonYin went above & beyond to use random forests to analyze Amazon's product ranking system.
— Kyunghyun Cho (@kchonyc) October 14, 2021
insightful! https://t.co/N6BFzhlxgo