ByteTrack: Multi-Object Tracking by Associating Every Detection Box
— AK (@ak92501) October 14, 2021
abs: https://t.co/x2es6mhWh4
github: https://t.co/AuNaKamE9s @XinggangWang
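The core idea named in the title (associating *every* detection box, not just high-confidence ones) can be sketched as a two-stage association: match high-score boxes to tracks first, then try the low-score boxes against the tracks that remain unmatched. This is a minimal greedy-IoU illustration under my own naming, not the paper's actual pipeline (which uses Kalman-filter motion prediction and Hungarian matching):

```python
def iou(a, b):
    """IoU between two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def byte_associate(tracks, dets, scores, high_thresh=0.6, iou_thresh=0.3):
    """Two-stage association sketch: high-score detections are matched
    first; low-score detections are then matched only against tracks
    that are still unmatched, so occluded or blurred objects are not
    simply discarded."""
    high = [i for i, s in enumerate(scores) if s >= high_thresh]
    low = [i for i, s in enumerate(scores) if s < high_thresh]
    matches, unmatched_tracks = [], list(range(len(tracks)))
    for det_pool in (high, low):          # stage 1: high, stage 2: low
        for d in det_pool:
            best, best_iou = None, iou_thresh
            for t in unmatched_tracks:    # greedy nearest-by-IoU track
                ov = iou(tracks[t], dets[d])
                if ov > best_iou:
                    best, best_iou = t, ov
            if best is not None:
                matches.append((best, d))
                unmatched_tracks.remove(best)
    return matches, unmatched_tracks
```

In this sketch a low-score box (e.g. a partially occluded pedestrian) can still keep its track alive as long as it overlaps an otherwise-unmatched track well enough.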
Dict-BERT: Enhancing Language Model Pre-training with Dictionary
— AK (@ak92501) October 14, 2021
abs: https://t.co/VB3gOgRRii
Object-Region Video Transformers
— AK (@ak92501) October 14, 2021
abs: https://t.co/X0dS3HSYaT
project page: https://t.co/yDAErxp4zE
An object-centric approach that extends video transformer layers with a block that directly incorporates object representations.
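One plausible reading of "a block that directly incorporates object representations" is cross-attention from video patch tokens to object tokens, added residually inside a transformer layer. The sketch below is my own minimal NumPy illustration of that idea, not the paper's actual block design:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def object_region_block(patch_tokens, object_tokens):
    """Hypothetical object-incorporation block: patch tokens (queries)
    cross-attend to object-region tokens (keys/values), so each video
    patch can read object-level information; the result is added back
    residually. Shapes: patch_tokens [N, d], object_tokens [M, d]."""
    d = patch_tokens.shape[-1]
    attn = softmax(patch_tokens @ object_tokens.T / np.sqrt(d), axis=-1)
    return patch_tokens + attn @ object_tokens  # residual update, [N, d]
```

The output keeps the patch-token shape, so such a block could in principle be dropped between existing video transformer layers.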
Dynamic Inference with Neural Interpreters
— AK (@ak92501) October 14, 2021
abs: https://t.co/GoZJdmPn0K
Medical image classification models often pre-train on natural image datasets. Today, we present alternative approaches that use additional pre-training on medical images, along with metadata-based data augmentation, to significantly improve performance. https://t.co/DHy0XojZwm
— Google AI (@GoogleAI) October 13, 2021
Data-free Knowledge Distillation for Object Detection
— AK (@ak92501) October 13, 2021
pdf: https://t.co/y6ZqK9jenm
github: https://t.co/ncFsFtIsTX
Distilling Image Classifiers in Object Detectors
— AK (@ak92501) October 13, 2021
abs: https://t.co/yVJW5hQOk5
github: https://t.co/8JVvrROA1f
TAda! Temporally-Adaptive Convolutions for Video Understanding
— AK (@ak92501) October 13, 2021
abs: https://t.co/uv6Jqnu580
Causal ImageNet: How to discover spurious features in Deep Learning?
— AK (@ak92501) October 12, 2021
abs: https://t.co/NvelZA7Aa4
Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning
— AK (@ak92501) October 12, 2021
abs: https://t.co/2bT9if0KTH
A singleton language model with 245B parameters that achieves state-of-the-art results on natural language processing tasks, trained on a high-quality 5TB Chinese corpus.
CLIP-Adapter: Better Vision-Language Models with Feature Adapters
— AK (@ak92501) October 12, 2021
abs: https://t.co/keCbjWjil8
A Few More Examples May Be Worth Billions of Parameters
— AK (@ak92501) October 12, 2021
abs: https://t.co/UaR7ANxkzq
github: https://t.co/U00DR6CMn5