Blender Bot 2.0: An open source chatbot that builds long-term memory and searches the internet
— Kyunghyun Cho (@kchonyc) July 16, 2021
very cool! https://t.co/D15jBGCBNT
Per-Pixel Classification is Not All You Need for Semantic Segmentation
— AK (@ak92501) July 14, 2021
pdf: https://t.co/lG6ZYV8XBp
github: https://t.co/bXqZ6pR3Fb
outperforms the current state of the art on both semantic segmentation (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) pic.twitter.com/RqNInMJQOm
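The title contrasts per-pixel classification with mask classification. As a rough illustration of the latter, here is a minimal sketch of how a set of predicted masks and per-mask class scores can be fused into a semantic map; the function name and shapes are my assumptions, not the authors' code.

```python
import torch

def mask_classification_to_semseg(class_logits, mask_logits):
    """Combine N mask predictions into a semantic segmentation map.

    class_logits: (N, K+1) per-mask class scores, last index = "no object"
    mask_logits:  (N, H, W) per-mask binary mask logits
    Returns: (K, H, W) per-class scores; argmax over K gives the label map.
    """
    class_probs = class_logits.softmax(dim=-1)[:, :-1]  # (N, K), drop "no object"
    mask_probs = mask_logits.sigmoid()                  # (N, H, W)
    # Each pixel's class score is a mask-weighted sum over the N queries.
    return torch.einsum("nk,nhw->khw", class_probs, mask_probs)

# Example: 100 mask queries, 150 ADE20K classes, 64x64 output
semseg = mask_classification_to_semseg(torch.randn(100, 151), torch.randn(100, 64, 64))
label_map = semseg.argmax(dim=0)  # (64, 64)
```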
Rethinking Positional Encoding
— AK (@ak92501) July 7, 2021
pdf: https://t.co/J6wrROIYLk
abs: https://t.co/eIRZEaAXg7
github: https://t.co/FHxaSYLt0m pic.twitter.com/I2nBemH2b3
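The tweet gives no details of the paper's proposal, so for context here is the standard sinusoidal encoding that work on positional encoding typically revisits; this is the classic Transformer baseline, not this paper's method.

```python
import torch

def sinusoidal_positional_encoding(num_positions, dim):
    """Standard sinusoidal encoding (Vaswani et al., 2017), shown as the
    baseline such papers revisit -- not this paper's proposal."""
    position = torch.arange(num_positions).unsqueeze(1)  # (P, 1)
    freqs = torch.exp(torch.arange(0, dim, 2) * (-torch.log(torch.tensor(10000.0)) / dim))
    pe = torch.zeros(num_positions, dim)
    pe[:, 0::2] = torch.sin(position * freqs)
    pe[:, 1::2] = torch.cos(position * freqs)
    return pe  # (P, dim), added to the token embeddings

pe = sinusoidal_positional_encoding(128, 64)
```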
Global Filter Networks for Image Classification
— AK (@ak92501) July 2, 2021
pdf: https://t.co/dIeGFqtllM
abs: https://t.co/48uTA872An
project page: https://t.co/LyAIupelxl
github: https://t.co/0BcTRgg4pJ pic.twitter.com/aVds2AwhCC
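As I understand the paper, GFNet replaces self-attention with token mixing in the Fourier domain: a 2D FFT, an elementwise learnable filter, and an inverse FFT. A minimal sketch of that idea, with shapes and initialization as assumptions rather than the released code:

```python
import torch

class GlobalFilter(torch.nn.Module):
    """Token mixing in the frequency domain: FFT -> learnable filter -> inverse FFT."""
    def __init__(self, h, w, dim):
        super().__init__()
        # Complex-valued filter over the rFFT2 grid (stored as two real channels).
        self.filter = torch.nn.Parameter(torch.randn(h, w // 2 + 1, dim, 2) * 0.02)

    def forward(self, x):  # x: (B, H, W, C) spatial tokens
        freq = torch.fft.rfft2(x, dim=(1, 2), norm="ortho")
        freq = freq * torch.view_as_complex(self.filter)
        return torch.fft.irfft2(freq, s=x.shape[1:3], dim=(1, 2), norm="ortho")

mixer = GlobalFilter(14, 14, 384)
out = mixer(torch.randn(2, 14, 14, 384))  # global mixing in O(N log N), no attention
```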
AutoFormer: Searching Transformers for Visual Recognition
— AK (@ak92501) July 2, 2021
pdf: https://t.co/BfcLzNpd2I
abs: https://t.co/pFSpFDrBOZ
github: https://t.co/SBeDmRhmET
AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively pic.twitter.com/kC8DykvoiM
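AutoFormer is a one-shot NAS method: candidate sub-transformers are sampled from a weight-sharing supernet. A toy sketch of that sampling, with illustrative search ranges and a simplified slicing scheme (both assumptions, not the paper's exact setup):

```python
import random
import torch

# Illustrative search space; the paper searches embed dim, depth, heads, MLP ratio.
SEARCH_SPACE = {
    "embed_dim": [192, 216, 240],
    "depth": [12, 13, 14],
    "num_heads": [3, 4],
    "mlp_ratio": [3.5, 4.0],
}

def sample_subnet():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

# "Weight entanglement": a subnet's linear layer is a slice of the supernet's weights,
# so all sampled architectures train the same underlying parameters.
supernet_weight = torch.randn(max(SEARCH_SPACE["embed_dim"]),
                              max(SEARCH_SPACE["embed_dim"]))
cfg = sample_subnet()
subnet_weight = supernet_weight[:cfg["embed_dim"], :cfg["embed_dim"]]
print(cfg, subnet_weight.shape)
```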
CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders
— AK (@ak92501) June 29, 2021
pdf: https://t.co/q7hsXvs6bY
abs: https://t.co/mHP3ZYCYhi
colab: https://t.co/igsVWKntOq pic.twitter.com/uEANOUp5E6
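CLIPDraw optimizes drawing parameters so the rendered image matches a text prompt under CLIP. A self-contained sketch of that loop, using raw pixels in place of the paper's differentiable vector-stroke rasterizer (an assumption made here only to keep the example runnable):

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
text_feat = model.encode_text(
    clip.tokenize(["a watercolor painting of a cat"]).to(device)).detach()

# The paper optimizes Bezier stroke parameters through a differentiable
# rasterizer; a raw pixel canvas stands in for that renderer here.
canvas = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([canvas], lr=0.05)
for step in range(100):
    image_feat = model.encode_image(canvas.clamp(0, 1))
    loss = -torch.cosine_similarity(image_feat, text_feat).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```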
Single Image Texture Translation for Data Augmentation
— AK (@ak92501) June 28, 2021
pdf: https://t.co/pAWU2Kn0PL
abs: https://t.co/krjNbyN8em
project page: https://t.co/h1xKObA9Do pic.twitter.com/8fya7pYEh8
Revisiting Deep Learning Models for Tabular Data
— AK (@ak92501) June 23, 2021
pdf: https://t.co/G4J8TRyRBt
abs: https://t.co/tMUhp2IZwW
github: https://t.co/qeue47mnWD
proposes an attention-based architecture that outperforms ResNet on many tasks pic.twitter.com/aBCZ3dD4VV
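The paper's proposed model (FT-Transformer) turns each tabular feature, numerical or categorical, into a token and runs a standard Transformer encoder over them, predicting from a [CLS] token. A minimal sketch under that reading; sizes and layer counts are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TabularTransformer(nn.Module):
    def __init__(self, num_numeric, cat_cardinalities, dim=64, n_classes=2):
        super().__init__()
        # Each numeric feature gets a learned per-feature scale and bias token.
        self.num_weight = nn.Parameter(torch.randn(num_numeric, dim))
        self.num_bias = nn.Parameter(torch.randn(num_numeric, dim))
        self.cat_embeds = nn.ModuleList(nn.Embedding(c, dim) for c in cat_cardinalities)
        self.cls = nn.Parameter(torch.randn(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x_num, x_cat):
        tokens = [x_num.unsqueeze(-1) * self.num_weight + self.num_bias]  # (B, Fn, d)
        tokens += [e(x_cat[:, i]).unsqueeze(1) for i, e in enumerate(self.cat_embeds)]
        tokens = torch.cat([self.cls.expand(x_num.size(0), -1, -1)] + tokens, dim=1)
        return self.head(self.encoder(tokens)[:, 0])  # predict from the [CLS] token

model = TabularTransformer(num_numeric=3, cat_cardinalities=[5, 10])
logits = model(torch.randn(8, 3), torch.randint(0, 5, (8, 2)))
```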
"Paraphrastic representations at scale" is a strong, blazing fast package for sentence embeddings by @johnwieting2
— Graham Neubig (@gneubig) June 22, 2021
Paper: https://t.co/LlHbH788b2
Code: https://t.co/91XR75PDG9
Beats Sentence-BERT, LASER, USE on STS tasks, works multilingually, and is up to 6,000 times faster 😯 pic.twitter.com/julWhnjyYf
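As I understand it, the package's speed comes from simple pooling of token embeddings rather than a full Transformer pass at inference time. A generic toy sketch of embedding-averaging similarity; the vocabulary, tokenizer, and dimensions are stand-ins, not the package's actual components:

```python
import torch

vocab = {"the": 0, "cat": 1, "sat": 2, "a": 3, "feline": 4, "rested": 5}
emb = torch.nn.Embedding(len(vocab), 300)

def embed(sentence):
    ids = torch.tensor([vocab[w] for w in sentence.split() if w in vocab])
    return emb(ids).mean(dim=0)  # mean-pool token vectors -> sentence vector

sim = torch.cosine_similarity(embed("the cat sat"), embed("a feline rested"), dim=0)
print(float(sim))
```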
We're open-sourcing XCiT, a new Transformer-based #computervision model with linear (not quadratic) complexity. XCiT, created in partnership w/ @inria researchers, processes high-res images extremely efficiently & delivers strong performance. Code & models https://t.co/7aRHfNxOp6 pic.twitter.com/ZsYgEmfM7R
— Facebook AI (@facebookai) June 18, 2021
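The linear complexity comes from cross-covariance attention: attending over the d feature channels instead of the N tokens, so the attention map is d x d and cost grows linearly with the number of tokens. A minimal sketch of that operation as described in the paper, with multi-head splitting and the learned temperature omitted:

```python
import torch
import torch.nn.functional as F

def cross_covariance_attention(q, k, v, temperature=1.0):
    """q, k, v: (B, N, d). Returns (B, N, d)."""
    q = F.normalize(q, dim=1)  # L2-normalize each channel along the token axis
    k = F.normalize(k, dim=1)
    attn = (k.transpose(1, 2) @ q) * temperature  # (B, d, d) cross-covariance
    return v @ attn.softmax(dim=-1)               # cost linear in N

out = cross_covariance_attention(*(torch.randn(2, 196, 64) for _ in range(3)))
```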
HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by @facebookai in @PyTorch on @Gradio Hub using @huggingface models
— AK (@ak92501) June 16, 2021
paper: https://t.co/OzIO3tL351
github: https://t.co/g691cqN7Fb
gradio demo: https://t.co/46qkxR59n4 pic.twitter.com/G79R9z2X6X
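A minimal sketch of pulling HuBERT speech representations through a Hugging Face checkpoint; the model id is my assumption of the released base checkpoint, and input normalization is skipped for brevity:

```python
import torch
from transformers import HubertModel

model = HubertModel.from_pretrained("facebook/hubert-base-ls960")
waveform = torch.randn(1, 16000)  # 1 second of 16 kHz audio (random stand-in)
with torch.no_grad():
    hidden = model(input_values=waveform).last_hidden_state  # (1, frames, 768)
print(hidden.shape)
```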
Keep CALM and Improve Visual Feature Attribution
— AK (@ak92501) June 16, 2021
pdf: https://t.co/I90yEPbjse
abs: https://t.co/JKIlsnNnGn
github: https://t.co/MP1F8LXvY0
identifies discriminative attributes for image classifiers more accurately than CAM and other visual attribution baselines pic.twitter.com/50DBMoVYai
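For reference, plain CAM, the baseline CALM improves on, weights the final convolutional feature maps by the classifier weights of the target class. This sketch is the standard CAM computation (Zhou et al., 2016), not CALM itself:

```python
import torch

def class_activation_map(features, fc_weight, class_idx):
    """features:  (C, H, W) last conv layer activations
    fc_weight: (num_classes, C) weights of the global-average-pool classifier
    Returns an (H, W) attribution map for class_idx."""
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], features)
    cam = cam.clamp(min=0)
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for visualization

cam = class_activation_map(torch.randn(512, 7, 7), torch.randn(1000, 512), class_idx=281)
```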