VRT: A Video Restoration Transformer
— AK (@ak92501) January 31, 2022
abs: https://t.co/Fzxk3gdL8K
github: https://t.co/ILBcaKPogC pic.twitter.com/ONK2GBENck
Training Vision Transformers with Only 2040 Images
— AK (@ak92501) January 27, 2022
abs: https://t.co/F1IH07RCzy pic.twitter.com/o21cELiURr
CodeRetriever: Unimodal and Bimodal Contrastive Learning for Code Search
— AK (@ak92501) January 27, 2022
abs: https://t.co/lrX5FKqtSc
By fine-tuning on domain- and language-specific downstream data, CodeRetriever achieves new state-of-the-art performance, with significant improvements over existing code pre-trained models pic.twitter.com/ZJDAxkljrQ
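The bimodal side of CodeRetriever's objective contrasts a code snippet against its paired documentation versus unpaired text. A minimal InfoNCE-style sketch in plain Python, with toy hand-made embeddings standing in for the paper's encoder outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(query, candidates, pos_index, temperature=0.07):
    """InfoNCE loss: negative log-softmax of the positive pair's
    similarity against all candidates in the batch."""
    sims = [cosine(query, c) / temperature for c in candidates]
    m = max(sims)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in sims]
    return -math.log(exps[pos_index] / sum(exps))

# Toy "embeddings": a code snippet, its docstring (positive), a mismatch.
code_emb = [1.0, 0.2, 0.0]
doc_emb  = [0.9, 0.3, 0.1]   # paired docstring -> should score high
neg_emb  = [0.0, 1.0, 0.8]   # unrelated text   -> should score low

loss = info_nce(code_emb, [doc_emb, neg_emb], pos_index=0)
print(f"{loss:.6f}")  # small loss: the positive dominates the softmax
```

In the paper the unimodal variant applies the same idea to code-code pairs; here only the loss shape is shown, not the model.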
Implementation of Denoising Diffusion Probabilistic Model in Pytorch https://t.co/xfNQaLjVcK #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #PyTorch
— PyTorch Best Practices (@PyTorchPractice) January 25, 2022
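The core of DDPM is the forward (noising) process, which is simple enough to sketch without the linked repository's code. A toy scalar version in plain Python, assuming the standard linear beta schedule (schedules vary by implementation):

```python
import math
import random

# Linear beta schedule from 1e-4 to 0.02 over T steps (an assumption;
# the repo may use a different schedule).
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar_t = prod_{s<=t} (1 - beta_s), precomputed once.
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

def q_sample(x0, t, rng=random):
    """Sample x_t ~ q(x_t | x_0): scale the clean value by sqrt(alpha_bar_t)
    and add Gaussian noise with variance 1 - alpha_bar_t."""
    eps = rng.gauss(0.0, 1.0)
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps, eps

random.seed(0)
x_early, _ = q_sample(1.0, t=10)    # mostly signal
x_late, _ = q_sample(1.0, t=999)    # almost pure noise
print(x_early, x_late)
```

Training then fits a network to predict `eps` from `x_t` and `t` by mean squared error, and sampling reverses the chain one step at a time.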
RePaint: Inpainting using Denoising Diffusion Probabilistic Models
— AK (@ak92501) January 25, 2022
abs: https://t.co/zDuZTlNudr pic.twitter.com/FyDZUUXF1x
Learning from One and Only One Shot
— AK (@ak92501) January 24, 2022
abs: https://t.co/6tyuja6oMK pic.twitter.com/4IAsQXkuIQ
New evidence from @Google that size isn't everything: "While model scaling alone can improve quality, it shows less improvements on safety and factual grounding"
— Gary Marcus (@GaryMarcus) January 21, 2022
Translation: making GPT-3-like models bigger makes them more fluent, but no more trustworthy. https://t.co/BUjQCP6ReZ
data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
— AK (@ak92501) January 20, 2022
blog: https://t.co/KP0OfddN3O
github: https://t.co/aNp78Sh48v pic.twitter.com/sA02jQpAt1
New work! Humans appear to learn similarly for different modalities and so should machines! data2vec uses the same self-supervised algorithm to train models for vision, speech, and NLP.
— Michael Auli (@MichaelAuli) January 20, 2022
Paper: https://t.co/bfcfP3dO1C
Blog: https://t.co/vi52LniOfv
Code: https://t.co/EVODCxOehP pic.twitter.com/98x0UxA44O
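The modality-agnostic recipe is: a teacher network, whose weights are an exponential moving average (EMA) of the student, produces latent targets from the unmasked input, and the student regresses those targets from the masked input. A toy sketch of the two distinctive pieces, the EMA update and the top-k layer-averaged target (plain Python, illustrative names only, not the released code):

```python
def ema_update(teacher, student, tau=0.999):
    """Teacher weights track the student as an exponential moving average."""
    return [tau * w_t + (1.0 - tau) * w_s for w_t, w_s in zip(teacher, student)]

def latent_target(layer_outputs, top_k=3):
    """data2vec targets: average the representations of the top-k layers."""
    top = layer_outputs[-top_k:]
    n = len(top[0])
    return [sum(layer[i] for layer in top) / top_k for i in range(n)]

# Toy "weights" and per-layer outputs at one position.
student = [0.5, -0.2, 1.0]
teacher = [0.0, 0.0, 0.0]
for _ in range(5):
    teacher = ema_update(teacher, student, tau=0.9)

layers = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
target = latent_target(layers, top_k=3)
print(teacher, target)
```

Because the targets are continuous latents rather than modality-specific discrete tokens, the same loss applies unchanged to images, audio, and text.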
A Thousand Words Are Worth More Than a Picture: Natural Language-Centric Outside-Knowledge Visual Question Answering
— AK (@ak92501) January 17, 2022
abs: https://t.co/pmciI9NUSg
The TRiG framework outperforms all state-of-the-art supervised methods by at least an 11.1% absolute margin pic.twitter.com/CZkWbL4CPO
GradMax: Growing Neural Networks using Gradient Information
— AK (@ak92501) January 14, 2022
abs: https://t.co/IyAuQffXcS
a new approach for growing neural networks that initializes newly added neurons so as to maximize the gradient norm pic.twitter.com/hDml0jMCbP
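The idea is to add neurons without changing the network's function (new outgoing weights start at zero) while choosing incoming weights so the gradient of the loss with respect to the new weights is as large as possible; in the paper this reduces to top singular vectors of a gradient matrix. A toy power-iteration sketch in plain Python, on an illustrative gradient matrix rather than anything from the paper's code:

```python
import math

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def top_right_singular_vector(G, iters=200):
    """Power iteration on G^T G: the top right singular vector of the
    gradient matrix, i.e. the fan-in direction along which the new
    neuron's weights receive the largest gradient norm."""
    Gt = transpose(G)
    v = normalize([1.0] * len(G[0]))
    for _ in range(iters):
        v = normalize(matvec(Gt, matvec(G, v)))
    return v

# Toy gradient matrix (rows: outputs, cols: candidate fan-in dims).
G = [[3.0, 0.0],
     [0.0, 1.0]]
v = top_right_singular_vector(G)
print(v)  # converges to the dominant gradient direction
```

For this diagonal example the dominant direction is the first axis, so the new neuron would be wired mostly along that input dimension.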
BigDatasetGAN: Synthesizing ImageNet with Pixel-wise Annotations
— AK (@ak92501) January 14, 2022
abs: https://t.co/bZ22mqHXEL
project page: https://t.co/hWQaDqkANr pic.twitter.com/SRH7XIIys1