SimSwap: An Efficient Framework For High Fidelity Face Swapping
— AK (@ak92501) June 14, 2021
pdf: https://t.co/l2aWTrM1CP
abs: https://t.co/ZSuDnRLUuF
github: https://t.co/deYKr8rhLY pic.twitter.com/cBuaXySkd9
Self-Damaging Contrastive Learning
— AK (@ak92501) June 8, 2021
pdf: https://t.co/wHoD6UJVoT
abs: https://t.co/cBy0btgmR0
github: https://t.co/AzLq4utoiX pic.twitter.com/DnutaOKrje
And merged to Transformers!
— Hugging Face (@huggingface) June 1, 2021
We are excited to welcome ByT5 as the first tokenizer-free model!
All available checkpoints can be accessed on the 🤗 hub here: https://t.co/WEpfzu6uMN
Demo (on master): https://t.co/2qgmt7YQO8 pic.twitter.com/m5pXe9qETU
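Since ByT5 consumes raw UTF-8 bytes instead of subword tokens, any string round-trips with no learned vocabulary. A minimal sketch of loading a released checkpoint through the standard Transformers seq2seq API (the `google/byt5-small` name is the smallest checkpoint on the hub):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the smallest released ByT5 checkpoint from the hub.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# Tokenizer-free: the "tokens" are just the UTF-8 bytes of the input
# (shifted past the special ids), so any script encodes losslessly.
ids = tokenizer("Héllo, wörld!", return_tensors="pt").input_ids
print(ids.shape)                                   # one id per byte, plus </s>
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```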
True Few-Shot Learning with Language Models
— AK (@ak92501) May 25, 2021
pdf: https://t.co/a0WvisxlJX
abs: https://t.co/7zb714iSYk
github: https://t.co/nPICjpoRuo
evaluates the few-shot ability of LMs when held-out examples are unavailable; finds that prior work significantly overestimated the true few-shot ability of LMs pic.twitter.com/k28CCi4iWk
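The implication is that prompt and hyperparameter choices must be made from the few-shot examples alone, not a large held-out set. A toy sketch of doing that selection with leave-one-out cross-validation; `score_prompt` is a hypothetical scorer (e.g. the LM's log-likelihood of the gold label), not something from the paper's code:

```python
# Choose among candidate prompt templates using only the K few-shot
# examples themselves (leave-one-out CV). `score_prompt(template,
# train, x, y)` is hypothetical: it should return the LM log-likelihood
# of label y given the prompt built from `template`, in-context
# examples `train`, and input x.
def loo_select(templates, examples, score_prompt):
    best, best_score = None, float("-inf")
    for t in templates:
        total = 0.0
        for i, (x, y) in enumerate(examples):
            train = examples[:i] + examples[i + 1:]
            total += score_prompt(t, train, x, y)
        if total > best_score:
            best, best_score = t, total
    return best
```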
Today we are announcing our work on building speech recognition models without any labeled data! wav2vec-U rivals some of the best supervised systems from only two years ago.
— Michael Auli (@MichaelAuli) May 21, 2021
Paper: https://t.co/cYzF9MGu56
Blog: https://t.co/iiGmgdnCiV
Code: https://t.co/TQ56tT0unx
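The core idea is adversarial: a light generator maps speech representations to phoneme distributions, and a discriminator tries to tell them apart from unpaired phonemized text. A heavily simplified sketch of that loop; all shapes, module sizes, and the plain GAN loss are illustrative assumptions (the real system adds k-means segmentation and several auxiliary penalties):

```python
import torch
import torch.nn as nn

# Illustrative sizes only, not the released wav2vec-U configuration.
N_PHONES, FEAT_DIM = 40, 512

# generator: segment-level speech features -> phoneme logits
generator = nn.Conv1d(FEAT_DIM, N_PHONES, kernel_size=4, padding=2)
# discriminator: phoneme distributions -> real/fake score
discriminator = nn.Conv1d(N_PHONES, 1, kernel_size=6, padding=3)

speech = torch.randn(8, FEAT_DIM, 100)              # wav2vec 2.0 features
fake = generator(speech).softmax(dim=1)             # predicted phonemes
# "real" phoneme sequences come from unpaired, phonemized text
real = torch.eye(N_PHONES)[torch.randint(N_PHONES, (8, 100))].transpose(1, 2)

d_loss = discriminator(fake).mean() - discriminator(real).mean()
g_loss = -discriminator(fake).mean()                # generator fools D
```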
CTRLsum: Towards Generic Controllable Text Summarization by @salesforce in @PyTorch on @Gradio
— AK (@ak92501) May 18, 2021
paper: https://t.co/gIsJNnpB3h
github: https://t.co/s20LBp99jj
gradio demo: https://t.co/PC9fv2j0iI pic.twitter.com/KjIH6nz5XS
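CTRLsum steers a BART-based summarizer by prepending control keywords to the source document. A sketch under two assumptions: the checkpoint path is a placeholder for whatever the GitHub link above ships, and the `keywords => source` separator follows the repo's data format:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

# Placeholder checkpoint path; CTRLsum's released weights are BART-based.
name = "path/to/ctrlsum-cnndm"
tokenizer = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name)

article = "..."                  # the document to summarize
keywords = "health costs"        # control signal steering the summary
# Keywords are prepended to the source with a separator (the "=>"
# convention here follows the repo; treat it as an assumption).
inputs = tokenizer(f"{keywords} => {article}", return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, max_length=80, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```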
Excited to share the open-source release of EfficientNetV2 code and models.
— Mingxing Tan (@tanmingxing) May 14, 2021
Paper: https://t.co/YHWEb8pHmR (accepted to ICML'21)
Code: https://t.co/97gKzgYtf2
Bonus: amazing speed on both TPUs and V100/A100 GPUs with TensorRT. https://t.co/rPDTusJWzW
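For PyTorch users, ported weights are the quickest way to try it. A sketch assuming timm's port and its `tf_efficientnetv2_s` model name (the official release above is TensorFlow):

```python
import timm
import torch

# Assumption: timm ships a port of the official TF checkpoints under
# this name; the 384px input matches EfficientNetV2-S eval settings.
model = timm.create_model("tf_efficientnetv2_s", pretrained=True).eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 384, 384))   # ImageNet-1k logits
print(logits.shape)                                # torch.Size([1, 1000])
```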
Self-Supervised Learning with Swin Transformers
— AK (@ak92501) May 11, 2021
pdf: https://t.co/uNAcu1JViD
abs: https://t.co/J70bvOq6nc
github: https://t.co/5dktODVxa9
a self-supervised learning approach called MoBY, with Vision Transformers as its backbone architecture pic.twitter.com/4yJm1AWHkh
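MoBY pairs an online encoder with a momentum (target) encoder, a design it inherits from MoCo v2 and BYOL. A sketch of just that moving-average update; `online` stands in for the Swin backbone plus projection head:

```python
import copy
import torch

# The target network is an exponential moving average of the online
# network and receives no gradients (BYOL/MoCo-style momentum design).
def make_target(online):
    target = copy.deepcopy(online)
    for p in target.parameters():
        p.requires_grad = False
    return target

@torch.no_grad()
def momentum_update(online, target, m=0.99):
    # EMA: target <- m * target + (1 - m) * online
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.data.mul_(m).add_(p_o.data, alpha=1 - m)
```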
RepVGG: Making VGG-style ConvNets Great Again
— Andrej Karpathy (@karpathy) April 11, 2021
paper: https://t.co/Y5WfgvqxHO
PyTorch code: https://t.co/ydk0RUf6JU
Spells out the benefits of very simple/uniform/fast (latency, not FLOPS) deployment architectures. A lot of complexity is often due to optimization, not architecture. pic.twitter.com/8GliE4JDiq
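The deploy-time simplicity comes from structural re-parameterization: after training, each block's 3x3, 1x1, and identity branches collapse into a single 3x3 conv. A sketch of the kernel algebra, assuming plain biased convs with matching channel counts (the paper first folds each branch's BatchNorm into its conv):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Merge a trained 3x3 branch, 1x1 branch, and identity branch into one
# 3x3 conv. Assumes bias=True and in_channels == out_channels.
def fuse_branches(conv3x3: nn.Conv2d, conv1x1: nn.Conv2d) -> nn.Conv2d:
    kernel = conv3x3.weight.detach().clone()
    kernel += F.pad(conv1x1.weight.detach(), [1, 1, 1, 1])  # 1x1 -> centered 3x3
    bias = (conv3x3.bias + conv1x1.bias).detach()
    # identity branch == a 3x3 kernel that is 1 at the center of channel i->i
    for i in range(kernel.shape[0]):
        kernel[i, i, 1, 1] += 1.0
    fused = nn.Conv2d(conv3x3.in_channels, conv3x3.out_channels, 3, padding=1)
    fused.weight.data.copy_(kernel)
    fused.bias.data.copy_(bias)
    return fused
```

The fused conv computes exactly what the three branches summed to, so inference needs only a stack of uniform 3x3 convs, which is the latency win the tweet highlights.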
Interesting reproducibility study by @philipvollet concluding that ELECTRA small (14M) can easily be run on a single GPU with great performance! @clark_kev @quocleix @chrmanning
— Thang Luong (@lmthang) April 8, 2021
Reproducibility paper: https://t.co/UAe56B74EA
ELECTRA paper: https://t.co/1GW6scqTIv https://t.co/DoPGGMf3ro
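The released small discriminator is directly usable from Transformers. A quick sketch of its replaced-token-detection head, which scores every token as original vs. replaced:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

# ELECTRA is pretrained with replaced token detection, so the
# pretraining head emits one original-vs-replaced score per token.
tok = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

inputs = tok("The quick brown fox jumps over the lazy dog", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, seq_len)
print((logits > 0).long())                   # 1 = predicted "replaced"
```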
Regularizing Generative Adversarial Networks under Limited Data
— AK (@ak92501) April 8, 2021
pdf: https://t.co/oBmu2v1yyp
abs: https://t.co/dRpZkvKnt4
github: https://t.co/OkV7RZ3xoC
trains GAN models to SOTA performance when only limited training data from the ImageNet benchmark is available pic.twitter.com/AQkCMH3J8g
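The regularizer tracks exponential moving averages of the discriminator's predictions and anchors current outputs to them, which discourages D from memorizing the small training set. A sketch of that bookkeeping; the exact pairing and weighting are assumptions to check against the released code:

```python
import torch

# Anchor current discriminator outputs to EMAs of its past predictions
# on the opposite domain (pairing here is an assumption; see the repo).
class AnchorEMA:
    def __init__(self, decay=0.99):
        self.decay, self.real, self.fake = decay, 0.0, 0.0

    def update(self, d_real, d_fake):
        d = self.decay
        self.real = d * self.real + (1 - d) * d_real.mean().item()
        self.fake = d * self.fake + (1 - d) * d_fake.mean().item()

    def penalty(self, d_real, d_fake):
        # Added to the discriminator loss with a small weight.
        return (torch.mean((d_real - self.fake) ** 2)
                + torch.mean((d_fake - self.real) ** 2))
```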
@facebookai wav2vec2 using @huggingface transformers on @GradioML hub to transcribe audio files
— AK (@ak92501) April 3, 2021
demo: https://t.co/68r3PtvffQ
paper: https://t.co/mhL9hQJ1Sd
github: https://t.co/Q1r7oxlclv pic.twitter.com/42yd0ecEYm
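Outside the demo, the same transcription runs in a few lines of Transformers; `sample.wav` is a placeholder for any 16 kHz mono recording:

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Placeholder file; the checkpoint expects 16 kHz mono audio.
speech, rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)       # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])
```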