BEIT: BERT Pre-Training of Image Transformers
— AK (@ak92501) June 16, 2021
pdf: https://t.co/WiFZIiErLt
abs: https://t.co/Ld2067ltiV
large-size BEIT obtains 86.3% top-1 accuracy using only ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%) pic.twitter.com/abMaWZ1aZ8