Tweeted By @deliprao
“Rethinking ImageNet Pre-training” from FAIR
— Delip Rao (@deliprao) November 22, 2018
TLDR: random init is just as good as “pretrain + fine-tune” if you have enough compute and data. A result we already knew, but the paper offers thorough experiments to back it up.
https://t.co/iNyIHwffMk
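For context, a minimal sketch of the two regimes the tweet contrasts, using torchvision's ResNet-50 as an assumed stand-in backbone. This is only an illustration of "pretrain + fine-tune" versus random init, not the paper's actual setup (the paper trains Mask R-CNN on COCO from scratch).

```python
# Illustrative only: the two initialization regimes compared in the tweet,
# shown with a torchvision classifier backbone (assumption; not the paper's code).
from torchvision.models import resnet50, ResNet50_Weights

# Regime 1: "pretrain + fine-tune" -- start from ImageNet-pretrained weights.
pretrained = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)

# Regime 2: random init -- same architecture, no pretrained weights.
scratch = resnet50(weights=None)

# Both models share the same architecture and parameter count; the paper's
# point is that, with enough target-task data and a long enough schedule,
# the randomly initialized model matches the fine-tuned one downstream.
for name, model in [("pretrained", pretrained), ("scratch", scratch)]:
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```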