Conventional wisdom: "Not enough data? Use classic learners (Random Forests, RBF SVM, ..), not deep nets." New paper: infinitely wide nets beat these and also beat finite nets. Infinite nets train faster than finite nets here (hint: Neural Tangent Kernel)! https://t.co/2qrGyyvCiI
— Sanjeev Arora (@prfsanjeevarora) October 7, 2019
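The "hint" in the tweet is that an infinitely wide network trained by gradient descent behaves like kernel regression with the Neural Tangent Kernel, so "training" reduces to solving a linear system. As a toy illustration (not the paper's method, which uses a convolutional NTK), here is kernel ridge regression with the closed-form NTK of a one-hidden-layer ReLU network; the dataset and regularization constant are made up for the sketch:

```python
import numpy as np

def ntk_relu_1layer(X, Z):
    """Closed-form NTK of an infinitely wide one-hidden-layer ReLU net.

    Inputs are rows of X and Z. Uses the standard arc-cosine kernel
    expressions; this is an illustrative sketch, not the paper's CNTK.
    """
    u = np.clip(
        (X / np.linalg.norm(X, axis=1, keepdims=True))
        @ (Z / np.linalg.norm(Z, axis=1, keepdims=True)).T,
        -1.0, 1.0,
    )                                          # cosine similarities
    theta = np.arccos(u)
    k0 = (np.pi - theta) / np.pi               # kernel of the ReLU derivative
    k1 = (np.sin(theta) + (np.pi - theta) * u) / np.pi  # arc-cosine kernel
    norms = np.outer(np.linalg.norm(X, axis=1), np.linalg.norm(Z, axis=1))
    return norms * (u * k0 + k1)               # NTK = <x,z>*k0 + Sigma term

# "Training" the infinite-width net = kernel ridge regression with the NTK.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 2))
y_train = np.sin(3 * X_train[:, 0]) + 0.1 * rng.normal(size=50)
X_test = rng.uniform(-1, 1, size=(20, 2))

K = ntk_relu_1layer(X_train, X_train)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), y_train)
y_pred = ntk_relu_1layer(X_test, X_train) @ alpha
print(y_pred.shape)  # (20,)
```

Because the kernel is fixed in closed form, there is no gradient descent at all: the "infinite net" is fit by one linear solve, which is why it can be faster to train than a finite network on small datasets.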