"On Empirical Comparisons of Optimizers for Deep Learning" => "As tuning effort grows without bound, more general optimizers should never underperform the ones they can approximate" https://t.co/a76zy4kce3 pic.twitter.com/hUIGMyshkC
— Sebastian Raschka (@rasbt) October 16, 2019
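
To make the "can approximate" relation concrete, here is a minimal NumPy sketch of one such inclusion (my illustration, not code from the paper, and all hyperparameter values and the quadratic objective are arbitrary demo choices): Adam with β1 set to the momentum coefficient and a very large ε reduces, up to a learning-rate rescaling and an early bias-correction transient, to SGD with heavy-ball momentum. A hyperparameter search over Adam therefore contains (approximately) every configuration of momentum SGD, which is the sense in which the more general optimizer should never lose under unbounded tuning.

```python
import numpy as np

# Simple quadratic objective f(w) = 0.5 * w^T A w, minimized at w = 0.
A = np.diag([1.0, 3.0])

def grad(w):
    return A @ w

mu = 0.9          # momentum coefficient
eps = 1e8         # huge epsilon pushes Adam toward momentum-SGD behavior
lr_sgd = 0.02
lr_adam = lr_sgd * eps / (1 - mu)   # rescale lr so effective step sizes match
beta1, beta2 = mu, 0.999
steps = 300

# SGD with heavy-ball momentum: buf <- mu * buf + g; w <- w - lr * buf
w_sgd, buf = np.array([1.0, 1.0]), np.zeros(2)
# Adam state: first/second moment estimates
w_adam, m, v = np.array([1.0, 1.0]), np.zeros(2), np.zeros(2)

for t in range(1, steps + 1):
    g = grad(w_sgd)
    buf = mu * buf + g
    w_sgd = w_sgd - lr_sgd * buf

    g = grad(w_adam)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)    # bias correction
    v_hat = v / (1 - beta2 ** t)
    # With eps >> sqrt(v_hat), the denominator is ~eps, so the update is
    # ~ (lr_adam / eps) * m_hat, i.e. a rescaled momentum step.
    w_adam = w_adam - lr_adam * m_hat / (np.sqrt(v_hat) + eps)

print("SGD+momentum final iterate:", w_sgd)
print("Adam(large eps) final iterate:", w_adam)
print("max abs difference:", np.max(np.abs(w_sgd - w_adam)))
```

Both trajectories converge to the same minimizer, and the per-step updates coincide once the bias-correction factor 1/(1 - β1^t) has decayed toward 1; the small residual gap comes from that early transient and from the sqrt(v_hat) term, both of which shrink as ε grows.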