At the same time, people also found that we can simply scale up evolution to solve for millions of weights of a modern neural net, without using backprop. This will be immensely useful if we want to use large nets for non-differentiable problems.
— hardmaru (@hardmaru) March 31, 2019
Deep GA: https://t.co/3jTmnubzdi
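For intuition, the core of this "no backprop" recipe is just a population-based search over a flat vector of network weights: mutate, evaluate, keep the best. The sketch below is a minimal truncation-selection GA in that spirit, not the Deep GA implementation itself; the `fitness_fn`, population size, mutation scale, and the toy fitness at the bottom are illustrative assumptions, and in practice the fitness would be something like an episode return from a simulator, evaluated in parallel across the population.

```python
import numpy as np

def simple_ga(fitness_fn, n_params, pop_size=256, n_elite=20,
              sigma=0.02, n_generations=300, seed=0):
    """Truncation-selection GA over a flat vector of network weights.

    No gradients are used anywhere: variation comes from Gaussian
    mutation, selection from ranking candidates by fitness.
    """
    rng = np.random.default_rng(seed)
    # start from a population of small random weight vectors
    population = rng.normal(0.0, sigma, size=(pop_size, n_params))
    best, best_score = population[0].copy(), -np.inf
    for gen in range(n_generations):
        scores = np.array([fitness_fn(theta) for theta in population])
        elite_idx = np.argsort(scores)[-n_elite:]   # indices of the top performers
        elites = population[elite_idx]
        if scores[elite_idx[-1]] > best_score:
            best, best_score = elites[-1].copy(), scores[elite_idx[-1]]
        # each child is a mutated copy of a randomly chosen elite
        parents = elites[rng.integers(0, n_elite, size=pop_size)]
        population = parents + sigma * rng.normal(size=(pop_size, n_params))
        population[0] = best                        # elitism: always keep the best so far
    return best, best_score

if __name__ == "__main__":
    # toy stand-in fitness so the example runs end to end:
    # pretend `target` is a "good" weight vector and reward closeness to it
    target = np.linspace(-1, 1, 1000)
    fitness = lambda theta: -np.sum((theta - target) ** 2)
    best, score = simple_ga(fitness, n_params=1000)
    print(f"best fitness: {score:.4f}")
```

Nothing above would change if `n_params` were a few million instead of a thousand; the cost is all in the fitness evaluations, which is exactly why this scales to large nets on non-differentiable problems when you can run many rollouts in parallel.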