Tweeted by @rctatman
Nguyen et al: really nice paper with a guiding principle for neural network architecture = at least one of your hidden layers should be wider than your input dimension or your decision region can't be disconnected (under reasonable assumptions) #icml2018 pic.twitter.com/QILCKaBSHz
— Rachael Tatman (@rctatman) July 12, 2018
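The rule of thumb quoted above can be turned into a quick architecture check. This is a minimal sketch (the function name and interface are my own, not from the paper): given an input dimension and a list of hidden-layer widths, it reports whether the network can even represent a disconnected decision region under the result Nguyen et al. describe, i.e. whether at least one hidden layer is wider than the input.

```python
def can_have_disconnected_regions(input_dim, hidden_widths):
    """Check the width condition from Nguyen et al. (ICML 2018).

    Under their assumptions, if every hidden layer has width <= input_dim,
    each class's decision region is connected. So a disconnected decision
    region requires at least one hidden layer wider than the input.
    Note: width > input_dim is necessary, not sufficient.
    """
    return any(width > input_dim for width in hidden_widths)


# A bottleneck network on 10-dimensional inputs: all hidden layers
# are narrower than the input, so its decision regions stay connected.
print(can_have_disconnected_regions(10, [8, 8, 4]))   # False

# One hidden layer (16) is wider than the input (10), so disconnected
# decision regions are at least possible.
print(can_have_disconnected_regions(10, [16, 8, 4]))  # True
```

The check is one line, but it captures the practical takeaway of the tweet: if your task plausibly needs disconnected decision regions (e.g. one class forming multiple separate clusters), make at least one hidden layer wider than the input.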