I suspect that peer review *actually causes* rather than mitigates many of the “troubling trends” recently identified by @zacharylipton and Jacob Steinhardt: https://t.co/9lduzgP5bf
— Ian Goodfellow (@goodfellow_ian) July 29, 2018
Reviewers often see papers that use empirical observations to understand how a system works, and respond with complaints that there is no new algorithm. This is easy to address by throwing a practically irrelevant new method into the paper.
— Ian Goodfellow (@goodfellow_ian) July 29, 2018
Reviewers seem to hate “science” papers, but it’s possible to sneak science in the door if you add some token amount of new method engineering.
— Ian Goodfellow (@goodfellow_ian) July 29, 2018
If we look back on many of our standard methods in ML (RNNs, dropout, word vectors, residual networks, ...), almost all had underlying theory and explanation that was both motivated by and clarified through clear experimental science. It’s insane this is still not recognized.
— Smerity (@Smerity) July 30, 2018