Sparse Networks from Scratch: Faster Training without Losing Performance
— ML Review (@ml_review) July 15, 2019
By @Tim_Dettmers @lukezettlemoyer
SoTA sparse performance on MNIST, CIFAR-10, and ImageNet
Increases training speed by up to 11.85x
ArXiv: https://t.co/UFtSGjXqLC
Code: https://t.co/88fHqSsVzN