Convolutional Networks with Adaptive Inference Graphs (ConvNet-AIG) https://t.co/SKnHTpIzwi #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) October 2, 2018
Major thanks to @timhwang & the Ethics & Governance in AI Initiative for the opportunity to collaborate w @mrtz to explore (and hopefully push past) the limits of supervised ML as a framework for addressing fair & explainable algorithms https://t.co/BI6KWWVKQf
— Zachary Lipton (@zacharylipton) October 1, 2018
Fashion-MNIST: Year In Review. Making datasets in the exact same format as MNIST encourages lazy researchers to use (and cite) your dataset! https://t.co/PFnIqxkuyP pic.twitter.com/CyuokfsKTA
— hardmaru (@hardmaru) October 1, 2018
Sanity Disclaimer: As you stare at the continuous stream of ICLR and arXiv papers, don't lose confidence or feel overwhelmed. This isn't a competition, it's a search for knowledge. You and your work are valuable and help carve out the path for progress in our field :)
— Smerity (@Smerity) October 1, 2018
#recsys2018 accepted papers with open access to PDFs are now available here: https://t.co/YGV44MJT0k #recsys
— Xavier @ #recsys2018🎗🤖🏃 (@xamat) October 1, 2018
The Sound of Pixels
By @zhaohang0124 @arouditchenko
By watching large amounts of unlabeled videos, the model learns to locate the image regions that produce sounds and to separate the input audio into a set of components representing the sound from each pixel. https://t.co/MdYAeCL4tz pic.twitter.com/R5jqhfhVER
— ML Review (@ml_review) October 1, 2018
We have fun with WGAN in https://t.co/aQsW5afov6, and I challenge students to try to get something working that's even better. Some nice results here from a student using SAGAN+D2GAN+SNGAN, showing a "woman->man" latent space :) https://t.co/2BjWF4HH01 pic.twitter.com/JdBgj9zFh5
— Jeremy Howard (@jeremyphoward) September 30, 2018
If you made a big neural net or dataset recently, let me know the # of examples / “neurons” / average # connections per neuron. I want to update my graphs of how these change over time https://t.co/Nf523Nkrsq
— Ian Goodfellow (@goodfellow_ian) September 30, 2018
Looks to be a related paper in the ICLR 2019 submissions:
"The Unreasonable Effectiveness of (Zero) Initialization in Deep Residual Learning" https://t.co/wKCzdIGC9h
— Soumith Chintala (@soumithchintala) September 30, 2018
The "Imagenet in 1 hour" paper from @priy2201 is overflowing with wonderful little tricks. Here's one of my favorites - something that many researchers are still not aware of, but really makes model training easier https://t.co/uf8TkpCkIQ pic.twitter.com/Nar2NuO3Po
— Jeremy Howard (@jeremyphoward) September 30, 2018
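The tweets above allude to a zero-initialization trick for residual networks (popularized by the "Imagenet in 1 hour" paper): initialize the final scale of each residual branch, typically the last BatchNorm's gamma, to zero, so every block starts out as an identity map and deep networks become easier to train. A minimal dependency-free sketch of the idea, with a toy stand-in for the residual branch (both `residual_block` and `toy_branch` are hypothetical names for illustration):

```python
# Sketch of the zero-init trick: in a residual block y = x + gamma * branch(x),
# setting gamma = 0 at initialization makes the block an identity map, so the
# network starts as a shallow identity stack and deepens gradually via training.

def residual_block(x, branch, gamma):
    # gamma scales the residual branch; gamma == 0.0 yields y == x exactly.
    return [xi + gamma * bi for xi, bi in zip(x, branch(x))]

def toy_branch(x):
    # Stand-in for the usual conv -> BN -> ReLU -> conv -> BN branch.
    return [2.0 * xi + 1.0 for xi in x]

x = [1.0, -3.0, 0.5]
print(residual_block(x, toy_branch, gamma=0.0))  # [1.0, -3.0, 0.5] (identity)
print(residual_block(x, toy_branch, gamma=1.0))  # [4.0, -8.0, 2.5]
```

In a PyTorch ResNet this amounts to zeroing the weight of the last BatchNorm layer in each residual block after the usual initialization pass.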
Wow, these are incredible.
Let's see how cherry picked they are :)
— Mikel Bober-Irizar (@mikb0b) September 29, 2018
A huge fan of putting this in papers (from the BigGAN paper https://t.co/HCcLIPMSZ4) pic.twitter.com/1IxLJZML4Q
— James Bradbury (@jekbradbury) September 29, 2018