Neural Painters: A learned differentiable constraint for generating brushstroke paintings. https://t.co/TkcMO3oNj8 pic.twitter.com/ab2RnSxnrP
— arxiv (@arxiv_org) April 19, 2019
My @Stanford lab just released the MRNet dataset of 1000+ annotated knee MRIs. Join our competition to develop and test your knee MRI interpreting deep learning model: https://t.co/1OwfEQuTsY @nbien2 @pranavrajpurkar @RobynLBall @jeremy_irvin16 @curtlanglotz @mattlungrenMD
— Andrew Ng (@AndrewYNg) April 12, 2019
Kaiming He's original residual network results in 2015 have not been reproduced, not even by Kaiming He himself. https://t.co/piSvPx9nDz
— /MachineLearning (@slashML) April 12, 2019
Up to 100% accuracy for cellular histo-pathology image classification… with Fast-AI and Kimia960… by Less Wright https://t.co/1KvqaJ6c1G
— Jeremy Howard (@jeremyphoward) April 5, 2019
Very nice, non-technical explanation of my NYU colleagues' work on using ConvNets for breast cancer detection in mammograms. https://t.co/FCtX5azeSS
— Yann LeCun (@ylecun) April 3, 2019
Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening https://t.co/CFq4JPrykn
— Kyunghyun Cho (@kchonyc) April 2, 2019
The Intuition behind Adversarial Attacks on Neural Networks
By @anant90
Trade-off between trainability and robustness to adversarial attacks:
Ease of optimisation has come at the cost of models that are easily misled https://t.co/cRaSJyj7q9 pic.twitter.com/pZtSaQQCNS
— ML Review (@ml_review) April 1, 2019
This is a fun application of the superres method from @fastdotai lesson 7 - turning line drawings into shaded pictures! https://t.co/wGDmgP1pih pic.twitter.com/r8kAFkvvPS
— Jeremy Howard (@jeremyphoward) March 24, 2019
I needed a good GAN to tweak for a CV+NLP project so here is a *pretrained* version of BigGAN in PyTorch
👉 pip install pytorch-pretrained-biggan
👉 https://t.co/L2RGZUWeIc
Implemented from the raw computational graph of the TF Hub module. Pretrained weights are the ones pre-trained by @ajmooch at DeepMind. Thanks Andrew!
Sweet stuff:
-AFAIK code is not public yet so here is an op-for-op implementation to read/tweak
-Checkpoints 2x smaller (no dead vars)
-Print images in terminal (icing on the cake) 👇 pic.twitter.com/GbE7cDfVcX
Should be super easy to install & use (see the image above)
— Thomas Wolf (@Thom_Wolf) March 21, 2019
"Smart, Deep Copy-Paste," Portenier et al.: https://t.co/AVO5b01Oad pic.twitter.com/4IPqZm2R8F
— Miles Brundage (@Miles_Brundage) March 19, 2019
Fun colorization project by @EmilWallner. His blog post about the project has some good pointers about how to productively read research papers and iterate on ideas: https://t.co/nTeP5GqMZn https://t.co/MLHMxzo2dC
— hardmaru (@hardmaru) March 17, 2019