Deep Exemplar-based Colorization
— ML Review (@ml_review) September 4, 2018
The first end-to-end deep learning approach to controllable colorisation.
ArXiv: https://t.co/XxcQlcIV3c
Github: https://t.co/DwEwtp3469 pic.twitter.com/znDMZcnB1L
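Like most colorization networks, the exemplar-based setup works on the idea of predicting the chroma of an image from its lightness, here guided by a reference photo. A minimal sketch of the Lab decomposition that this kind of model builds on, using skimage (illustration only, not the authors' pipeline; the file name is hypothetical):

```python
# Sketch of the Lab decomposition colorization models typically rely on
# (illustration only -- not the authors' pipeline; "target.jpg" is a placeholder).
import numpy as np
from skimage import color, io

rgb = io.imread("target.jpg") / 255.0        # H x W x 3, floats in [0, 1]
lab = color.rgb2lab(rgb)                      # convert to CIE Lab

L = lab[..., :1]        # lightness channel: the "grayscale" network input
ab = lab[..., 1:]       # chroma channels: what a colorization net predicts

# A model would map (L, reference image) -> predicted ab, then recombine:
pred_lab = np.concatenate([L, ab], axis=-1)   # here we just reuse the ground-truth ab
recolored = color.lab2rgb(pred_lab)
```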
MIT Technology Review article on Everybody Dance Now: https://t.co/CppofyBcrp
— Ian Goodfellow (@goodfellow_ian) September 3, 2018
There does not seem to be a lot of information available on training large detection models
— Radek Osmulski (@radekosmulski) September 1, 2018
Here is an overview of how I trained yolov3 with SPP on the Open Images dataset: https://t.co/lMVjzM3VOy
I am also sharing the config files and trained weights: https://t.co/KaiSkl9DtR
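For context on what that training setup involves: Darknet-style YOLO training expects one label file per image, with each box written as `class x_center y_center width height`, all normalized to [0, 1], whereas Open Images ships normalized corner coordinates. A small illustrative converter (not taken from the shared config or scripts):

```python
# Convert an Open Images box (normalized corner coordinates) to a Darknet/YOLO label line.
# Illustrative helper only -- not taken from the shared config or scripts.
def openimages_to_yolo(class_id, x_min, x_max, y_min, y_max):
    x_center = (x_min + x_max) / 2.0
    y_center = (y_min + y_max) / 2.0
    width = x_max - x_min
    height = y_max - y_min
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

print(openimages_to_yolo(0, 0.25, 0.75, 0.10, 0.60))
# -> "0 0.500000 0.350000 0.500000 0.500000"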
This is impressive work in unsupervised machine translation, which opens the door to translating into languages out of reach to current approaches due to insufficient multi-language corpus data: https://t.co/2S9siQfGMI
— Hilary Mason (@hmason) September 1, 2018
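One ingredient that makes unsupervised translation possible without parallel corpora is iterative back-translation: each direction of the model generates synthetic parallel data to train the other. A rough pseudocode sketch of that loop; `translate`, `train_step`, and the model/data names are hypothetical placeholders, not the paper's implementation:

```python
# Rough sketch of iterative back-translation for unsupervised MT.
# `model_src2tgt`, `model_tgt2src`, `translate`, and `train_step` are
# hypothetical placeholders, not the paper's actual code.
for epoch in range(num_epochs):
    for src_batch, tgt_batch in zip(mono_src, mono_tgt):
        # Generate synthetic parallel data with the current models.
        synth_tgt = translate(model_src2tgt, src_batch)   # src -> pseudo-target
        synth_src = translate(model_tgt2src, tgt_batch)   # tgt -> pseudo-source

        # Train each direction on (synthetic input, real monolingual output) pairs.
        train_step(model_src2tgt, inputs=synth_src, targets=tgt_batch)
        train_step(model_tgt2src, inputs=synth_tgt, targets=src_batch)
```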
This curated list of deep learning papers refers to 2012-2016 as “classic” and before that as “old”: https://t.co/Y16ph075TZ Not sure what they would think of Laplace’s foundational work in the 1700s
— Ian Goodfellow (@goodfellow_ian) September 1, 2018
Exciting work from a collaboration between @GoogleAI and @Harvard in @Nature on using deep learning to accurately predict earthquake aftershock locations, and also on how interpretability of the underlying model helps gain new insights into the underlying physics. https://t.co/ngnMrcOLve
— Jeff Dean (@JeffDean) August 31, 2018
https://t.co/46MisEZT5n Deep learning for predicting aftershocks of large earthquakes. Besides offering better predictions, interpretations of the model suggest promising directions for new physical theories pic.twitter.com/b4bAvo6YZG
— Ian Goodfellow (@goodfellow_ian) August 30, 2018
new preprint: FPGA Implementation of Convolutional Neural Networks with Fixed-Point Calculations https://t.co/dT9o9YZKIS #Keras & #Verilog code: https://t.co/40O5NlvsU1
— Alexandr Kalinin (@alxndrkalinin) August 30, 2018
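Fixed-point inference represents weights and activations as integers with an implicit scale (a Q-format), trading a little precision for much cheaper FPGA arithmetic. A minimal numpy sketch of that idea, not the paper's Keras/Verilog pipeline:

```python
# Minimal fixed-point (Q-format) quantization sketch -- not the paper's pipeline.
import numpy as np

FRAC_BITS = 8                      # number of fractional bits (Qx.8)
SCALE = 1 << FRAC_BITS

def to_fixed(x, bits=16):
    """Quantize floats to signed fixed-point integers with FRAC_BITS of fraction."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(np.round(x * SCALE), lo, hi).astype(np.int32)

def fixed_mul(a, b):
    """Multiply two fixed-point arrays and rescale back to FRAC_BITS of fraction."""
    return (a.astype(np.int64) * b) >> FRAC_BITS

w = np.array([0.75, -0.125])
x = np.array([0.5, 2.0])
print(fixed_mul(to_fixed(w), to_fixed(x)) / SCALE)   # ~ [0.375, -0.25]
```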
Real-time handwritten digit recognition demo: https://t.co/QHElSr057j pic.twitter.com/MgnXutRQ9f
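A real-time digit demo like this typically sits on top of a small convolutional classifier trained on MNIST. A minimal Keras sketch of such a model, for orientation only (not the demo's actual network):

```python
# Minimal MNIST convnet in Keras -- illustrative, not the demo's actual network.
from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```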
Revisiting Character-Based Neural Machine Translation with Capacity and Compression, from @GoogleAI. “We show that deep models operating at the character level outperform identical models operating over word fragments.” https://t.co/RR7GOI2ku4
— hardmaru (@hardmaru) August 30, 2018
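The comparison is between feeding the model raw characters and feeding it word fragments (subword units). A quick sketch of what the two input sequences look like for the same sentence; the subword split shown is illustrative, not from the paper's vocabulary:

```python
# Character-level vs. word-fragment input for the same sentence.
# The subword split below is illustrative, not the paper's actual vocabulary.
sentence = "translation works"

char_tokens = list(sentence)                     # 17 tokens, one per character
subword_tokens = ["trans", "lation", "_works"]   # e.g. a BPE/wordpiece-style split

print(len(char_tokens), len(subword_tokens))     # character sequences are much longer
```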
SOLAR: Model-based deep RL with hierarchical Bayesian models learns to stack Lego blocks from 84x84 images with under an hour of real-world training: https://t.co/pnofrXk7iP
— Sergey Levine (@svlevine) August 30, 2018
with M. Zhang, S. Vikram, L. Smith, @pabbeel, @SingularMattrix https://t.co/pS2yqzby8n
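The sample efficiency (under an hour of real robot time) comes from the model-based recipe: alternate between collecting a little real experience, fitting a dynamics model, and improving the policy with that model. A generic sketch of the loop with placeholder functions; SOLAR's actual method additionally uses structured latent-variable models and LQR-style policy updates:

```python
# Generic model-based RL loop -- placeholder functions, not SOLAR's implementation.
for iteration in range(num_iterations):
    # 1. Collect a small batch of real-world trajectories with the current policy.
    trajectories = collect_rollouts(env, policy, num_episodes=5)
    replay_buffer.extend(trajectories)

    # 2. Fit (or update) a dynamics model on all data gathered so far.
    dynamics_model = fit_dynamics(replay_buffer)

    # 3. Improve the policy using the learned model instead of more real samples.
    policy = improve_policy(policy, dynamics_model, replay_buffer)
```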
"Wasserstein is all you need," Singh et al.: https://t.co/xpMs830QkV
— Miles Brundage (@Miles_Brundage) August 30, 2018
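For reference, the 1-Wasserstein (earth mover's) distance between two empirical 1-D samples can be computed directly with SciPy; a small example unrelated to the paper's embedding framework:

```python
# 1-D earth mover's (Wasserstein-1) distance between two empirical samples.
# Small illustrative example, unrelated to the paper's embedding framework.
import numpy as np
from scipy.stats import wasserstein_distance

a = np.random.normal(loc=0.0, scale=1.0, size=1000)
b = np.random.normal(loc=0.5, scale=1.0, size=1000)

print(wasserstein_distance(a, b))   # ~0.5: the cost of shifting one sample onto the other
```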
"Evaluating Theory of Mind in Question Answering," Nematzadeh et al.: https://t.co/vuA3bmZ7YM
— Miles Brundage (@Miles_Brundage) August 29, 2018