Nice collection of image-to-image papers, including various convnets/GANs etc. in supervised and unsupervised settings: https://t.co/OxfjChpKeW
— Sebastian Raschka (@rasbt) November 4, 2018
Machine learning and computational neuroscience share a common challenge - to analyse and understand neural network representations. We review opportunities for interdisciplinary exchange of ideas and analysis methods. @dgtbarrett @arimorcos @jakhmack https://t.co/08r5aWPMZq
— DeepMind (@DeepMindAI) November 2, 2018
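One analysis method commonly used in this literature for comparing neural network representations is canonical correlation analysis (CCA); the specifics below (QR-based computation, toy layer sizes) are an illustrative sketch, not taken from the review itself:

```python
import numpy as np

def cca_similarity(X, Y):
    # Canonical correlation analysis: compares two representations of the
    # same inputs (rows = examples, columns = units/features).
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormalize each view; the singular values of the product of the
    # orthonormal bases are the canonical correlations, each in [0, 1].
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))  # e.g. 10 inputs through a 3-unit layer
Y = rng.normal(size=(10, 4))  # same 10 inputs through a 4-unit layer
corrs = cca_similarity(X, Y)  # one correlation per canonical direction
```

The mean of `corrs` is often reported as a single similarity score between two layers or two networks.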
We have released @TensorFlow code+models for BERT, a brand new pre-training technique which is now state-of-the-art on a wide array of natural language tasks. It can also be used on many new tasks with minimal changes and quick training! https://t.co/rLR6U7uiPj
— Google AI (@GoogleAI) November 2, 2018
A Keras-based project for developing motion planning algorithms for robots (grasping, stacking, etc.): https://t.co/Am6WMMP2ua
— François Chollet (@fchollet) November 1, 2018
"You may not need attention" by Press and Smith with PyTorch code at https://t.co/sqBZ1CFq2u https://t.co/2oxk8OORPt
— PyTorch (@PyTorch) November 1, 2018
Thanks for the interest! I posted my slides on "Learning with Latent Linguistic Structure" for the Blackbox NLP workshop #EMNLP2018 here: https://t.co/fbAdkyO8jy
— Graham Neubig (@gneubig) November 1, 2018
Thanks to co-authors Junxian He, @pengchengyin, @violet_zct, and @BergKirkpatrick! https://t.co/XxapRdDAOK
An agent which learned to play Mario without rewards. Instead, it was incentivized to avoid "boredom" (that is, getting into states where it can predict what will happen next). Discovered warp levels, how to defeat bosses, etc. More details: https://t.co/lGw3rZUbv3 pic.twitter.com/6ObS35iZZS
— Greg Brockman (@gdb) October 31, 2018
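The "boredom" avoidance described above is a curiosity-style intrinsic reward: the agent is rewarded by the prediction error of a learned forward model, so predictable states pay nothing. A minimal numpy sketch (the linear model and all names are illustrative, not the actual Mario agent):

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 4, 2

# Toy forward model: predicts the next state from [state; action].
W = rng.normal(size=(STATE_DIM + ACTION_DIM, STATE_DIM)) * 0.1

def predict_next_state(state, action):
    return np.concatenate([state, action]) @ W

def intrinsic_reward(state, action, next_state):
    # Reward is the forward model's prediction error: the agent is drawn
    # to states it cannot yet predict and gets "bored" of the rest.
    error = predict_next_state(state, action) - next_state
    return float(np.mean(error ** 2))

s = rng.normal(size=STATE_DIM)
a = rng.normal(size=ACTION_DIM)
s_next = rng.normal(size=STATE_DIM)
r = intrinsic_reward(s, a, s_next)  # non-negative curiosity bonus
```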
Random Network Distillation: A prediction-based method that achieves state-of-the-art performance on Montezuma’s Revenge - https://t.co/wR64G37cJN pic.twitter.com/rdMdE5tPGf
— OpenAI (@OpenAI) October 31, 2018
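The core idea of Random Network Distillation: train a predictor network to match a fixed, randomly initialized target network, and use the predictor's error as an exploration bonus; the error stays high on novel states and shrinks on familiar ones. A toy linear sketch (dimensions, learning rate, and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, EMB_DIM = 8, 16

# Fixed, randomly initialized target network (never trained).
W_target = rng.normal(size=(OBS_DIM, EMB_DIM))
# Predictor network, trained to mimic the target on visited states.
W_pred = np.zeros((OBS_DIM, EMB_DIM))

def rnd_bonus(obs):
    # Intrinsic reward: predictor's squared error against the fixed target.
    err = obs @ W_pred - obs @ W_target
    return float(np.mean(err ** 2))

def train_predictor(obs, lr=0.01):
    # One SGD step on the squared error (global only for this toy sketch).
    global W_pred
    err = obs @ W_pred - obs @ W_target
    W_pred -= lr * np.outer(obs, err)

obs = rng.normal(size=OBS_DIM)
before = rnd_bonus(obs)   # novel state: large bonus
for _ in range(200):
    train_predictor(obs)
after = rnd_bonus(obs)    # familiar state: bonus has shrunk
```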
Code and pretrained weights for BERT are out now.
— Sebastian Ruder (@seb_ruder) October 31, 2018
Includes scripts to reproduce results. BERT-Base can be fine-tuned on a standard GPU; for BERT-Large, a Cloud TPU is required (as max batch size for 12-16 GB is too small). https://t.co/CWv8GMZiX5
"The Nuts and Bolts of Deep RL Research," John Schulman: https://t.co/PE4K3p3t9Z
— Miles Brundage (@Miles_Brundage) October 31, 2018
Lots of very useful tips here!
We released code for our new #NIPS2018 paper on Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs! https://t.co/OJ0n9QolkI
— Andrew Gordon Wilson (@andrewgwils) October 31, 2018
We show the local optima are connected by simple high accuracy curves, like wormholes. With @tim_garipov, @Pavel_Izmailov, Podoprikhin, Vetrov.
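A curve between two trained solutions can be parameterized, for example, as a quadratic Bezier curve whose control point is trained so that loss stays low along the whole path; the toy 2-D weights below are purely illustrative:

```python
import numpy as np

def bezier_point(w1, w_mid, w2, t):
    # Quadratic Bezier curve joining two trained solutions w1 and w2,
    # with control point w_mid (in practice trained so that every point
    # on the curve keeps high accuracy -- the "wormhole").
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * w_mid + t ** 2 * w2

w1 = np.array([0.0, 0.0])     # first trained solution (toy)
w2 = np.array([2.0, 0.0])     # second trained solution (toy)
w_mid = np.array([1.0, 1.0])  # illustrative control point

# The curve passes exactly through both endpoints.
p0 = bezier_point(w1, w_mid, w2, 0.0)
p1 = bezier_point(w1, w_mid, w2, 1.0)
```

Sampling networks along such a curve is also what enables the fast ensembling mentioned in the paper title.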
A Keras implementation of BERT -- a new transformer architecture with strong performance across a range of language tasks. https://t.co/OznxM3h51Y
— François Chollet (@fchollet) October 30, 2018
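At the heart of the transformer architecture BERT builds on is scaled dot-product attention, softmax(QKᵀ/√d_k)V; a minimal numpy sketch (shapes are illustrative, and a real implementation adds masking and multiple heads):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Core transformer operation: each query attends over all keys,
    # producing a weighted average of the corresponding values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 queries of dimension 4
K = rng.normal(size=(5, 4))  # 5 keys of dimension 4
V = rng.normal(size=(5, 4))  # 5 values of dimension 4
out = scaled_dot_product_attention(Q, K, V)  # shape (3, 4)
```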