Key Papers in Deep Reinforcement Learning curated by @OpenAI https://t.co/34WUV3DiKN
— hardmaru (@hardmaru) November 9, 2018
Everything you wanted to know about the effects of batch size on neural net training behavior but were afraid to ask!
— Jeff Dean (@JeffDean) November 9, 2018
Measuring the Effects of Data Parallelism on Neural Network Training
Chris Shallue, Jaehoon Lee, Joe Antognini, @jaschasd, Roy Frostig, George Dahl @GoogleAI https://t.co/6TUbthAfai
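One intuition behind batch-size studies like this one: a minibatch gradient is the mean of per-example gradients, so its noise shrinks roughly as 1/sqrt(batch size), which is part of why larger batches buy fewer optimization steps only up to a point. A minimal NumPy illustration of that scaling (toy numbers, not from the paper):

```python
import numpy as np

# Toy model: per-example gradients are noisy i.i.d. estimates of the true
# gradient (here mean 1.0, std 5.0). Values are illustrative only.
rng = np.random.default_rng(0)
per_example_grads = rng.normal(loc=1.0, scale=5.0, size=100_000)

noise = {}
for batch_size in (1, 16, 256):
    usable = (len(per_example_grads) // batch_size) * batch_size
    # Averaging over a batch reduces the std of the estimate by ~sqrt(batch size).
    batch_grads = per_example_grads[:usable].reshape(-1, batch_size).mean(axis=1)
    noise[batch_size] = batch_grads.std()
    print(f"batch size {batch_size:4d}: gradient std ~ {noise[batch_size]:.3f}")
```

With std 5.0 per example, the batch-gradient std comes out near 5.0, 1.25, and 0.31 for batch sizes 1, 16, and 256, matching the 1/sqrt(B) scaling.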
"FloWaveNet : A Generative Flow for Raw Audio" from Seoul National University
— PyTorch (@PyTorch) November 9, 2018
Their research was scooped by a few days, yet they still took the time to write up and release a paper and code. Great spirit, folks!
Paper: https://t.co/G7GIcld23q
Code: https://t.co/oQEUV5WqAx
Combining insights from Glow and WaveNet, WaveGlow is a flow-based network capable of generating fast, high-quality speech from mel-spectrograms without the need for auto-regression. Both code and paper are now published. Great work by @ctnzr's team at @NVIDIA! https://t.co/m8gB0yyK7C
— Chip Huyen (@chipro) November 8, 2018
If you do medical image analysis, use 3D ConvNets.
— Yann LeCun (@ylecun) November 8, 2018
Awesome work on knee MRI by NYU colleagues.
Congrats Kyunghyun Cho and team! https://t.co/cAyzBvVxsm
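For context on the "use 3D ConvNets" advice: MRI scans are volumes, so 3D convolutions slide a kernel over depth as well as height and width. A minimal PyTorch sketch of a tiny volumetric classifier (this is an illustration with made-up layer sizes, not the NYU knee-MRI model):

```python
import torch
import torch.nn as nn

class Tiny3DConvNet(nn.Module):
    """Toy volumetric classifier: Conv3d layers pool a (D, H, W) scan to logits."""

    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                 # halves depth, height, and width
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),         # global average over the volume
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x)                 # (B, 16, 1, 1, 1)
        return self.classifier(x.flatten(1))

model = Tiny3DConvNet()
volume = torch.randn(1, 1, 16, 64, 64)       # (batch, channel, depth, H, W)
logits = model(volume)                       # shape (1, 2)
```

The only real difference from a 2D ConvNet is the extra depth dimension in every conv, pool, and input tensor.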
The paper has been a long time coming. To Edward users: apologies for the delays. Hopefully the implementation is as polished as can be.
— Dustin Tran (@dustinvtran) November 7, 2018
New post on my #EMNLP2018 highlights: inductive bias, cross-lingual learning, and more. https://t.co/IrqcZaQ8rM
— Sebastian Ruder (@seb_ruder) November 7, 2018
Thank you, MILA, for articulating why most NLP researchers have been so frustrated by the wave of GANs-for-text papers: https://t.co/fd0EMGQjMZ
— Sam Bowman (@sleepinyourhat) November 7, 2018
Bayesian Action Decoder (https://t.co/tpnSgBPPA3): A new multi-agent RL method for learning to communicate via informative actions using ToM-like reasoning. Achieves the best known score for 2 players on the challenging #hanabigame
— DeepMind (@DeepMindAI) November 6, 2018
Efficient Metropolitan Traffic Prediction Based on Graph Recurrent Neural Network https://t.co/42znggKYTu
— Thomas Lahore (@evolvingstuff) November 6, 2018
Here is an op-for-op @PyTorch re-implementation of @GoogleAI's BERT model by @sanhestpasmoi, @timrault, and me.
— Thomas Wolf (@Thom_Wolf) November 5, 2018
We made a script to load Google's pre-trained models and it performs about the same as the TF implementation in our tests (see the readme).
Enjoy! https://t.co/dChmNPGPKO
The multilingual (many languages, one encoder) version of @GoogleAI's BERT appears to be online! Happy to see results on our new XNLI cross-lingual transfer dataset, too! https://t.co/2YL9hSUb5j
— Sam Bowman (@sleepinyourhat) November 5, 2018