Code to reproduce “Learning to Paint with Model-based Deep Reinforcement Learning” https://t.co/BLljpZyv0V pic.twitter.com/MCOh3nbsl1
— hardmaru (@hardmaru) April 25, 2019
PyTorch implementation of Octave convolution https://t.co/6Hygcy8A0W #pytorch #deeplearning #neuralnetwork
— PyTorch Best Practices (@PyTorchPractice) April 24, 2019
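The core idea of Octave convolution is to split feature maps into a high-frequency part at full resolution and a low-frequency part at half resolution, with four mixing paths between them. A minimal NumPy sketch of one such step, restricted to 1x1 kernels for brevity (all function names here are illustrative, not from the linked repo):

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling over spatial dims: (C, H, W) -> (C, H//2, W//2)
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    # nearest-neighbour 2x upsampling: (C, H, W) -> (C, 2H, 2W)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv1x1(x, w):
    # a 1x1 convolution is just a channel-mixing matmul; w: (C_out, C_in)
    return np.einsum('oc,chw->ohw', w, x)

def oct_conv1x1(x_h, x_l, w_hh, w_hl, w_lh, w_ll):
    """One octave-convolution step with 1x1 kernels.
    x_h: high-frequency features at full resolution (C_h, H, W)
    x_l: low-frequency features at half resolution (C_l, H/2, W/2)
    """
    # high output: high->high path plus upsampled low->high path
    y_h = conv1x1(x_h, w_hh) + upsample2(conv1x1(x_l, w_lh))
    # low output: low->low path plus pooled high->low path
    y_l = conv1x1(x_l, w_ll) + conv1x1(avg_pool2(x_h), w_hl)
    return y_h, y_l

# toy shapes: 8 high-frequency channels at 16x16, 8 low-frequency at 8x8
rng = np.random.default_rng(0)
x_h = rng.standard_normal((8, 16, 16))
x_l = rng.standard_normal((8, 8, 8))
w = lambda o, c: rng.standard_normal((o, c)) * 0.1
y_h, y_l = oct_conv1x1(x_h, x_l, w(8, 8), w(8, 8), w(8, 8), w(8, 8))
print(y_h.shape, y_l.shape)  # (8, 16, 16) (8, 8, 8)
```

Because the low-frequency branch lives at half resolution, its convolutions cost a quarter of the FLOPs of the full-resolution equivalent, which is where the claimed savings come from.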
It's good to see that you don't always need to use ImageNet to do interesting research, when you can use MNIST, Fashion MNIST, K-MNIST, etc., in more creative ways. The GitHub repo to reproduce their experiments: https://t.co/3A8I60gnn6
— hardmaru (@hardmaru) April 24, 2019
Releasing the Sparse Transformer, a network which sets records at predicting what comes next in a sequence — whether text, images, or sound. Improvements to neural 'attention' let it extract patterns from sequences 30x longer than possible previously: https://t.co/FZlDEPsi1A pic.twitter.com/1cn1PO2nJX
— OpenAI (@OpenAI) April 23, 2019
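The "improvements to neural 'attention'" in the Sparse Transformer amount to replacing the dense causal attention mask with sparse patterns: each position attends to a local window plus a strided set of "summary" positions. A rough sketch of such a mask (the exact factorized patterns in the paper differ in detail):

```python
import numpy as np

def strided_sparse_mask(n, stride):
    """Causal mask combining two sparse attention patterns: each query
    attends to the previous `stride` positions (local window) plus every
    position whose index is congruent to stride-1 (strided summaries)."""
    i = np.arange(n)[:, None]  # query positions
    j = np.arange(n)[None, :]  # key positions
    causal = j <= i
    local = (i - j) < stride
    summary = (j % stride) == (stride - 1)
    return causal & (local | summary)

n, stride = 64, 8
mask = strided_sparse_mask(n, stride)
dense = np.tril(np.ones((n, n), dtype=bool))
# each row allows at most stride + n/stride connections instead of up to n,
# so cost per position drops from O(n) to O(sqrt(n)) when stride ~ sqrt(n)
print(int(mask.sum()), int(dense.sum()))
```

With stride near sqrt(n), per-layer attention cost falls from O(n^2) to O(n * sqrt(n)), which is what makes sequences "30x longer" tractable.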
“Experimental results on the PTB, WikiText-2, and WikiText-103 show that our method achieves perplexities between 20 and 34 on all problems, i.e. on average an improvement of 12 perplexity units compared to state-of-the-art LSTMs.” 🔥 https://t.co/VB7C0KdRp8
— hardmaru (@hardmaru) April 23, 2019
Transformer/LSTM hybrids!!
— Thomas Lahore (@evolvingstuff) April 23, 2019
Language Models with Transformers
"we explore effective Transformer architectures for language model, including adding additional LSTM layers to better capture the sequential context while still keeping computation efficient" https://t.co/KVWjpsACwO pic.twitter.com/4B96N4Sa57
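For reference, the perplexity figures quoted above are just the exponential of the average per-token negative log-likelihood, so "20 to 34" means the model is, on average, about as uncertain as a uniform choice over 20 to 34 tokens. A quick sketch (function name is illustrative):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(mean negative log-likelihood per token).
    token_log_probs: natural-log probabilities the model assigned
    to each ground-truth token in the evaluation set."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# a model assigning probability 1/20 to every token has perplexity ~20
print(perplexity([math.log(1 / 20)] * 100))  # ≈ 20.0
```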
Exciting new work by Park, Chan, et al. to improve ASR models with data augmentation.
"Instead of augmenting the input audio waveform as is traditionally done, SpecAugment applies an augmentation policy directly to the audio spectrogram." https://t.co/Pluccv89RR
— Jeff Dean (@JeffDean) April 22, 2019
SpecAugment is a fascinating new data augmentation approach from @GoogleAI for ASR. I tried messing around with spectrograms for data augmentation before, but kudos to the Google folks for actually making it work! https://t.co/OhPTcenpIK pic.twitter.com/DRn2tcYTSx
— Delip Rao (@deliprao) April 22, 2019
Automatic Speech Recognition (ASR) struggles in the absence of an extensive volume of training data. We present SpecAugment, a new approach to augmenting audio data that treats it as a visual problem rather than an audio one. Learn more at → https://t.co/zeFIPFQsp1 pic.twitter.com/8t79Uc7uMU
— Google AI (@GoogleAI) April 22, 2019
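Treating the spectrogram "as a visual problem" boils down to masking out random frequency bands and time spans, much like cutout in image augmentation. A minimal NumPy sketch of those two transforms (the paper also uses time warping, omitted here; parameter names are illustrative):

```python
import numpy as np

def spec_augment(spec, num_freq_masks=2, freq_mask_width=10,
                 num_time_masks=2, time_mask_width=20, rng=None):
    """Sketch of SpecAugment's frequency and time masking: zero out
    random frequency bands and time-step spans of a
    (freq_bins, time_steps) log-mel spectrogram."""
    rng = rng or np.random.default_rng()
    out = spec.copy()
    n_freq, n_time = out.shape
    for _ in range(num_freq_masks):
        w = int(rng.integers(0, freq_mask_width + 1))   # mask width
        f0 = int(rng.integers(0, n_freq - w + 1))       # mask start bin
        out[f0:f0 + w, :] = 0.0
    for _ in range(num_time_masks):
        w = int(rng.integers(0, time_mask_width + 1))
        t0 = int(rng.integers(0, n_time - w + 1))
        out[:, t0:t0 + w] = 0.0
    return out

# fake 80-bin log-mel spectrogram, 400 frames
spec = np.random.default_rng(0).standard_normal((80, 400))
aug = spec_augment(spec, rng=np.random.default_rng(1))
print(spec.shape, aug.shape)  # shape is preserved; random bands are zeroed
```

Because the masks are applied to the spectrogram rather than the waveform, the augmentation is cheap and can run online during training, which is part of what made it practical.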
Fun fact: If a model has learnt the correct causal structure, it requires fewer iterations to generalize to new domains. Turns out this simple idea can lead to a new class of causal discovery algorithms! Amazing work by Yoshua Bengio, @TristanDeleu et al https://t.co/qv7E9HemzP
— Amit Sharma (@amt_shrma) April 22, 2019
"Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition" has been accepted to ICML 2019! Congratulations to @YaoQinUCSD, Nicholas, Gary, and @colinraffel ! https://t.co/lAwYLEkAx3
— Ian Goodfellow (@goodfellow_ian) April 22, 2019
TensorFuzz has been accepted to ICML 2019! Congratulations to @gstsdn ! https://t.co/2sBR7s64X6
— Ian Goodfellow (@goodfellow_ian) April 22, 2019