“Experimental results on the PTB, WikiText-2, and WikiText-103 show that our method achieves perplexities between 20 and 34 on all problems, i.e. on average an improvement of 12 perplexity units compared to state-of-the-art LSTMs.” 🔥 https://t.co/VB7C0KdRp8
— hardmaru (@hardmaru) April 23, 2019