If you like char-rnn, try char-transformer: https://t.co/meYyzEWz93
— hardmaru (@hardmaru) September 15, 2018
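The linked repo isn't quoted in the thread, so as a rough illustration of what a char-level transformer language model looks like (the "char-transformer" counterpart to char-rnn), here is a minimal PyTorch sketch. All names and hyperparameters are invented for illustration, not taken from the linked code:

```python
# Minimal char-level transformer LM (illustrative sketch only;
# hyperparameters and names are invented, not from the linked repo).
import math
import torch
import torch.nn as nn

class CharTransformerLM(nn.Module):
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2, max_len=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        # x: (batch, seq) of char ids; a causal mask keeps it autoregressive,
        # so each position only attends to earlier characters.
        seq = x.size(1)
        mask = torch.triu(
            torch.full((seq, seq), float('-inf'), device=x.device), diagonal=1)
        h = self.embed(x) * math.sqrt(self.embed.embedding_dim) + self.pos[:, :seq]
        h = self.encoder(h, mask=mask)
        return self.head(h)  # next-char logits at every position

# Training step: predict each character from the ones before it.
model = CharTransformerLM(vocab_size=128)
ids = torch.randint(0, 128, (8, 64))  # stand-in for encoded text
logits = model(ids[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 128), ids[:, 1:].reshape(-1))
```

Training and sampling then work exactly as with char-rnn; only the sequence model underneath changes.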
Lazy man's version: https://t.co/eZdohNWP1l
— Andres Torrubia (@antor) September 15, 2018
@sleepinyourhat's VAE is actually really strong https://t.co/EPJ0CqH5fa
— Marc Rosenberg (@Make3333) September 15, 2018
Thanks, but I wouldn't agree: VAEs are a very interesting model to work with, and they show up as a component of cool larger models, but our paper is a pretty robust/reproducible _negative_ result. Plain VAEs are no better than plain LSTM language models by any standard measure.
— Sam Bowman (@sleepinyourhat) September 15, 2018
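For context, the "plain VAE" in this comparison is a sentence VAE in the style of Bowman et al. (2016): an LSTM encoder compresses a sentence into a latent vector, and an LSTM decoder reconstructs it, trained on reconstruction loss plus a KL penalty. A minimal sketch of that setup, with sizes and names of my own choosing:

```python
# Minimal sentence-VAE sketch in PyTorch (illustrative; sizes/names invented).
import torch
import torch.nn as nn

class SentenceVAE(nn.Module):
    def __init__(self, vocab_size, emb=128, hidden=256, latent=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.enc = nn.LSTM(emb, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.z_to_h = nn.Linear(latent, hidden)
        self.dec = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        # Encode the whole sentence into q(z|x) = N(mu, sigma^2).
        _, (h, _) = self.enc(self.embed(x))
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization trick: sample z differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Decode conditioned on z (z initializes the decoder state).
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        dec_out, _ = self.dec(self.embed(x[:, :-1]), (h0, torch.zeros_like(h0)))
        logits = self.out(dec_out)
        # ELBO loss = reconstruction + KL. In practice the KL term tends to
        # collapse toward zero, so the decoder ignores z and the model reduces
        # to an LSTM LM, which is the failure mode behind the negative result.
        rec = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), x[:, 1:].reshape(-1))
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl
```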
For unconditional generation (producing good samples/likelihoods), these two papers are worth knowing about:
– https://t.co/skhHFkfcQ4
– https://t.co/98VW8asmTp
— Sam Bowman (@sleepinyourhat) September 15, 2018