by hardmaru on 2018-09-15 (UTC).

If you like char-rnn, try char-transformer: https://t.co/meYyzEWz93

— hardmaru (@hardmaru) September 15, 2018
research nlp
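char-rnn and char-transformer are both character-level language models: they predict each next character from the characters before it. As a minimal illustration of that idea (a toy bigram counter in plain Python, not the code in either linked repo):

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count character-bigram transitions in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def sample(counts, start, length, seed=0):
    """Generate up to `length` characters, drawing each next
    character in proportion to its bigram count after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:  # no observed continuation for this character
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = train_bigram("hello world, hello there")
print(sample(model, "h", 10))
```

char-rnn and char-transformer replace the bigram table with an LSTM or a Transformer conditioned on the whole preceding context, but the training signal (next-character prediction) is the same.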
by antor on 2018-09-15 (UTC).

Lazy man's version: https://t.co/eZdohNWP1l

— Andres Torrubia (@antor) September 15, 2018
nlp w_code tool
by Make3333 on 2018-09-15 (UTC).

@sleepinyourhat 's VAE is actually really strong https://t.co/EPJ0CqH5fa

— Marc Rosenberg (@Make3333) September 15, 2018
nlp research
by sleepinyourhat on 2018-09-15 (UTC).

Thanks, but I wouldn't agree—VAEs are a very interesting model to work with, and they show up as a piece of cool larger models, but our paper is a pretty robust/reproducible _negative_ result. Plain VAEs are no better than plain LSTM language models by any standard measure.

— Sam Bowman (@sleepinyourhat) September 15, 2018
research nlp
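The negative result Bowman describes can be read against the standard VAE objective (a sketch of the usual formulation, not the paper's exact notation): the model maximizes the evidence lower bound

```latex
\log p_\theta(x) \;\ge\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{reconstruction}}
\;-\;
\underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{regularizer}}
```

With a strong autoregressive decoder over text, the KL term tends to collapse toward zero, so the latent z carries little information and the likelihood reduces to that of the plain decoder, i.e. an ordinary LSTM language model.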
by sleepinyourhat on 2018-09-15 (UTC).

For unconditional generation (producing good samples/likelihoods), these two papers are worth knowing about:
– https://t.co/skhHFkfcQ4
– https://t.co/98VW8asmTp

— Sam Bowman (@sleepinyourhat) September 15, 2018
research nlp
