Lit2Vec: representing books as vectors using word2vec. Built on goodbooks-10k dataset. Lots of TSNE book maps. https://t.co/gIim1bJWR5
— fastml extra (@fastml_extra) September 23, 2018
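The core trick is item2vec-style: treat each user's shelf of rated books as a "sentence" of book IDs, train word2vec on those sentences, then project the vectors with t-SNE for the book maps. A minimal sketch of that pipeline, assuming the goodbooks-10k ratings.csv layout and gensim 4.x (the actual Lit2Vec notebook may organise things differently):

```python
# Sketch of the Lit2Vec idea: word2vec over per-user lists of book IDs,
# then t-SNE for a 2-D book map. Assumes goodbooks-10k ratings.csv with
# user_id / book_id columns; not the author's exact code.
import pandas as pd
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

ratings = pd.read_csv("ratings.csv")

# One "sentence" per user: the books they rated, as string tokens.
sentences = (
    ratings.groupby("user_id")["book_id"]
    .apply(lambda ids: [str(i) for i in ids])
    .tolist()
)

# Skip-gram word2vec over book-ID sentences (item2vec-style).
model = Word2Vec(sentences, vector_size=100, window=10,
                 min_count=5, sg=1, workers=4)

# Books that are "read together" end up with similar vectors.
print(model.wv.most_similar("1", topn=5))  # neighbours of one book id

# 2-D projection used for the book maps.
coords = TSNE(n_components=2, perplexity=30).fit_transform(model.wv.vectors)
```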
Totally missed posting the link: https://t.co/YjXZewMkrJ
— Sebastian Ruder (@seb_ruder) September 19, 2018
👍 code-through…
"Explaining Black-Box ML Models - Pt 2: Text classification with LIME" 👩‍💻 @ShirinGlander https://t.co/fzgyqRTkEV #rstats #keras pic.twitter.com/TW4e96Tjeq
— Mara Averick (@dataandme) September 18, 2018
New NLP Newsletter: Deep Learning Indaba 2018 edition 🌍 Impressions and highlights from the #DLIndaba2018 https://t.co/frmdNS0p0X (via @revue)
— Sebastian Ruder (@seb_ruder) September 17, 2018
For unconditional generation (producing good samples/likelihoods), these two papers are worth knowing about:
– https://t.co/skhHFkfcQ4
– https://t.co/98VW8asmTp
— Sam Bowman (@sleepinyourhat) September 15, 2018
Thanks, but I wouldn't agree—VAEs are a very interesting model to work with, and they show up as a piece of cool larger models, but our paper is a pretty robust/reproducible _negative_ result. Plain VAEs are no better than plain LSTM language models by any standard measure.
— Sam Bowman (@sleepinyourhat) September 15, 2018
@sleepinyourhat 's VAE is actually really strong https://t.co/EPJ0CqH5fa
— Marc Rosenberg (@Make3333) September 15, 2018
Lazy man's version: https://t.co/eZdohNWP1l
— Andres Torrubia (@antor) September 15, 2018
If you like char-rnn, try char-transformer: https://t.co/meYyzEWz93
— hardmaru (@hardmaru) September 15, 2018
That's in contrast to this neural net (not mine) which trained for over a month on 82 million Amazon reviews. https://t.co/siYiWzzx3p
— Janelle Shane (@JanelleCShane) September 15, 2018
What the creativity level means: This is the "temperature" setting. At the lowest temperature the neural net always writes the most likely next word/letter, and everything becomes "the the the". At the highest temperature, it chooses less probable words/letters for more weirdness
— Janelle Shane (@JanelleCShane) September 15, 2018
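In code, that "creativity" knob is just a divisor applied to the model's raw scores before sampling. A minimal sketch with made-up logits, not tied to either of the networks mentioned below:

```python
import numpy as np

def sample_next(logits, temperature=1.0, rng=None):
    """Pick the next word/letter index from raw model scores (logits).

    Low temperature -> almost always the most likely token ("the the the").
    High temperature -> flatter distribution, so less probable tokens
    appear more often and the output gets weirder.
    """
    rng = rng or np.random.default_rng()
    if temperature <= 0:                     # treat 0 as plain argmax
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                   # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Made-up scores over a three-token vocabulary, just to show the effect.
logits = [2.0, 1.0, 0.2]
print([sample_next(logits, temperature=0.1) for _ in range(8)])  # mostly token 0
print([sample_next(logits, temperature=2.0) for _ in range(8)])  # more variety
```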
Neural nets used:
Writes text letter-by-letter: https://t.co/p1SEkv9nGb
Writes text word-by-word: https://t.co/3OICLYIcFa
The outputs with nonsense words were mostly from the letter-by-letter neural net.
— Janelle Shane (@JanelleCShane) September 15, 2018
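The nonsense words follow directly from the vocabularies: a letter-by-letter model can assemble any string one character at a time, while a word-by-word model can only emit words it has already seen. A toy illustration (not the linked networks):

```python
# Contrast the two vocabularies a character-level and a word-level
# generator sample from.
text = "the cat sat on the mat"

char_vocab = sorted(set(text))           # individual characters (incl. space)
word_vocab = sorted(set(text.split()))   # whole words only

print(char_vocab)  # can be strung into invented words like "thon" or "catmat"
print(word_vocab)  # ['cat', 'mat', 'on', 'sat', 'the'] -> no new spellings possible
```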