by JanelleCShane on 2018-09-15 (UTC).

I fed a neural net 270,000 words of Trump campaign speeches for the @NewYorker and nobody can ever make me do it again. https://t.co/uY5OQuSSK7

— Janelle Shane (@JanelleCShane) September 15, 2018
nlp
by JanelleCShane on 2018-09-15 (UTC).

Methods: Individual sentences (and sometimes groups of sentences) are exactly as the neural net output them. I did select the most interesting sentences, though, and arrange them in an order that (if not exactly making sense) at least flowed better.

— Janelle Shane (@JanelleCShane) September 15, 2018
nlp
by JanelleCShane on 2018-09-15 (UTC).

Neural nets used:
Writes text letter-by-letter: https://t.co/p1SEkv9nGb
Writes text word-by-word: https://t.co/3OICLYIcFa
The outputs with nonsense words were mostly from the letter-by-letter neural net.

— Janelle Shane (@JanelleCShane) September 15, 2018
nlp w_code
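
The tweet above contrasts letter-by-letter and word-by-word models; since both tools sit behind shortened links, here is only a rough, hypothetical sketch (the tokenizer function names are mine, not from either tool) of the one structural difference between them: the unit the model predicts at each step.

```python
# Minimal sketch of the letter-vs-word distinction; not the author's
# actual models, just the tokenization step that separates them.

def tokenize_chars(text):
    # Letter-by-letter: the vocabulary is single characters, so the model
    # can assemble never-before-seen "words", which is where the nonsense
    # words come from.
    return list(text)

def tokenize_words(text):
    # Word-by-word: the vocabulary is whole words from the training text,
    # so every output word is real even when the sentence is not.
    return text.split()

sample = "Make America great again"
print(tokenize_chars(sample)[:8])  # ['M', 'a', 'k', 'e', ' ', 'A', 'm', 'e']
print(tokenize_words(sample))      # ['Make', 'America', 'great', 'again']
```
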
by JanelleCShane on 2018-09-15 (UTC).

What the creativity level means: This is the "temperature" setting. At the lowest temperature the neural net always writes the most likely next word/letter, and everything becomes "the the the". At the highest temperature, it chooses less probable words/letters for more weirdness.

— Janelle Shane (@JanelleCShane) September 15, 2018
nlp
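
For readers who want to see what the temperature knob does mechanically, here is a minimal sketch (the helper `sample_with_temperature` is hypothetical, not part of either linked tool): dividing the model's raw scores by the temperature before the softmax sharpens the distribution at low values and flattens it at high ones.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Pick the next word/letter index from a model's raw scores."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Toy scores for three candidate tokens: ["the", "America", "winning"].
logits = [2.0, 1.0, 0.1]
for t in (0.1, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=3) / 1000)
```

With these toy scores, temperature 0.1 picks the top-scoring token nearly every time (the "the the the" regime), while temperature 2.0 spreads the picks across all three tokens, which is where the weirdness comes from.
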
by JanelleCShane on 2018-09-15 (UTC).

That's in contrast to this neural net (not mine), which trained for over a month on 82 million Amazon reviews. https://t.co/siYiWzzx3p

— Janelle Shane (@JanelleCShane) September 15, 2018
nlp
