by deliprao on 2019-02-28 (UTC).

So many alternatives emerging to the withheld WebText corpus referenced in the GPT-2 paper. Start with this one and build datasets bigger and bolder than WebText. Just a few more examples ... https://t.co/SVO35pY1iR

— Delip Rao (@deliprao) February 28, 2019
dataset, nlp
by etzioni on 2019-03-05 (UTC).

NEW: Why did your model come with a "no-fly zone" warning?
Interactively explore @openai's GPT-2 model to find this and other gems at: https://t.co/DaMoEHMtYK

— Oren Etzioni (@etzioni) March 5, 2019
nlp, tool
by togelius on 2019-03-10 (UTC).

This is actually an argument _for_ releasing language models like the GPT-2. The sooner everyone understands that we can now generate surface-level correct and coherent text, but not process the text semantically, the sooner we'll stop stupid ideas like autograding essays. https://t.co/gPlGRQnJuW

— Julian Togelius (@togelius) March 10, 2019
nlp
by Miles_Brundage on 2019-03-16 (UTC).

Really impressed with these early fine-tuning results with GPT-2-117M, and great to see that there's also a Colab from @roadrunning01! Giving it a try this weekend :) https://t.co/sVJeZOI8es

— Miles Brundage (@Miles_Brundage) March 16, 2019
nlp, application
by Miles_Brundage on 2019-03-16 (UTC).

Some GPT-2-117M outputs after fine-tuning on memorable movie quotes from (https://t.co/vZyr8mjNki), using @roadrunning01's Colab. The format of the training data was: [movie title] [line break] [quote] [sometimes another quote], sometimes with "<p>"/"</p>" thrown in the mix. pic.twitter.com/CjnoAr8L7t

— Miles Brundage (@Miles_Brundage) March 16, 2019
nlp, application, w_code, dataset
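
The training-data layout described above is easy to reproduce for your own fine-tuning runs. A minimal sketch in Python; the titles, quotes, and file name are illustrative, not taken from Brundage's dataset:

    # Write quotes in the format the tweet describes:
    # [movie title], a line break, then one or more quotes,
    # optionally wrapped in "<p>"/"</p>" markers.
    quotes = [
        ("Casablanca", ["Here's looking at you, kid."]),
        ("The Godfather", ["I'm gonna make him an offer he can't refuse."]),
    ]

    with open("movie_quotes.txt", "w", encoding="utf-8") as f:
        for title, lines in quotes:
            f.write(title + "\n")
            for quote in lines:
                f.write(quote + "\n")
            f.write("\n")  # blank line between training examples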
by Miles_Brundage on 2019-03-17 (UTC).

Next and possibly final GPT-2-117M fine-tuning run of the weekend - quotations from this dataset: https://t.co/yOgEkaSsYe

— Miles Brundage (@Miles_Brundage) March 17, 2019
nlp, application, dataset
by Miles_Brundage on 2019-03-17 (UTC).

Reminder that it's super easy to use this Colab by following the instructions at the top, running the cells in order, and changing the dataset link and filename to a different .txt file :) https://t.co/mkyFTJ5V8Z

— Miles Brundage (@Miles_Brundage) March 17, 2019
w_code, nlp, application
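
The workflow the Colab walks through (download the 117M weights, fine-tune on a .txt file, sample) can be sketched with the gpt-2-simple package; this is an assumed stand-in for illustration, not necessarily the library the linked Colab uses:

    import gpt_2_simple as gpt2

    file_name = "movie_quotes.txt"  # swap in any plain-text dataset

    gpt2.download_gpt2(model_name="117M")  # fetch the released 117M weights

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess,
                  dataset=file_name,
                  model_name="117M",
                  steps=1000)  # a short fine-tuning run

    gpt2.generate(sess, prefix="Casablanca")  # sample from the tuned model

As the tweet says, switching datasets is just a matter of pointing file_name at a different .txt file and rerunning the cells.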
by ilyasut on 2019-03-20 (UTC).

https://t.co/9IHozCXzzc --- r/ML gets the small GPT2 to write imaginary news. I didn't expect the small model to be that coherent.

— Ilya Sutskever (@ilyasut) March 20, 2019
nlp, application
by Miles_Brundage on 2019-04-08 (UTC).

Great talk by @ilyasut on GPT-2: https://t.co/IfouULNF9e

— Miles Brundage (@Miles_Brundage) April 8, 2019
video, learning, nlp
by deliprao on 2019-05-04 (UTC).

I like this GPT-2 post update: Data release for detection research, Bigger (117M -> 345M) model release for your creative works. Nice follow-up from @OpenAI! https://t.co/kGKmk9XymY

— Delip Rao (@deliprao) May 4, 2019
nlp, dataset
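
Sampling from the newly released 345M checkpoint can be sketched with the Hugging Face transformers library, which names the 345M model "gpt2-medium"; the library choice and the prompt are assumptions for illustration, not prescribed by the tweet:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
    model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

    inputs = tokenizer("The meaning of life is", return_tensors="pt")
    outputs = model.generate(**inputs,
                             max_length=50,
                             do_sample=True,  # sample instead of greedy decoding
                             top_k=40)        # top-k truncation, as in the GPT-2 paper
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))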
by ilyasut on 2019-05-08 (UTC).

Things transformers say: https://t.co/L3hgSz5Wfe

— Ilya Sutskever (@ilyasut) May 8, 2019
nlp
by slashML on 2019-06-08 (UTC).

A student with access to TPU credits reproduced GPT2-1.5B and plans to release the model https://t.co/WJUEJSEKE8

— /MachineLearning (@slashML) June 8, 2019
nlp
