by AlecRad on 2018-06-11 (UTC).

What I've been working on for the past year! https://t.co/CAQMYS1rR7

Inspired by CoVe, ELMo, and ULMFiT, we show that a single transformer language model can be finetuned to a wide variety of NLP tasks and performs very well with little tuning/tweaking.

— Alec Radford (@AlecRad) June 11, 2018
nlp
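
The recipe described in the tweet (pretrain a transformer as a language model, then fine-tune it on downstream tasks with a small task-specific head) can be sketched roughly as follows in PyTorch. This is a minimal illustration, not OpenAI's code: the tiny model, the dummy data, and the hyperparameters are placeholders standing in for a real pretrained transformer and a real task dataset.

import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    """Stand-in for a pretrained transformer language model."""
    def __init__(self, vocab_size=1000, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        return self.encoder(self.embed(tokens))   # (batch, seq_len, d_model)

class ClassifierHead(nn.Module):
    """Small task-specific head: a linear layer over the final position."""
    def __init__(self, d_model, n_classes):
        super().__init__()
        self.proj = nn.Linear(d_model, n_classes)

    def forward(self, hidden):
        return self.proj(hidden[:, -1, :])        # use the last token's state

lm = TinyTransformerLM()                  # pretend this holds pretrained weights
head = ClassifierHead(d_model=128, n_classes=2)
params = list(lm.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=6.25e-5)        # small fine-tuning learning rate

tokens = torch.randint(0, 1000, (8, 32))  # dummy batch of token ids
labels = torch.randint(0, 2, (8,))        # dummy task labels
logits = head(lm(tokens))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
opt.step()

In this style of transfer the pretrained weights are typically updated along with the new head rather than frozen, which is why the optimizer above covers both sets of parameters.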
by jeremyphoward on 2018-06-11 (UTC).

This is exactly where we were hoping our ULMFiT work would head - really great work from @OpenAI! 😊

If you're doing NLP and haven't tried language model transfer learning yet, then jump in now, because it's a Really Big Deal. https://t.co/0Dj8ChCxvu

— Jeremy Howard (@jeremyphoward) June 11, 2018
nlp
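
The "language model" half of language model transfer learning is plain next-token prediction: train a model to predict token t+1 from tokens 1..t over a large corpus, then reuse its weights downstream. A minimal sketch of that objective, with a toy stand-in model and random data in place of a real transformer and corpus:

import torch
import torch.nn as nn

vocab_size, d_model = 1000, 128
model = nn.Sequential(                        # stand-in for a real transformer LM
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)

tokens = torch.randint(0, vocab_size, (8, 33))    # dummy batch, seq_len + 1
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict each next token
logits = model(inputs)                            # (batch, seq_len, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()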
by Smerity on 2018-06-11 (UTC).

Great work! Language models serving as the basis for transfer learning in task-agnostic NLP is really going to feed the next generation of tools and transformations =]

— Smerity (@Smerity) June 11, 2018
nlp
by Thom_Wolf on 2018-06-14 (UTC).

I made a @pytorch implementation of @openai's pretrained transformer with a script to import OpenAI's pretrained model.

Link: https://t.co/6zY8NavPA3

Thanks @AlecRad, @karthik_r_n, @TimSalimans, @ilyasut for open-sourcing the code right away!

— Thomas Wolf (@Thom_Wolf) June 14, 2018
nlp
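
Importing a model released for one framework into another, as the linked script does, generally comes down to loading the released weight arrays and copying them into the matching PyTorch parameters, transposing where the two frameworks lay weights out differently. A rough, hypothetical sketch; the array names and shapes below are illustrative, not the actual layout of OpenAI's release:

import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(768, 768)                       # stand-in for one transformer weight

released = {                                      # pretend these came from np.load(...)
    "w": np.random.randn(768, 768).astype(np.float32),
    "b": np.zeros(768, dtype=np.float32),
}

state_dict = {
    "weight": torch.from_numpy(released["w"]).t(),  # frameworks often differ in
    "bias": torch.from_numpy(released["b"]),        # weight orientation, hence .t()
}
model.load_state_dict(state_dict)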
