by AlecRad on 2019-02-11 (UTC).

The DL CV community is having a "oh wait, bags of local features are a really strong baseline for classification" moment with the BagNet paper.

This has always been clear for text classification due to n-gram baselines. It took an embarrassingly long time for nets to beat them.

— Alec Radford (@AlecRad) February 11, 2019
cv thought misc
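For reference, a minimal sketch of the kind of bag-of-n-grams text-classification baseline the tweet alludes to. The data here is a hypothetical toy corpus and the example assumes scikit-learn; it is only meant to illustrate the "bag of local features" idea, not any particular published baseline.

```python
# Bag-of-n-grams baseline sketch: unigram + bigram counts, linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for a real text-classification dataset (hypothetical).
texts = [
    "great movie, loved it",
    "terrible plot and acting",
    "what a wonderful film",
    "boring and far too long",
]
labels = [1, 0, 1, 0]

# Local features (1- and 2-grams) plus a linear model on top.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
baseline.fit(texts, labels)
print(baseline.predict(["loved the acting"]))
```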
by AlecRad on 2019-02-11 (UTC).

We *are* as a field developing and training models that *are* using more context, but where exactly we are on that trend-line is a great question.

Keep in mind nets are lazy and if you can "solve" a task by doing something "basic" you'll only learn "basic" things.

— Alec Radford (@AlecRad) February 11, 2019
misc
by AlecRad on 2019-02-11 (UTC).

So nets are stubbornly, begrudgingly, moving in the right direction and we're throwing ever larger amounts of compute and data at them and praying it's enough for them to figure out how to do things "the right way".

Will that work?

Don't know. Probably still worth checking?

— Alec Radford (@AlecRad) February 11, 2019
misc
by deliprao on 2019-08-20 (UTC).

Today, OpenAI released GPT-2 774M (English) and Facebook released XLM pre-trained models for 100 languages. Looks like a glut of #NLProc resources for everyone freely accessible. What a wonderful time to live! https://t.co/1OMd3D5xDo https://t.co/lyC2eKvp3J

— Delip Rao (@deliprao) August 20, 2019
nlp tool
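A hedged sketch of loading these released checkpoints with the Hugging Face transformers library. It assumes "gpt2-large" is the 774M English GPT-2 and "xlm-mlm-100-1280" is the 100-language XLM checkpoint; adjust names if your installed version exposes them differently.

```python
# Sketch: load GPT-2 774M and the 100-language XLM checkpoints (assumed names).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers import XLMWithLMHeadModel, XLMTokenizer

# GPT-2 774M (English) -- assumed to be the "gpt2-large" checkpoint.
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2-large")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2-large")

# XLM pre-trained on 100 languages -- assumed checkpoint "xlm-mlm-100-1280".
xlm_tok = XLMTokenizer.from_pretrained("xlm-mlm-100-1280")
xlm = XLMWithLMHeadModel.from_pretrained("xlm-mlm-100-1280")

# Quick sanity check: language-modelling loss of GPT-2 on a short prompt.
inputs = gpt2_tok("What a wonderful time to live!", return_tensors="pt")
with torch.no_grad():
    out = gpt2(**inputs, labels=inputs["input_ids"])
print(float(out.loss))
```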
