by DrLukeOR on 2019-09-19 (UTC).

# AI competitions don't produce useful models.

As promised, here is a new blog post explaining why AI competitions never seem to lead to products, how you can overfit on a hold-out test set, and why ImageNet results since the mid-2010s are suspect. https://t.co/F9H5iicgVq pic.twitter.com/fDUzEYRPDb

— Luke Oakden-Rayner (@DrLukeOR) September 19, 2019
misc
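
The "overfit on a hold-out test set" point in the tweet above refers to adaptive overfitting: when many candidate models are scored against the same fixed test set (as on a public leaderboard) and the best score is kept, that best score tends to overstate true performance. A minimal sketch of the effect, using only NumPy; the sample sizes, submission count, and the seeded random "models" are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a binary task where every "model" is just a seeded
# random labeler, so true accuracy is 50%. We "submit" many of them against
# one fixed public test set and keep the best public score.
n_public, n_private, n_submissions = 2_000, 2_000, 500
y_public = rng.integers(0, 2, n_public)
y_private = rng.integers(0, 2, n_private)

best_public, best_seed = -1.0, None
for seed in range(n_submissions):
    preds = np.random.default_rng(seed).integers(0, 2, n_public)  # random "predictions"
    acc = (preds == y_public).mean()
    if acc > best_public:
        best_public, best_seed = acc, seed

# Re-evaluate the selected "model" on a fresh (private) test set.
private_preds = np.random.default_rng(best_seed).integers(0, 2, n_private)
private_acc = (private_preds == y_private).mean()

print(f"best public accuracy after {n_submissions} submissions: {best_public:.3f}")
print(f"same model on the unseen private set: {private_acc:.3f} (true skill ~0.500)")
```

The selected model looks a few points better than chance on the public set simply because it was chosen for that, while its private-set score falls back to roughly 50%; the counter-argument in the replies below is that with very large test sets (e.g. 400k cases) this selection noise shrinks considerably.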
by antor on 2019-09-19 (UTC).

The test set (stage 1) for this competition is 400k cases.

In the recent Molecular competition I do believe (and want to believe) we contributed to science; it's true that top models separated by a few decimals may come down to luck... But that's different from your assertion of "not useful".

— Andres Torrubia (@antor) September 19, 2019
misc
by jeremyphoward on 2019-09-19 (UTC).

It would be fair to say that in many cases the difference between 1st place and 10th place (for instance) isn't very important - but the insights from most competition results I've studied are fantastic.

— Jeremy Howard (@jeremyphoward) September 19, 2019
misc thought
by xamat on 2019-09-19 (UTC).

I should clarify: there are a bunch of things that are obviously correct in the post. But the headline/takeaway is not.

— Xavier Amatriain (@xamat) September 19, 2019
thought
by tunguz on 2019-09-20 (UTC).

In my experience, most #ML competition results are far more reliable and reproducible than, oh, I don't know, most scientific research. https://t.co/pWBO6clJLP

— Bojan Tunguz (@tunguz) September 20, 2019
misc thought
