In my experience most #ML competition results are far more reliable and reproducible than, oh, I don’t know, most of scientific research. https://t.co/pWBO6clJLP
— Bojan Tunguz (@tunguz) September 20, 2019
Non-CS people: "How many lines of code do you write a day?"
Me: "Um idk minus 200?"
Can we take a moment to appreciate all the relentless deleters who are willing to dig into the mess that is our code and make it concise and readable? pic.twitter.com/0EWW7kv23R
— Chip Huyen (@chipro) September 20, 2019
This may or may not be a soft pseudo-science variant of the Lottery Ticket Hypothesis / @hardmaru et al's Weight Agnostic Neural Networks. Either way it has worked multiple times over multiple datasets for me and the results seem to generalize.
— Smerity (@Smerity) September 19, 2019
Deep learning training tip that I realized I do but never learned from anyone - when tweaking your model for improving gradient flow / speed to converge, keep the exact same random seed (hyperparameters and weight initializations) and only modify the model interactions.
— Smerity (@Smerity) September 19, 2019
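Smerity's tip is essentially a controlled experiment: pin every source of randomness so that two training runs differ only in the architectural tweak you are testing. Here is a minimal sketch of that workflow, assuming PyTorch and a throwaway synthetic regression task; the toy model and training loop are illustrative, not anything from the tweet itself.

```python
import random

import numpy as np
import torch
import torch.nn as nn


def seed_everything(seed: int = 1234) -> None:
    # One seed for Python, NumPy, and PyTorch so weight init and data order match exactly.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


def build_model(hidden: int, extra_layer: bool) -> nn.Module:
    # The only thing allowed to vary between runs: the change under test.
    layers = [nn.Linear(10, hidden), nn.ReLU()]
    if extra_layer:
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers += [nn.Linear(hidden, 1)]
    return nn.Sequential(*layers)


def run(extra_layer: bool, seed: int = 1234) -> float:
    seed_everything(seed)                             # identical starting point for every variant
    x, y = torch.randn(256, 10), torch.randn(256, 1)  # same synthetic data in both runs
    model = build_model(32, extra_layer)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):                              # hyperparameters held fixed across variants
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()


# With the seed pinned, the gap between these two losses reflects the extra
# layer, not a lucky initialization or a different data order.
print("baseline:", run(extra_layer=False))
print("tweaked: ", run(extra_layer=True))
```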
It would be fair to say in many cases the difference between 1st place and 10th place (for instance) isn't very important - but the insights from most competition results I've studied are fantastic
— Jeremy Howard (@jeremyphoward) September 19, 2019
The test set (stage 1) for this competition is 400k cases.
In the recent Molecular competition I do believe (+ want to) we contributed to science, truth that different top models a few decimals across may be luck... But it's different than your assertion of "not useful".
— Andres Torrubia (@antor) September 19, 2019
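A quick back-of-the-envelope check (mine, not the thread's) of how much leaderboard noise a 400k-case test set leaves, assuming an accuracy-style metric near 0.80 and independent test cases; real competitions often score with other metrics, so treat this purely as an order-of-magnitude sketch.

```python
import math

n = 400_000   # stage-1 test set size quoted above
p = 0.80      # assumed accuracy of a strong model (illustrative only)

se = math.sqrt(p * (1 - p) / n)                   # binomial standard error of the score
print(f"standard error ~= {se:.4f}")              # about 0.0006, i.e. 0.06 accuracy points
print(f"95% interval   ~= +/- {1.96 * se:.4f}")   # about 0.0012

# Gaps of a few hundredths of a point between neighbouring ranks are plausibly
# luck; gaps of several tenths of a point are hard to explain by noise alone.
```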
#AI competitions don't produce useful models.
As promised, here is a new blogpost explaining why AI competitions never seem to lead to products, how you can overfit on a held-out test set, and why ImageNet results since the mid-2010s are suspect. https://t.co/F9H5iicgVq pic.twitter.com/fDUzEYRPDb
— Luke Oakden-Rayner (@DrLukeOR) September 19, 2019
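To make the "overfit on a held-out test set" point concrete, here is a toy simulation (mine, not from the blogpost): submit many equally mediocre models, keep whichever looks best on the fixed public split, and the selected model's apparent edge does not survive contact with fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_test, n_fresh, n_submissions = 10_000, 10_000, 200

y_test = rng.integers(0, 2, n_test)     # the fixed public hold-out labels
y_fresh = rng.integers(0, 2, n_fresh)   # data nobody got to peek at


def accuracy(labels, preds):
    return (labels == preds).mean()


best_public, best_fresh = 0.0, 0.0
for _ in range(n_submissions):
    # Every "model" is 70% accurate plus independent noise, so all are equally good in truth.
    preds_test = np.where(rng.random(n_test) < 0.7, y_test, 1 - y_test)
    preds_fresh = np.where(rng.random(n_fresh) < 0.7, y_fresh, 1 - y_fresh)
    if accuracy(y_test, preds_test) > best_public:
        best_public = accuracy(y_test, preds_test)
        best_fresh = accuracy(y_fresh, preds_fresh)

print(f"best public score : {best_public:.4f}")  # creeps above 0.70 as submissions pile up
print(f"same model, fresh : {best_fresh:.4f}")   # stays near 0.70
```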
Stuart Russell: "Social media algorithms learned that the easiest way to predict people's clicks was not to become better at personalization, but to make people more predictable by radicalizing them" 🤯 https://t.co/pxZUZMG5oA
— Xavier Amatriain (@xamat) September 19, 2019
If your product and engineering organizations struggle with red tape and legal roadblocks, machine learning success probably isn't in your future.
— Peter Skomoroch (@peteskomoroch) September 19, 2019
Being a good leader means surrounding yourself with people such that at least one of them would tell you this is batshit crazy! https://t.co/Jhpg5sIxEj
— Mara Averick (@dataandme) September 18, 2019
This is an important legal case. The first I'm aware of that specifically calls out Facebook's optimization engine for discrimination rather than placing the blame on advertisers. https://t.co/zXYITnm3sw
— Cathy O'Neil (@mathbabedotorg) September 18, 2019
Recently @pandeyparul did an interview with me for our @h2oai blog. If you are interested in learning a bit more about my journey into data science, read about it here: https://t.co/npAgkIaF9h #ai #ml #datascience
— Bojan Tunguz (@tunguz) September 18, 2019