When people who have never won a single @Kaggle competition tell me what skills I need to be good at Kaggling. pic.twitter.com/wXs0NSMIEi
— Bojan Tunguz (@tunguz) July 13, 2021
There are three types of researchers: 1. those that only look at the paper's methods and ideas; 2. those that only look at experiments; and 3. those that no longer read papers.
— Dustin Tran (@dustinvtran) June 23, 2021
It’s important to identify the common elements of all the failed #datascience projects you’ve ever worked on so you can work on removing them.
Of course, the biggest one is that they all involved you. #rstats #pydata
— Nihilist Data Scientist (@nihilist_ds) June 20, 2021
ConSelfSTransDRLIB: Contrastive Self-supervised Transformers for Disentangled Representation Learning with Inductive Biases is All you need, and where to find them.
The current state of deep learning research summarized in one sentence.
(Credit: https://t.co/RTuht7Lkj0)
— Sebastian Raschka (@rasbt) March 13, 2021
"Doesn't use cookies."
Clever. #webdesign #webdev
Source: https://t.co/sKAxoXpU5y pic.twitter.com/vE3uIPqCJO
— Randy Olson (@randal_olson) February 10, 2021
Statistical terms: what they really mean
Multicollinearity: they all look the same
Heteroscedasticity: the variation varies
Attenuation: being too modest
Overfitting: too good to be true
Confounding: nothing is what it seems
P-value: it’s complicated
— Maarten van Smeden (@MaartenvSmeden) February 6, 2021
Who made this pic.twitter.com/212bG8vZf0
— Maarten van Smeden (@MaartenvSmeden) February 4, 2021
What do you call a machine learning model that perfectly predicts the training data, but does not work for unseen data?
A database
— Christoph Molnar (@ChristophMolnar) February 1, 2021
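The punchline has a real kernel: a model that memorizes its training set is just a lookup table, and a lookup table cannot generalize. A minimal sketch of that idea, using a toy parity dataset — all names here (`train`, `predict`) are illustrative, not from any real library:

```python
import random

def train(examples):
    """'Training' is just storing every (x, y) pair verbatim -- a database."""
    return dict(examples)

def predict(table, x):
    """Perfect recall on inputs seen during training; a coin flip otherwise."""
    if x in table:
        return table[x]
    return random.choice([0, 1])  # no generalization at all

# Toy task: predict the parity of an integer.
train_set = [(i, i % 2) for i in range(100)]
test_set = [(i, i % 2) for i in range(100, 200)]  # entirely unseen inputs

model = train(train_set)
train_acc = sum(predict(model, x) == y for x, y in train_set) / len(train_set)
test_acc = sum(predict(model, x) == y for x, y in test_set) / len(test_set)
print(train_acc)  # 1.0 -- perfectly predicts the training data
print(test_acc)   # hovers around 0.5 -- pure guessing on unseen data
```

The "database" scores 100% on training data every time, while its test accuracy is no better than chance — which is exactly the distinction the tweet is poking at.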
Targeted advertising for birds 🌀
We’re not so different, you and I 🐤 pic.twitter.com/85xkwvJMVX
— hardmaru (@hardmaru) January 31, 2021
Three rules of thumb for machine learning:
1) be skeptical of good results
2) be skeptical of bad results
3) be skeptical of mediocre results
— Thomas (@evolvingstuff) January 28, 2021
Statisticians: We assume the data are independent.
The data: https://t.co/mBnT535WDa
— Christoph Molnar (@ChristophMolnar) January 25, 2021
Me at work: "Remember to use just enough complexity to get it done."
My side projects: pic.twitter.com/LXIdCMGoUu
— Chris Albon (@chrisalbon) January 21, 2021