My advice to young scientists looking for blue ocean: pay attention to the lazy intuition pumps everyone assumes to be obviously true. Historically, there is usually gold there!
— Micah Allen (@micahgallen) August 29, 2019
Arguably AI/ML research was held up pre-deep learning by the mistaken belief that empirical questions required foundational answers. Now, as AI researchers look beyond supervised learning (causality, robustness, distribution shift), the folly’s turned converse. #feelthelearn
— Zachary Lipton (@zacharylipton) August 24, 2019
Part 1 TLDR. For supervised learning, the "throw spaghetti at the wall" model works. And the preoccupation with using theoretically-guaranteed methods was arguably misplaced. If anything, theory was more useful for insight, not guarantees you'd ever actually use. (5/n)
— Zachary Lipton (@zacharylipton) August 24, 2019
More ambitious problems (causality, robustness, extrapolation beyond support) don't provide the safety net (validation set) that brute force requires (no after-the-fact guarantees). Suddenly, just as we've accepted empiricism, we find ourselves needing foundational theory. (6/n)
— Zachary Lipton (@zacharylipton) August 24, 2019
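Lipton's "safety net" point is concrete enough to sketch in code. Below is a minimal Python example (assuming scikit-learn is available; the dataset and candidate models are illustrative choices, not anything from the thread): throw several off-the-shelf models at the problem with no theoretical justification, and let a held-out validation set supply the after-the-fact guarantee. Nothing comparable exists once the test distribution can shift.

```python
# A minimal "spaghetti at the wall" sketch: fit several models chosen
# with no theory at all, then keep whichever scores best on held-out
# data. The validation set is the after-the-fact guarantee.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Arbitrary candidates -- no foundational reason to prefer any of them.
candidates = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}

# Empiricism: fit everything, score on the held-out split, keep the winner.
scores = {
    name: model.fit(X_train, y_train).score(X_val, y_val)
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)
```

This only works because the validation split is drawn from the same distribution as deployment data; under distribution shift the score above certifies nothing, which is the thread's point.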
The idea that targeted ads don't actually work very well, so all that data collection and personalization is for nothing, is pretty revolutionary. I'd like to see more discussion about that factual claim and its policy consequences. https://t.co/X89hsYd8Dy
— Daphne Keller (@daphnehk) August 21, 2019
FWIW I’ve never met an actual brilliant jerk.
— Angela Bassa (@AngeBassa) August 20, 2019
I’ve certainly met people who thought highly of themselves and believed that excused them from being empathetic and thoughtful in their interactions and communications.
That made them ineffective and incompetent, so... pic.twitter.com/F2flE4VDf9
— Angela Bassa (@AngeBassa) August 20, 2019
Poor meeting AV has probably destroyed more shareholder value than actual poor decision making. No wonder Zoom is worth so much!
— Hilary Mason (@hmason) August 20, 2019
If it doesn't work for everyone, it doesn't actually work at all. @rajiinio pic.twitter.com/C0Ra0ZfvOe
— Rachel Thomas (@math_rachel) August 20, 2019
You don’t. You make money by being useful. AI is just an ingredient. People confuse the ingredient for the main course.
— Delip Rao (@deliprao) August 20, 2019
I wrote the second edition of R Cookbook faster than editors expected because each recipe was a small, tractable task. I wrote for an hour most mornings before the family awoke. Having small, well-defined tasks is a huge productivity hack. https://t.co/Mk0flpZVuT
— JD Long (@CMastication) August 17, 2019
The solution is not to stop building AI. Instead, we need public accountability and oversight of AI development, both to ensure the privacy and rights of users, and the ethical treatment of the *workers* who generate training data.
— Arvind Narayanan (@random_walker) August 13, 2019
The fake-it-till-you-make-it culture of Silicon Valley has no place in domains where people's lives are at stake. How many repeats of Theranos are we going to have? https://t.co/QoxqleWBSy
— Arvind Narayanan (@random_walker) August 9, 2019
HT @Aaroth. pic.twitter.com/Zk8ZvP9w73
— Arvind Narayanan (@random_walker) August 9, 2019