Pro tip for coping with imposter syndrome:
Remember no one knows what they’re doing.
Literally no one.
— Brandon Rohrer (@_brohrer_) October 4, 2019
Having an understanding of the social sciences is more important than the “hard” sciences for advancing the state of machine learning research.
— hardmaru (@hardmaru) October 2, 2019
A big benefit of finishing projects is that finishing them often tells you with clarity what you should _really_ have been working on. But you can't get to that insight without finishing. Call it the sunk cost paradox 😀 https://t.co/WD7JgoXNwR
— michael_nielsen (@michael_nielsen) September 30, 2019
Apparently open science is, despite all appearances to the contrary, not helpful to diversity, advancing scientific progress, or improving scientific discourse.
— Jeremy Howard (@jeremyphoward) September 30, 2019
It turns out it's purely self-serving.
Nice to have that finally sorted. https://t.co/d7COrFh25T
I have a theory that data teams have the largest span of productivity (like 10x-100x between best and worst companies) and most of that breaks down into what tools they use
— Erik Bernhardsson (@fulhack) September 27, 2019
The conventional wisdom is to train, deploy, and iterate on ML models in production as often as possible. In practice, this might be a bad idea if your model training & deployment is a manual, human-intensive process. Invest in ML Ops automation before you commit to a fast pace.
— Peter Skomoroch (@peteskomoroch) September 24, 2019
The problem with metrics is a big problem for AI
- Most AI approaches optimize metrics
- Any metric is just a proxy
- Metrics can, and will, be gamed
- Metrics overemphasize short-term concerns
- Online metrics are gathered in highly addictive environments https://t.co/k0J5ksw91Q pic.twitter.com/yGLUV2T2u3
— Rachel Thomas (@math_rachel) September 24, 2019
The best ideas & products were rarely good at first; they come from countless cycles of iterative refinement.
— François Chollet (@fchollet) September 24, 2019
A version of this is true for people: the more willing we are to reevaluate our views and admit mistakes, the faster we learn and the further we can improve. Iterate :)
NO. Privacy is already for the privileged. This will create a further divide as low-income individuals sell their data to make ends meet. https://t.co/gymHINw80P
— Rumman Chowdhury (@ruchowdh) September 23, 2019
I think it's possible to create a social network where the interaction modalities are such that it won't immediately degenerate into extreme toxicity.
— François Chollet (@fchollet) September 23, 2019
Empathy is as much a part of human nature as anger or jealousy. But public, anonymous reply buttons only encourage the latter.
The last ten years of data science have been about collecting as much data as possible. The next ten will be about collecting very little.
— Vicki Boykis (@vboykis) September 23, 2019
The big driving factors are:
1. Public/media sentiment and
2. CCPA
Lots more detail on why I think so, here: https://t.co/lHhBDK24yM
Those hoping to make positive social change have to convince people both that something needs changing & that there is a constructive, reasonable way to do it.
— Rachel Thomas (@math_rachel) September 23, 2019
Authoritarians merely have to confuse & weaken trust in general so that everyone is too fractured & paralyzed to act. pic.twitter.com/e69Fj52g6m