Maturity as a programmer is being happy the problem was something dumb rather than being something hard.
— Hilary Mason (@hmason) March 3, 2021
The secret to good writing is to publish it before you get bored of it. pic.twitter.com/zN4XTS8prm
— Chip Huyen (@chipro) March 3, 2021
ai: automatically aggregating the knowledge of the internet https://t.co/6PD0AQvQCp
— Janelle Shane (@JanelleCShane) February 28, 2021
Just stumbled upon this nice repo listing challenges and contests suitable for applying machine learning/deep learning. 33 CVPR 2021-related challenges were just added. Something interesting to consider for student projects and homework https://t.co/KPviKe22Db
— Sebastian Raschka (@rasbt) February 27, 2021
This is beyond concerning - it's totally inappropriate. It structurally undermines the integrity of research. https://t.co/BYRepnqY5z
— Kate Crawford (@katecrawford) February 25, 2021
✅ feeling lost 98% of the time and not knowing if your approach makes sense
✅ identifying and trusting good advice among so much noise
✅ finding the time to study when being a parent, a student, an employee
✅ learning how to pick projects to work on for self-study
— Radek Osmulski (@radekosmulski) February 24, 2021
This is the AutoML debate all over again. No, you can't replace your data scientists with AutoML code -- what are you going to do when it doesn't work?
— Hilary Mason (@hmason) February 22, 2021
Same for prompt engineering vs ML engineering. If you're building these systems you need to understand them. https://t.co/nj1kQTetiw
For best results, fall in love with the process, not the result
— François Chollet (@fchollet) February 22, 2021
8 reasons machine learning projects fail - by @elenasamuylova
— Alexey Grigorev (@Al_Grigor) February 21, 2021
🔸 Doing ML for wrong reasons
🔸 ML not needed
🔸 Bad data
🔸 Poor problem framing
🔸 Model ≠ product
🔸 Bad infrastructure
🔸 No trust from stakeholders
🔸 Production failures
Solution? 👉 https://t.co/mvs7sJyxDe pic.twitter.com/poTAzwWT4b
Samy Bengio (a director at Google AI w/ 300 reports, who spoke up for Timnit) is being pushed aside the same week Jeff Dean claims they will start tying exec performance to D&I
— Rachel Thomas (@math_rachel) February 19, 2021
In 2020, Samy hired 39% women, compared to just 14% for the rest of Jeff's org 🤔 https://t.co/ydb4D4P49t
Interesting analysis by @mhmazur. Human work is driven by clear goals and is informed by task-specific context. A model that is optimized for generating plausible-sounding text, ignoring goals and context, virtually never produces any useful answer (unless by random chance). https://t.co/QPzapZgale
— François Chollet (@fchollet) February 19, 2021
“I helped build ByteDance’s vast censorship machine: I wasn't proud of it, and neither were my coworkers. But that's life in today's China.” This article by @shenlulushen is worth a read if you are interested in ML ethics in production. A thread by the author: https://t.co/ivSaUwMj6L pic.twitter.com/h8MiMhtavc
— hardmaru (@hardmaru) February 19, 2021