My adamance that “business logic belongs in ETL, not BI” is, fundamentally, the same as “create a metric layer.” And it’s like we’re all figuring out how to do that well as we go along. https://t.co/WUiYFUAwi1
— JD Long (@CMastication) May 14, 2022
“Are there any software engineers that switched into a machine learning role and found it a lot more stressful due to deadlines combined with the uncertainty of research?”
— hardmaru (@hardmaru) May 11, 2022
Discussion: https://t.co/OHZdmj1ly2
I've finally put my finger on why "gradual typing" is often so difficult to implement in established Python packages. The issue is that it runs entirely counter to the "Easier to Ask for Forgiveness than Permission" (EAFP) coding style long advocated in the Python language.
— Jake VanderPlas (@jakevdp) April 25, 2022
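The tension Jake describes can be sketched in a few lines (the `halve` functions are hypothetical, not from his thread): EAFP code relies on an implicit duck-typed contract, which a gradual-typing annotation then has to spell out explicitly, for instance via a `typing.Protocol`.

```python
from typing import Protocol


class SupportsDiv(Protocol):
    """Anything divisible by an int -- the implicit contract
    the EAFP version silently relies on."""

    def __truediv__(self, other: int) -> "SupportsDiv": ...


def halve_eafp(x):
    # EAFP style: just try it. ints, floats, Fractions, and
    # NumPy arrays all work; anything else raises TypeError.
    return x / 2


def halve_typed(x: SupportsDiv) -> SupportsDiv:
    # Gradual typing forces the duck-typed contract to be
    # named up front, which is where retrofitting gets hard.
    return x / 2
```

The runtime behavior of the two functions is identical; only the annotated version asks "permission" by declaring what it accepts.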
Testing is a tool for development velocity and reliability that will spring into action upon changes made by people who aren't you anymore, in situations you cannot predict in advance. Write your tests with this in mind. Defensively, and with great clarity.
— François Chollet (@fchollet) February 28, 2022
another motivational AI tweet drop 💣
— Kyunghyun Cho (@kchonyc) February 15, 2022
prompt engineering is a symptom not a cure and must be treated not encouraged.
Every week I hear about some folks that look at GradCAM (apparently most of medical imaging research) or SHAP as though anyone knows what, if anything, these “explanations” mean and it’s terrifying. Overall, I believe it’s already in “actively harmful” territory.
— Zachary Lipton (@zacharylipton) February 2, 2022
A lot of machine learning research has detached itself from solving real problems, and created their own "benchmark-islands".
— Christoph Molnar (@ChristophMolnar) January 24, 2022
How does this happen? And why are researchers not escaping this pattern?
A thread 🧵
The more we understand the theory of deep learning, all the cushy ML engineering jobs will become mundane data processing jobs (some might argue it already is), and it will lower the competence/training needed to fill those jobs.
— Delip Rao (@deliprao) January 23, 2022
The ongoing consolidation in AI is incredible. Thread: ➡️ When I started ~decade ago vision, speech, natural language, reinforcement learning, etc. were completely separate; You couldn't read papers across areas - the approaches were completely different, often not even ML based.
— Andrej Karpathy (@karpathy) December 8, 2021
Machine learning is the science of figuring out how to organize information. Information cartography, if you will
— François Chollet (@fchollet) November 7, 2021
The law of working on machine learning projects:
— Radek Osmulski (@radekosmulski) November 1, 2021
✅ you are unable to tell if a problem can be solved until you build a baseline
✅ any time estimates you make before building a baseline are fortune-telling
How to share your progress with your mentors/collaborators?
— Jia-Bin Huang (@jbhuang0604) October 27, 2021
Throughout your research project, 99% of the time your approach DOESN'T WORK (yet). 😬
How could we share these "failed results" and have productive conversations with your mentors/collaborators? 👇