Why is the AI Hype Absolutely Bonkers https://t.co/EambFHUP5C
— /MachineLearning (@slashML) March 23, 2020
I'm still a fan of preprints, but I have to admit "The President could retweet a really shaky, non-randomized preprint" is a fairly solid argument pic.twitter.com/DRDhWDV4RW
— David Robinson (@drob) March 22, 2020
All my data lovers:
Playing with a high consequence COVID-19 data set is a thrill.
When you turn it into a plot you tell a story. Unless you’re an epidemiologist you probably won’t get the story right.
What data means matters.
Lives are on the line. Be cautious.
h/t @matplotlib https://t.co/vw7gVD13H8
— Brandon Rohrer (@_brohrer_) March 15, 2020
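A quick illustration of the point about plots telling stories (a minimal sketch with synthetic data, not a real COVID-19 dataset): the same exponential-looking series reads as alarming on a linear axis and as tame on a log axis.

```python
import numpy as np
import matplotlib.pyplot as plt

days = np.arange(30)
cases = 10 * 1.2 ** days  # made-up exponential growth, for illustration only

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(8, 3))

# Same data, two presentations: the axis choice shapes the story.
ax_lin.plot(days, cases)
ax_lin.set_title("Linear scale")

ax_log.plot(days, cases)
ax_log.set_yscale("log")
ax_log.set_title("Log scale")

for ax in (ax_lin, ax_log):
    ax.set_xlabel("Day")
    ax.set_ylabel("Cases")

fig.tight_layout()
plt.show()
```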
Perspective comes from zooming out. Insight comes from zooming in. They’re both enhanced by zoning out for a while.
— Jason Fried (@jasonfried) March 4, 2020
DL is applicable when you're doing *pattern recognition*: when you have data that lies on a smooth manifold, along which samples can be interpolated. And you're going to need a dense sampling of your manifold as training data in order to fit a parametric approximation of it
— François Chollet (@fchollet) March 3, 2020
Using deep learning as a universal & magical hammer you reach out to every time is actually the opposite of being creative
— François Chollet (@fchollet) March 3, 2020
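A minimal sketch of the first tweet's claim (not Chollet's own example; the sine curve, scikit-learn's MLPRegressor, and all settings are illustrative assumptions): a small network densely fit to a smooth 1-D curve interpolates well between training samples, but breaks down off the sampled range.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Dense sampling of a smooth 1-D "manifold": y = sin(x) on [0, 2*pi].
x_train = rng.uniform(0, 2 * np.pi, size=(2000, 1))
y_train = np.sin(x_train).ravel()

# Fit a parametric approximation of the manifold.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

# Interpolation: test points between training samples are approximated well.
x_interp = np.linspace(0.5, 5.5, 50).reshape(-1, 1)
print("interpolation error:",
      np.abs(model.predict(x_interp) - np.sin(x_interp).ravel()).max())

# Extrapolation: off the densely sampled range, the fit falls apart.
x_extrap = np.linspace(3 * np.pi, 4 * np.pi, 50).reshape(-1, 1)
print("extrapolation error:",
      np.abs(model.predict(x_extrap) - np.sin(x_extrap).ravel()).max())
```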
Article about @a16z's take on the many “hidden costs” of deploying current deep learning techniques in the real world.
Discussion points include huge cloud compute bills, model iteration, training costs (humans), costs of handling data and humans in the loop. https://t.co/I3VosNWgdo
— hardmaru (@hardmaru) February 25, 2020
It's not the framework. It's not the hardware. Whoever owns the data pipeline will own the production pipeline for machine learning.
— Chip Huyen (@chipro) February 25, 2020
The growing popularity of JAX could be a disaster for @NvidiaAI unless they fix the poor support for GPUs in XLA.
Currently JAX only runs well on Google's TPUs. https://t.co/NQHxqVAGR0
— Jeremy Howard (@jeremyphoward) February 23, 2020
I prefer something more like this: https://t.co/TLUcVsGfN2 https://t.co/Sn9IFmAS7g
— hardmaru (@hardmaru) February 23, 2020
Reminds me of this comment: pic.twitter.com/z2AqWD4l33
— hardmaru (@hardmaru) February 23, 2020
A key design principle I follow in libraries (e.g. Keras) is "progressive disclosure of complexity". Make it easy to get started, yet make it possible to handle arbitrarily flexible use cases, only requiring incremental learning at each step.
Like zooming in a complex landscape. pic.twitter.com/AzaySJeTMP
— François Chollet (@fchollet) February 22, 2020
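Keras's own API tiers trace this progression nicely (the toy model below is a made-up example; the three APIs themselves are real): each tier adds flexibility without invalidating what you learned at the previous one.

```python
from tensorflow import keras

# Tier 1: Sequential -- the easiest entry point, for a plain stack of layers.
simple = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])

# Tier 2: the functional API -- same layers, but arbitrary graphs
# (multiple inputs/outputs, shared layers) become expressible.
inputs = keras.Input(shape=(32,))
x = keras.layers.Dense(64, activation="relu")(inputs)
outputs = keras.layers.Dense(10)(x)
functional = keras.Model(inputs, outputs)

# Tier 3: subclassing -- full control over the forward pass for
# arbitrarily flexible use cases.
class MyModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = keras.layers.Dense(64, activation="relu")
        self.dense2 = keras.layers.Dense(10)

    def call(self, inputs):
        return self.dense2(self.dense1(inputs))

subclassed = MyModel()
```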