Here’s a cool instance of the Banach fixed-point theorem pic.twitter.com/jRaZhyiNcM
— Fermat's Library (@fermatslibrary) February 13, 2019
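The image in the tweet isn't reproduced here, so the specific instance it showed is unknown. As a stand-in, here is a minimal sketch of the theorem's most familiar consequence: iterating a contraction mapping converges to its unique fixed point. The cos(x) = x example below is my choice of illustration, not necessarily the one in the tweet.

```python
# Banach fixed-point theorem in one line of intuition: a contraction mapping on a
# complete metric space has exactly one fixed point, and iterating the map from
# any starting point converges to it.
import math

def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = f(x_n) until successive values are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos maps [cos(1), 1] into itself and is a contraction there (|cos'| <= sin(1) < 1),
# so the iteration converges to the unique solution of cos(x) = x (~0.739085).
print(fixed_point(math.cos, 0.5))
```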
Basically, you have two kinds of problems: yourself, and the competition. The only problem you need to care about is the first one. If you solve it, the competition is nothing to worry about. If you don't, game over.
— François Chollet (@fchollet) February 13, 2019
I wrote a piece about the multiple notions of "version" when building data-driven applications and the challenges of keeping it all straight when working with continuous delivery principles. https://t.co/nnTaLMvrKb
— Emily G, radical and feminist (@EmilyGorcenski) February 12, 2019
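A hedged sketch of the underlying idea: a deployed data-driven system carries several independent "versions" at once. The specific axes below are illustrative assumptions on my part, not taken from the linked article.

```python
# Hypothetical illustration: one release of a data-driven application pins
# several independently moving versions, each of which can change on its own
# schedule under continuous delivery.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentVersion:
    code: str      # application / pipeline code, e.g. a git SHA
    model: str     # trained model artifact identifier
    data: str      # snapshot of the data the model was fit on
    schema: str    # version of the feature / input schema the model expects

release = DeploymentVersion(code="a1b2c3d", model="churn-2019-02-10",
                            data="events-2019-02-01", schema="v3")
print(release)
```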
Well, Facebook uses ResNet-like ConvNets for all its image recognition (2 to 3 billion photos per day for the Blue Site alone, processed by a handful of ConvNets). The ConvNets are pretrained on billions of Instagram images to predict hashtags, then fine-tuned.
— Yann LeCun (@ylecun) February 12, 2019
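A minimal PyTorch sketch of the pretrain-then-fine-tune recipe described above, not Facebook's actual pipeline: the hashtag-pretraining stage needs Instagram-scale data, so an ImageNet-pretrained ResNet stands in for the pretrained trunk, and the downstream class count and learning rates are assumptions.

```python
# Fine-tuning a pretrained ResNet on a new classification task: swap the head,
# then train the trunk with a small learning rate and the new head with a larger one.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 20  # hypothetical downstream label set

model = models.resnet50(weights="IMAGENET1K_V1")  # stand-in for hashtag-pretrained weights
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)  # new task head

optimizer = torch.optim.SGD([
    {"params": (p for n, p in model.named_parameters() if not n.startswith("fc")), "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-2},
], momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
```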
Data gathered from individuals in Southeast Asia where the #hepatitis B virus is endemic allowed @astar_research scientists to construct a timeline of viral HBV infection status. In @SciImmunology: https://t.co/mw680y8OQI #HepB pic.twitter.com/2A87TCDzD3
— Science Magazine (@sciencemagazine) February 11, 2019
Michael Jordan: "But this is not the classical case of the public not understanding the scientists - here the scientists are often as befuddled as the public." Indeed. Realistic assessment of the state of AI/DL today: https://t.co/DzZ8Ql3ILU @GaryMarcus
— Max Little (@MaxALittle) February 11, 2019
After a few years of deep learning, “unconventional” bag-of-features techniques are back: https://t.co/BHACpooEme
— hardmaru (@hardmaru) February 11, 2019
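A toy sketch of the bag-of-local-features idea behind the linked work (my reading, not the authors' code): classify small patches independently and average their class logits, so no spatial arrangement between patches is used.

```python
# Bag-of-local-features toy model: a patch-sized first convolution means each
# output location sees exactly one patch; averaging the per-location logits
# discards all spatial relationships between patches.
import torch
import torch.nn as nn

class TinyBagOfPatches(nn.Module):
    def __init__(self, patch=9, num_classes=10):
        super().__init__()
        self.patch_classifier = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=patch),      # one patch per output location
            nn.ReLU(),
            nn.Conv2d(64, num_classes, kernel_size=1),  # per-patch class logits
        )

    def forward(self, x):
        logits_per_patch = self.patch_classifier(x)   # (B, C, H', W')
        return logits_per_patch.mean(dim=(2, 3))      # average over patch locations

model = TinyBagOfPatches()
print(model(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 10])
```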
Keep in mind nets are lazy and if you can "solve" a task by doing something "basic" you'll only learn "basic" things. So nets are stubbornly, begrudgingly, moving in the right direction and we're throwing ever larger amounts of compute and data at them and praying it's enough for them to figure out how to do things "the right way".
— Alec Radford (@AlecRad) February 11, 2019
Will that work?
Don't know. Probably still worth checking?
We *are* as a field developing and training models that *are* using more context, but exactly where we are on that trend-line is a great question.
— Alec Radford (@AlecRad) February 11, 2019
A good summary of criticisms of Deep Learning for Vision (https://t.co/TCcC6cxDCq). IMO one overarching issue is that research is done to beat benchmarks and publish papers, rarely “regularized” by real-world problems with data characteristics that may be significantly different.
— Denny Britz (@dennybritz) February 11, 2019
The DL CV community is having an "oh wait, bags of local features are a really strong baseline for classification" moment with the BagNet paper. This has always been clear for text classification due to n-gram baselines. It took an embarrassingly long time for nets to beat them.
— Alec Radford (@AlecRad) February 11, 2019
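For readers outside NLP, a minimal sketch of the kind of n-gram baseline meant here: a bag of word n-grams feeding a linear classifier. The texts and labels below are toy placeholders.

```python
# Classic n-gram baseline: TF-IDF over unigrams and bigrams + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie, loved it", "terrible plot and acting",
         "loved the soundtrack", "acting was terrible"]
labels = [1, 0, 1, 0]  # toy sentiment labels

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word order ignored beyond bigrams
    LogisticRegression(),
)
baseline.fit(texts, labels)
print(baseline.predict(["loved the acting"]))
```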
> "almost there" IS often good enough
That's not really true for hard problems like dialog. Right now all we've got as a community is a bunch of fancy papers that no one uses because they don't check out in practice, and a bunch of hacks taped together in prod with a lot of PR around it.
— Delip Rao (@deliprao) February 11, 2019