I.... worked on a stats masters degree for a year... and... he just... he tweeted it out. https://t.co/UqHAmt1kGp
— Brooke Watson Madubuonwu (@brookLYNevery1) December 30, 2019
“To start a PhD in ML, without insider referral, you need to do work equivalent to half of a PhD.
Hence, in Apr 2019, I decided to dedicate all my time until Jan 2020 to publish in either NeurIPS or ICLR.
If I fail, I would become a JavaScript programmer.”
— @andreas_madsen ‼️ https://t.co/1KkFizexWk
— hardmaru (@hardmaru) December 30, 2019
Some people in ML share the illusion that models expressed symbolically will necessarily/magically generalise better compared to, for example, parametric model families fit on the same data. This belief seems to come from a naive understanding of mathematics 1/5
— Danilo J. Rezende (@DeepSpiker) December 29, 2019
My favorite part here is the distinction between “average” and “not bad,” which have the same mode but convey drastically different information about how much certainty you have about it. https://t.co/GknKDDd8fr
— Sean J. Taylor (@seanjtaylor) December 29, 2019
Science is science writing; science writing is science https://t.co/BQobgNPrZ5
— Andrew Gelman (@StatModeling) December 29, 2019
A new paper has been making the rounds with the intriguing claim that YouTube has a *de-radicalizing* influence. https://t.co/TTtWR0uBgi
Having read the paper, I wanted to call it wrong, but that would give the paper too much credit, because it is not even wrong. Let me explain.
— Arvind Narayanan (@random_walker) December 29, 2019
AutoML + GAN = AutoGAN! AI Can Now Design Better GAN Models Than Humans https://t.co/nd5V4BD7rK via @Synced_Global
— Bojan Tunguz (@tunguz) December 28, 2019
Most read of 2019: Pricing algorithms learned to collude with each other… without being instructed to do so. https://t.co/Og37R7hwBd
— MIT Technology Review (@techreview) December 28, 2019
Tech platform companies subvert democratic processes in many ways. Here’s an insidious one that hasn’t gotten much attention: using the power and reach of the platform to misinform users or workers into campaigning to block proposed regulation, often against their self interest.
— Arvind Narayanan (@random_walker) December 27, 2019
Here's my capsule review of Cloud Run / Cloud SQL / Cloud Build, from a Heroku stan:
👍🏻 native Docker, no weirdness
👍🏻 don't pay for idle time!
👍🏻 cool access control, traffic options
👎🏻 the build/release/deploy pipeline is complex af
👎🏻 docs are terrible
👎🏻 cli ux is worse
— jacobian (@jacobian) December 26, 2019
Good summary of ML/DL/AI in 2019: Farewell to a landmark year; lang. models get literate; face rec. meets resistance; driverless cars stall; deepfakes go mainstream; simulation subst. for data; the rule-based (symbolist) vs neurons (connectionist) debate https://t.co/wESoYJxTKl
— Sebastian Raschka (@rasbt) December 26, 2019
In 1984, a panel at the AAAI conference discussed whether the field was approaching an "AI Winter" and what could be done about it. It's uncanny how much it reads like discussions being had in 2019.
My favorite quotes: https://t.co/Ag6Uali14o
— Eric Jang 🇺🇸🇹🇼 (@ericjang11) December 26, 2019