I'm rather blown away by these beautiful SGD visualizations from inspiring @fastdotai participant @ideami - thank you for sharing! :) https://t.co/E6ctOuhA6i pic.twitter.com/ZcvGKiPdSC
— Jeremy Howard (@jeremyphoward) February 22, 2019
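If you're curious what goes into a plot like that, here is a minimal, hypothetical sketch (my own, not @ideami's code): it runs noisy gradient descent on a toy 2D loss surface and overlays the trajectory on a contour plot.

```python
# Minimal sketch: noisy gradient descent on a toy 2D loss surface,
# with the optimization path drawn over a contour plot.
import numpy as np
import matplotlib.pyplot as plt

def loss(w1, w2):
    # Toy non-convex surface: a quadratic bowl plus a ripple term.
    return 0.5 * (w1 ** 2 + 2 * w2 ** 2) + 0.3 * np.sin(3 * w1) * np.cos(3 * w2)

def grad(w):
    w1, w2 = w
    dw1 = w1 + 0.9 * np.cos(3 * w1) * np.cos(3 * w2)
    dw2 = 2 * w2 - 0.9 * np.sin(3 * w1) * np.sin(3 * w2)
    return np.array([dw1, dw2])

rng = np.random.default_rng(0)
w, lr, path = np.array([2.5, 2.0]), 0.05, []
for _ in range(200):
    path.append(w.copy())
    # Gradient step plus Gaussian noise, mimicking minibatch variance.
    w = w - lr * (grad(w) + rng.normal(scale=0.2, size=2))
path = np.array(path)

xs = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(xs, xs)
plt.contourf(X, Y, loss(X, Y), levels=40, cmap="viridis")
plt.plot(path[:, 0], path[:, 1], "w.-", markersize=2, label="SGD path")
plt.legend()
plt.savefig("sgd_path.png")
```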
New statistics term. "Box-hacking" is when people use "all models are wrong" as a way of ignoring how badly their procedures behave under model misspecification rather than focusing on it.
— Danielle Navarro (@djnavarro) February 21, 2019
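The point lands harder with a concrete case. A hedged toy example (mine, not from the tweet): fit an ordinary least-squares line and use its textbook standard errors while the true noise is heteroskedastic; the nominal 95% confidence intervals then cover the true slope noticeably less often than advertised.

```python
# Toy illustration of behaviour under model misspecification:
# classic homoskedastic OLS intervals applied to heteroskedastic data.
import numpy as np

rng = np.random.default_rng(0)
true_slope, n, n_sims, covered = 2.0, 50, 2000, 0

for _ in range(n_sims):
    x = rng.uniform(0, 3, n)
    # Misspecification: error scale grows with x, but the analysis below
    # treats the errors as i.i.d. homoskedastic normal.
    y = true_slope * x + rng.normal(scale=0.5 + x ** 2, size=n)

    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)                      # textbook variance estimate
    se_slope = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    lo, hi = beta[1] - 1.96 * se_slope, beta[1] + 1.96 * se_slope
    covered += (lo <= true_slope <= hi)

print(f"Nominal 95% CI, simulated coverage: {covered / n_sims:.1%}")
```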
Timely new paper from @geoffreyirving and @AmandaAskell of @OpenAI: "AI Safety Needs Social Scientists".
(If you're a social scientist interested in this topic, talk to them, because they're hiring!) https://t.co/t53Q3HQqnq pic.twitter.com/gfczb0h0dS
— Jeremy Howard (@jeremyphoward) February 19, 2019
Can “citizen scientists” do real science without costly laboratory equipment? Kaggle's Dr. Paul Mooney explores a low-cost approach to solving biological questions for the aspiring amateur scientist. [READ] https://t.co/CUyTBW3Ve4 via @TDataScience pic.twitter.com/iYg1N7j18d
— Kaggle (@kaggle) February 19, 2019
Facebook’s chief AI scientist @ylecun: Deep learning may need a new programming language https://t.co/3DQbjgG2PN
— Jeremy Howard (@jeremyphoward) February 19, 2019
Thinking about this is a neat trick you can use when reading papers.
1. Ask yourself: What conclusions have you drawn?
2. What are possible alternative narratives the author could’ve chosen?
3. If the author had chosen a different narrative, would your conclusion be different? https://t.co/kbWgNMXgbp
— Denny Britz (@dennybritz) February 19, 2019
Reading Google's white paper on disinformation (https://t.co/aTBXlnQmDa), I'm reminded of a hypothesis we put forward last year re: the future of tech platforms amidst novel risks (attached). Arguably companies/platforms like Google are more prepared than others for stuff like GPT-2. pic.twitter.com/yDpkTE7dkw
— Miles Brundage (@Miles_Brundage) February 18, 2019
That’s both good and bad. It gives people a lot of freedom, but I feel like sometimes people are wasting their time on stuff that doesn’t matter - but they don’t realize it because they’ve never seen the real world.
— Denny Britz (@dennybritz) February 18, 2019
Similar, but from faces (h/t @goodfellow_ian): https://t.co/jdgZDxAbss
— Oriol Vinyals (@OriolVinyalsML) February 17, 2019
***OpenAI Trains Language Model, Mass Hysteria Ensues*** New post on Approximately Correct digesting the code release debate and media shitstorm. https://t.co/19Gbzxi6EW
— Zachary Lipton (@zacharylipton) February 17, 2019
“Machine-learning techniques used by scientists to analyse data are producing results that are misleading and often completely wrong, […] identifying patterns that exist only in that data set and not the real world.” 🔥 https://t.co/ZmnVHpqpjE
— Denny Britz (@dennybritz) February 17, 2019
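One well-known way this happens is information leakage during feature selection. A minimal sketch of that failure mode (my construction, not from the article): on pure noise, selecting "informative" features on the full dataset before cross-validation makes the data look predictive, while doing the selection inside each training fold reveals chance-level accuracy.

```python
# Pure-noise data looks "predictive" when feature selection leaks across folds.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))     # 100 samples, 5000 pure-noise features
y = rng.integers(0, 2, size=100)     # labels independent of X

# Wrong: feature selection sees all labels, so the test folds leak in.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=5)

# Right: selection happens only inside each training fold.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5)

print(f"leaky CV accuracy:  {leaky.mean():.2f}")   # often well above chance
print(f"honest CV accuracy: {honest.mean():.2f}")  # close to 0.5, i.e. chance
```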
Perhaps what's *most remarkable* about the @OpenAI controversy is how *unremarkable* the technology is. Despite their outsize attention & budget, the research itself is perfectly ordinary—right in the main branch of deep learning NLP research https://t.co/bmMkkL3KKJ
— Zachary Lipton (@zacharylipton) February 17, 2019