Nice bit of academic sociology in the BERT slides. pic.twitter.com/4wmoibM02t
— Sam Bowman (@sleepinyourhat) January 28, 2019
"A BERT Baseline for the Natural Questions," Alberti et al.: https://t.co/FGU9b9xpwf
— Miles Brundage (@Miles_Brundage) January 28, 2019
"For me, as a Starcraft fan, I would appreciate the DeepMind team being a bit more open about the challenges they are facing as well as the corners they are cutting and how they plan to tackle them in the next demo" Insanely thorough analysis of AlphaStar: https://t.co/UaUA6Lgok7
— Zachary Lipton (@zacharylipton) January 27, 2019
“The way how Deepmind beat the human pros was in direct contradiction to what their mission statement was and what they repeatedly claimed to be the “right way” to do it.” 🔥🔥🔥 https://t.co/xqKn8w01A8
— Denny Britz (@dennybritz) January 27, 2019
Interesting post on AlphaStar's APM and superhuman actions. pic.twitter.com/KrzSZf73en
Pytorch implementation of RA-retrofit https://t.co/iwaBgQs4tZ #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) January 27, 2019
This flew under my radar! Interesting paper by @Stanislavfort & @ascherlis https://t.co/FaYaL8Agwk extending and explaining the findings of @ChunyuanLi @mimosavvy and @jasonyo on the "intrinsic dimension" of loss hypersurfaces of neural networks https://t.co/8C0six3IN5
— andrea panizza (@unsorsodicorda) January 25, 2019
Why doesn’t RL use more toy tasks to measure advances in specific aspects of a problem like long term planning, large action spaces, imperfect information, etc?
— Denny Britz (@dennybritz) January 25, 2019
Complex environments such as Starcraft are impressive but make it difficult to disentangle *why* an agent wins.
This video shows @LiquidTLO playing more like a machine than a human 🤖 https://t.co/NtpAK7E78y
— hardmaru (@hardmaru) January 25, 2019
Yes very much so. Well spotted. There are "Group Equivariant Convolutional Networks" and other related papers in this area, although there's still much work to be done. https://t.co/aTQkwZzD4m https://t.co/ZmXDFjMU3Z
— Jeremy Howard (@jeremyphoward) January 24, 2019
Only a small fraction of #Twitter users engaged with #FakeNews sources during the presidential election of 2016. New research of registered voters on Twitter reveals that those who did had something in common. Read the research: https://t.co/cO355RFYmY pic.twitter.com/3AO4Z55NfJ
— Science Magazine (@sciencemagazine) January 24, 2019
"How do Mixture Density RNNs Predict the Future?," Ellefsen et al.: https://t.co/1FAguEZuRo
— Miles Brundage (@Miles_Brundage) January 24, 2019
"Dynamic Transfer Learning for Named Entity Recognition" - with direct healthcare application by Amazon folks https://t.co/5L2d6Op4GT - interesting proposal to use Dynamic Transfer Network for architecture search
— Xavier🎗🤖🏃 (@xamat) January 24, 2019