Setting the record straight on eggs
https://t.co/I0JMDKR8kp by @dailyzad via @EdwardTufte
https://t.co/YNjpReh7FK @PostOpinions by @jacques_pepin pic.twitter.com/poW3VrQGwj
— Eric Topol (@EricTopol) March 24, 2019
My PhD thesis Neural Transfer Learning for Natural Language Processing is now online. It includes a general review of transfer learning in NLP as well as new material that I hope will be useful to some. https://t.co/PxfVRYuyjx pic.twitter.com/XrYYYHx1ln
— Sebastian Ruder (@seb_ruder) March 23, 2019
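The thesis covers sequential transfer learning among other settings. As a rough illustration of that core recipe (not code from the thesis), here is a minimal PyTorch sketch of reusing a pretrained encoder for a new task; the encoder and dimensions are made-up placeholders:

```python
# A minimal sketch of sequential transfer learning, assuming a generic
# pretrained encoder; the encoder below is a hypothetical stand-in.
import torch
import torch.nn as nn

class TransferClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                          # pretrained on a source task
        self.head = nn.Linear(hidden_dim, num_classes)  # new, target-task head

    def forward(self, x):
        return self.head(self.encoder(x))

# Stand-in for a real pretrained encoder (e.g. a language model's body).
encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())
model = TransferClassifier(encoder, hidden_dim=128, num_classes=2)

# Feature extraction: freeze the pretrained weights and train only the head;
# full fine-tuning would skip the freezing and optimize everything.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```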
Great work from Brain ZRH team on putting label-hungry GANs on a diet :)
Want to train similar GANs easily? Check out the new release of our compare_gan library! https://t.co/iwnSqznb3J https://t.co/3Xl59GFc1A
— Karol Kurach (@karol_kurach) March 20, 2019
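compare_gan is a TensorFlow library; rather than guess at its exact API, here is a generic sketch of the alternating generator/discriminator update it automates, with made-up model sizes and hyperparameters:

```python
# Generic sketch of the alternating GAN update; plain PyTorch, not the
# compare_gan API. Models, sizes, and learning rates are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784))  # generator
D = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):                       # real: (batch, 784) tensor
    batch = real.size(0)
    fake = G(torch.randn(batch, 64))

    # Discriminator step: push real toward 1, fake toward 0 (fake detached).
    loss_d = (bce(D(real), torch.ones(batch, 1))
              + bce(D(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: non-saturating loss, try to make D call fakes real.
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```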
A call to retire statistical significance: when > 800 scientists say enough is enough, "The tool has become the tyrant" https://t.co/FIKB3V5h6d
by @vamrhein @Lester_Domes and a long list, with this week's @Nature editorial https://t.co/H4IUcAwmCM pic.twitter.com/RlarATcTao
— Eric Topol (@EricTopol) March 20, 2019
“Retire Statistical Significance”: The discussion. https://t.co/0aaT4F0EG6
— Andrew Gelman (@StatModeling) March 20, 2019
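One concrete version of the argument: a hard p < 0.05 threshold declares two near-identical results to be in conflict. A small illustration with made-up numbers (any similar effect sizes and standard errors would do):

```python
# Two hypothetical studies with the same estimated effect: one lands just
# under p = 0.05, the other just over, yet their intervals nearly coincide.
from scipy import stats

def summarize(effect, se, label):
    p = 2 * stats.norm.sf(abs(effect / se))            # two-sided p-value
    lo, hi = effect - 1.96 * se, effect + 1.96 * se    # 95% confidence interval
    print(f"{label}: effect={effect:.2f}, p={p:.3f}, 95% CI [{lo:.2f}, {hi:.2f}]")

summarize(0.50, 0.24, "Study A")   # p ~ 0.037 -> "significant"
summarize(0.50, 0.27, "Study B")   # p ~ 0.064 -> "not significant"
# Identical point estimates and heavily overlapping intervals: the studies
# agree, even though the significance threshold declares them in conflict.
```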
This is really exciting work! A combination of multi-GPU-accelerated scale, big data, and principled distributional estimation. What's not to love? https://t.co/44eY1R9JY5
— Eric Jang (@ericjang11) March 20, 2019
Blogpost on our new theory for word2vec-like representation learning methods for images, text, etc. Explains why representations do well on previously unseen classification tasks https://t.co/lv5vaJwTHJ Relevant to meta-learning, transfer learning? Paper https://t.co/wkJxIOZRcO
— Sanjeev Arora (@prfsanjeevarora) March 20, 2019
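The evaluation setting the theory speaks to is a linear classifier trained on top of fixed representations, for classes never seen during representation learning. A toy sketch of that protocol, with random stand-ins for the pretrained embeddings:

```python
# Toy version of the protocol: freeze "pretrained" embeddings and train only
# a linear classifier on previously unseen classes. The embeddings here are
# random stand-ins for vectors from a trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, n_per_class = 100, 200

# Pretend these are embeddings for examples of two previously unseen classes.
class_means = rng.normal(size=(2, dim))
X = np.vstack([m + 0.5 * rng.normal(size=(n_per_class, dim)) for m in class_means])
y = np.repeat([0, 1], n_per_class)

probe = LogisticRegression(max_iter=1000).fit(X, y)    # linear probe only
print("linear-probe accuracy:", probe.score(X, y))
```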
IndyLSTMs: Independently Recurrent LSTMs
"the recurrent weights are not modeled as a full matrix, but as a diagonal matrix... consistently outperform regular LSTMs both in terms of accuracy per parameter, and in best accuracy overall" https://t.co/rC8GJVeRjY
— Thomas Lahore (@evolvingstuff) March 20, 2019
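The quoted change is easy to see in code: with a diagonal recurrent matrix, the usual matrix-vector product U @ h becomes an elementwise product u * h, so each hidden unit depends only on its own previous state. A minimal NumPy sketch (shapes and initialization are illustrative, not the paper's exact setup):

```python
# NumPy sketch of an IndyLSTM-style cell: the recurrent weight per gate is a
# vector u (i.e. a diagonal matrix), so U @ h becomes elementwise u * h.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IndyLSTMCell:
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        self.W = rng.uniform(-s, s, (4 * hidden_size, input_size))  # input weights: full matrices
        self.u = rng.uniform(-s, s, (4, hidden_size))               # recurrent weights: one vector per gate
        self.b = np.zeros((4, hidden_size))
        self.H = hidden_size

    def step(self, x, h, c):
        z = (self.W @ x).reshape(4, self.H) + self.u * h + self.b   # u * h replaces U @ h
        i, f, o = sigmoid(z[0]), sigmoid(z[1]), sigmoid(z[2])
        g = np.tanh(z[3])
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

cell = IndyLSTMCell(input_size=10, hidden_size=32)
h = c = np.zeros(32)
for x in np.random.default_rng(1).normal(size=(5, 10)):  # a length-5 sequence
    h, c = cell.step(x, h, c)
```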
Nice paper by @hila_gonen and @yoavgo shows that popular embedding debiasing techniques focus myopically on a specific (linear) definition of bias but only "Put Lipstick on a Pig": the gender association of charged word pairs remains, and is easy to detect with k-means https://t.co/aJIOlJQPbO
— Zachary Lipton (@zacharylipton) March 19, 2019
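The k-means diagnostic is simple to reproduce in spirit: cluster the most gender-associated words in the "debiased" space and check whether the clusters still track gender. A sketch with synthetic vectors standing in for real debiased embeddings:

```python
# Reproducing the diagnostic in spirit: cluster formerly gender-biased words
# in the "debiased" space and check alignment with gender. Synthetic vectors
# stand in for real debiased embeddings of the most-biased words.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
direction = rng.normal(size=300)                  # residual bias direction
male_vecs = rng.normal(size=(500, 300)) + 0.5 * direction
female_vecs = rng.normal(size=(500, 300)) - 0.5 * direction
X = np.vstack([male_vecs, female_vecs])
gender = np.repeat([0, 1], 500)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Cluster labels are arbitrary, so score both assignments and keep the best.
alignment = max(np.mean(labels == gender), np.mean(labels != gender))
print(f"cluster/gender alignment: {alignment:.1%}")  # far above 50% => bias still detectable
```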
"Smart, Deep Copy-Paste," Portenier et al.: https://t.co/AVO5b01Oad pic.twitter.com/4IPqZm2R8F
— Miles Brundage (@Miles_Brundage) March 19, 2019
Looks like a great overview of the area: "Algorithms for Verifying Deep Neural Networks," Liu et al.: https://t.co/xjCb0ZBuUG
— Miles Brundage (@Miles_Brundage) March 19, 2019
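The survey spans many verification algorithms; as a taste of one of the simpler reachability-style ideas, here is a sketch of interval bound propagation, pushing an input box through a toy affine + ReLU network. The network and perturbation budget below are made up:

```python
# Sketch of interval bound propagation: propagate an input box through the
# network and check the resulting output interval. Toy network, made-up values.
import numpy as np

def affine_bounds(lo, hi, W, b):
    # Positive weights take the matching endpoint; negative weights the opposite.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
x, eps = np.array([0.2, -0.1]), 0.05

lo, hi = affine_bounds(x - eps, x + eps, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)
print("certified output interval:", lo, hi)  # a property holds if it holds on this interval
```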
Microsoft researchers have released MT-DNN—a Multi-Task Deep Neural Network model for learning universal language embeddings. See how language embeddings learned by MT-DNN are substantially more universal than those of BERT and explore the MT-DNN package: https://t.co/ZTDc76CntE
— Microsoft Research (@MSFTResearch) March 18, 2019
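The architectural idea behind MT-DNN is a shared encoder with task-specific heads trained jointly across tasks. This is a generic sketch of that pattern, not the MT-DNN package's actual API; the encoder, dimensions, and task names are placeholders:

```python
# Generic sketch of the multi-task pattern MT-DNN uses: one shared encoder
# with task-specific heads trained jointly. Placeholder modules throughout.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, task_classes, input_dim=300, hidden=128):
        super().__init__()
        # Shared body (a BERT-style encoder in MT-DNN itself).
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_classes.items()})

    def forward(self, x, task):
        return self.heads[task](self.encoder(x))   # shared features, per-task head

model = MultiTaskModel({"nli": 3, "sentiment": 2})
x = torch.randn(8, 300)                 # stand-in for encoded sentences
logits = model(x, task="nli")           # the same encoder serves every task
```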