A list of papers on BERT compression (hat tip to @tianchezhao) https://t.co/WgQjjQPubo
— Leonid Boytsov (@srchvrs) November 24, 2019
Have you, like me, had a vague sense that work was happening to make model uncertainty estimates better calibrated, but not known where the field stands in detail? This paper - which broadly tests different uncertainty methods - is for you! #mlwritingmonth https://t.co/dNagcwvCpb
— Cody Wild (@decodyng) November 14, 2019
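For a concrete sense of what "calibrated" means here: a standard metric is Expected Calibration Error, which bins predictions by confidence and measures the gap between stated confidence and actual accuracy. A minimal NumPy sketch (the toy data is illustrative; the paper's exact evaluation protocol may differ):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the
    per-bin gap between mean confidence and mean accuracy,
    weighted by how many predictions fall in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy usage: max-softmax confidences and 0/1 correctness flags.
conf = np.array([0.9, 0.8, 0.95, 0.6, 0.7])
correct = np.array([1, 1, 0, 1, 0])
print(expected_calibration_error(conf, correct))
```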
New blog post: Unsupervised cross-lingual representation learning
An overview of learning cross-lingual representations without supervision, from the word level to deep multilingual models. Based on our ACL 2019 tutorial. https://t.co/z0kktVNu8m pic.twitter.com/HcCcB5sDEb
— Sebastian Ruder (@seb_ruder) November 6, 2019
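One building block in this literature is mapping monolingual word embeddings into a shared space; a classic solution (given a seed dictionary) is orthogonal Procrustes. A minimal sketch, with random matrices standing in for real embeddings and no claim that this is the tutorial's own code:

```python
import numpy as np

def procrustes_align(X, Y):
    """Solve min_W ||X @ W - Y||_F over orthogonal W.
    X, Y: (n, d) source/target vectors for a seed dictionary.
    Solution: W = U @ Vt, where X.T @ Y = U @ S @ Vt."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy check: recover a known rotation between two "embedding spaces".
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
W_true, _ = np.linalg.qr(rng.normal(size=(50, 50)))
Y = X @ W_true
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y))  # True: the mapping is recovered
```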
In our newest paper we discuss the frontier of simulation-based inference (aka likelihood-free inference) for a broad audience. We identify three main forces driving the frontier: #ML, active learning, and the integration of autodiff and probprog. https://t.co/R6vMUAnaul pic.twitter.com/ZOmCWcNSCl
— Kyle Cranmer (@KyleCranmer) November 6, 2019
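To ground what "likelihood-free" means: the classic baseline is rejection ABC, which keeps prior draws whose simulated data lands near the observation, sidestepping the likelihood entirely. A toy sketch (the Gaussian problem and names are illustrative, not from the paper):

```python
import numpy as np

def abc_rejection(simulator, prior_sample, observed, eps, n_draws=10_000):
    """Rejection ABC: sample theta from the prior, simulate data,
    and accept theta when the simulation falls within eps of the
    observed summary statistic."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulator(theta) - observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy problem: infer the mean of a Gaussian from its sample mean.
rng = np.random.default_rng(0)
simulator = lambda mu: rng.normal(mu, 1.0, size=50).mean()
prior_sample = lambda: rng.uniform(-5, 5)
posterior = abc_rejection(simulator, prior_sample, observed=1.3, eps=0.1)
print(posterior.mean(), posterior.std())
```

The forces the paper names push past exactly this kind of brute-force sampler: ML surrogates, active learning to choose simulations, and autodiff/probprog to exploit simulator internals.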
Understanding UMAP - an interactive introduction to the algorithm and how to use (and mis-use) it from @_coenen and @adamrpearce. A must-read for anyone interested in dimension reduction. https://t.co/yzUoNQrnbR
— Leland McInnes (@leland_mcinnes) November 5, 2019
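The parameters the article examines most closely, n_neighbors and min_dist, map directly onto the umap-learn API. A minimal usage sketch on the scikit-learn digits data (the parameter values are just common defaults):

```python
# Requires: pip install umap-learn scikit-learn
import umap
from sklearn.datasets import load_digits

digits = load_digits()

# n_neighbors trades local vs. global structure;
# min_dist controls how tightly embedded points pack together.
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=42)
embedding = reducer.fit_transform(digits.data)
print(embedding.shape)  # (n_samples, 2)
```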
Computing Receptive Fields of Convolutional Neural Networks -- A new Distill article by André Araujo, Wade Norris, and Jack Sim. https://t.co/XqvDBlNBoM
— distillpub (@distillpub) November 4, 2019
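The article derives closed-form expressions; the underlying recurrence is simple enough to sketch directly. Each layer with kernel size k grows the receptive field by (k − 1) times the cumulative stride so far. A minimal sketch (the layer stack below is hypothetical):

```python
def receptive_field(layers):
    """Receptive field of a stack of conv/pool layers.
    Each layer is (kernel_size, stride). Recurrence:
        r_out = r_in + (k - 1) * j_in
        j_out = j_in * s
    where j is the cumulative stride ("jump") between
    adjacent feature-map positions in input coordinates."""
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# Hypothetical stack: two 3x3 convs, a stride-2 pool, one more 3x3 conv.
print(receptive_field([(3, 1), (3, 1), (2, 2), (3, 1)]))  # -> 10
```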
Also, a review of the eye-opening Taxonomy of Real Faults in Deep Learning Systems. Issues with training data quality and pre-processing loom large. https://t.co/GUuqUtaank pic.twitter.com/M9ya1Q3aid
— Brandon Rohrer (@_brohrer_) November 4, 2019
Must-read paper for folks working on experimentation. Great intuition, interesting results, and practical advice that you probably haven’t heard already. https://t.co/RPwyyDnu7p
— Sean J. Taylor (@seanjtaylor) November 2, 2019
This is a wonderful paper that should be read by every ML researcher concerned with explainability. I don't know how it escaped my attention. I would shorten it further, skip the "according to so-and-so..." passages, and present ML folks with a computer-minded taxonomy of explanation #Bookofwhy https://t.co/UGvL1YYV0W
— Judea Pearl (@yudapearl) October 30, 2019
Introduction to Adversarial Machine Learning
A tutorial by @amarunava that presents an overview of current approaches to adversarial attacks and defenses in the literature. https://t.co/8cq3O6jRrG pic.twitter.com/Ek5gfxkyh0
— hardmaru 😷 (@hardmaru) October 29, 2019
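Among the attacks such tutorials typically cover, the Fast Gradient Sign Method (Goodfellow et al.) is the simplest: perturb the input along the sign of the loss gradient. A minimal PyTorch sketch (the throwaway model and data are illustrative only):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """FGSM: one gradient step on the input, in the direction
    that increases the classification loss, bounded by eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in valid range

# Toy usage with a throwaway linear "classifier" on fake images.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by eps
```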
Fantastic piece by @math_rachel on the misuse of metrics in organizations, particularly ones that measure success by number of clicks and time spent on the platform. https://t.co/H4gDs2scD1
— Vicki Boykis (@vboykis) October 19, 2019
Ever wondered how to deal with confounds and covariates in experimental design? A new blog post explaining these concepts and collecting some advice: https://t.co/Dgh1rq8yqF Also featuring causal graphical models and Bob Dylan! pic.twitter.com/SwbFkCxV5d
— Michael C. Frank (@mcxfrank) October 9, 2019
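As a minimal illustration of covariate adjustment (not code from the post, and the data here is synthetic), comparing an unadjusted and an adjusted regression shows how controlling for a pre-treatment covariate changes the precision of a treatment-effect estimate:

```python
# Requires: pip install statsmodels pandas
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic experiment: binary treatment, a pre-treatment covariate
# (age), and an outcome that depends on both.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "age": rng.normal(40, 10, n),
})
df["outcome"] = 2.0 * df["treated"] + 0.1 * df["age"] + rng.normal(0, 1, n)

# Adjusting for the covariate tightens the estimate; in observational
# data it would also deconfound, if age drove treatment assignment.
unadjusted = smf.ols("outcome ~ treated", data=df).fit()
adjusted = smf.ols("outcome ~ treated + age", data=df).fit()
print(unadjusted.params["treated"], adjusted.params["treated"])
```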