Benchmarking Metric Learning Algorithms the Right Way https://t.co/HklbRW2Gsu
— /MachineLearning (@slashML) November 26, 2019
We also introduce a technique [https://t.co/SFz2vTThTv] for training neural networks that are sparse throughout training from a random initialization - no luck required, all initialization “tickets” are winners. pic.twitter.com/fA7VmXrj20
— DeepMind (@DeepMindAI) November 26, 2019
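For flavor, here is a minimal numpy sketch of one way to train sparse from a random initialization in the spirit of this announcement: keep a fixed sparsity budget, periodically drop the lowest-magnitude weights, and grow connections where gradients are largest. The function name and the drop/grow schedule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def update_mask(weights, grads, mask, drop_frac=0.3):
    """One drop/grow step under a fixed sparsity budget.
    Illustrative only, not the paper's exact schedule."""
    n_drop = int(drop_frac * mask.sum())

    # Grow candidates: positions inactive before this step,
    # ranked by dense gradient magnitude.
    inactive_grad = np.where(mask, -np.inf, np.abs(grads))
    grow_idx = np.argsort(inactive_grad, axis=None)[-n_drop:]

    # Drop: the n_drop smallest-magnitude active weights.
    active_mag = np.where(mask, np.abs(weights), np.inf)
    drop_idx = np.argsort(active_mag, axis=None)[:n_drop]

    mask.flat[drop_idx] = False
    mask.flat[grow_idx] = True
    weights.flat[grow_idx] = 0.0  # new connections start at zero
    return weights * mask, mask

# Toy usage: a random layer kept at 90% sparsity from initialization.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
M = rng.random(W.shape) < 0.1           # 10% of weights active
W *= M
G = rng.normal(size=W.shape)            # stand-in for a real gradient
W, M = update_mask(W, G, M)
print(f"sparsity: {1 - M.mean():.2f}")  # budget is preserved
```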
“Fast Sparse ConvNets”, a collaboration w/ @GoogleAI [https://t.co/TPD6mI9MA6], implements fast Sparse Matrix-Matrix Multiplication to replace dense 1x1 convolutions in MobileNet architectures. The sparse networks are 66% the size and 1.5-2x faster than their dense equivalents. pic.twitter.com/poDKMzfA4u
— DeepMind (@DeepMindAI) November 26, 2019
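A 1x1 convolution is exactly a matrix multiply over the channel dimension, which is why sparsifying it reduces to a sparse-dense matmul. A toy illustration with scipy.sparse (the paper's kernels are custom and far faster than this generic routine):

```python
import numpy as np
from scipy.sparse import random as sparse_random

# A 1x1 conv maps (C_in, H, W) -> (C_out, H, W) by applying a
# (C_out x C_in) matrix at every pixel, so a sparse weight matrix
# turns it into a sparse-dense matmul.
C_in, C_out, H, W = 64, 128, 56, 56
x = np.random.randn(C_in, H * W).astype(np.float32)  # pixels as columns

# 90%-sparse 1x1 conv weights in CSR format.
W_sparse = sparse_random(C_out, C_in, density=0.10, format="csr",
                         dtype=np.float32)

y = W_sparse @ x                 # the "sparse 1x1 conv"
y = y.reshape(C_out, H, W)
print(y.shape)                   # (128, 56, 56)
```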
Can #NLProc reduce bias in our news and politics?
Automatically Neutralizing Subjective Bias in Text
Pryzant, @jurafsky, … #AAAI2020
Parallel corpus of 180k biased and neutralized sentences
Models for editing subjective bias out of text #Wikipedia #npov https://t.co/hvx5ZQUTOz pic.twitter.com/6U6CdlwqVz
— Stanford NLP Group (@stanfordnlp) November 25, 2019
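The editing framing is easy to see in a parallel corpus like this one: each pair typically differs by a small subjective span. A quick token-level diff makes that visible; the sentence pairs below are invented examples, not items from the released data.

```python
import difflib

# Hypothetical biased/neutralized pairs in the style of the corpus;
# not actual examples from the released dataset.
pairs = [
    ("the senator gave a disastrous speech on tuesday",
     "the senator gave a speech on tuesday"),
    ("the brilliant new policy cut costs",
     "the new policy cut costs"),
]

for biased, neutral in pairs:
    sm = difflib.SequenceMatcher(a=biased.split(), b=neutral.split())
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":  # the edited span carries the subjective bias
            print(f"{op}: {biased.split()[i1:i2]} -> {neutral.split()[j1:j2]}")
```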
A list of papers on BERT compression (hat tip to @tianchezhao) https://t.co/WgQjjQPubo
— Leonid Boytsov (@srchvrs) November 24, 2019
DeepFovea: when you know where someone looks, you can transmit high-resolution video information at that location and lower-res in the periphery, using a recurrent U-Net GAN to fill in the blanks.
From Facebook... https://t.co/6G1HZqzb7V
— Yann LeCun (@ylecun) November 23, 2019
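The transmission-side idea is simple to sketch: sample pixels densely at the gaze point and increasingly sparsely with eccentricity, then let the generator reconstruct the rest. A toy sampling mask in numpy; the exponential falloff is an arbitrary illustrative choice, not the paper's density function.

```python
import numpy as np

def foveated_mask(h, w, gaze, rng, peak=1.0, falloff=4.0):
    """Boolean keep/drop mask whose sampling density decays with
    distance from the gaze point. Exponential falloff is an
    arbitrary illustrative choice."""
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = gaze
    r = np.hypot(ys - gy, xs - gx) / np.hypot(h, w)  # normalized eccentricity
    density = peak * np.exp(-falloff * r)
    return rng.random((h, w)) < density

rng = np.random.default_rng(0)
mask = foveated_mask(270, 480, gaze=(135, 240), rng=rng)
print(f"kept {mask.mean():.1%} of pixels")  # dense at gaze, sparse in periphery
```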
We’re sharing a new benchmark called MLQA to help extend performance improvements in extractive question-answering (QA) to more languages. It contains thousands of QA instances in Arabic, German, Hindi, Spanish, Vietnamese, and Simplified Chinese. https://t.co/qGSfOc30Co pic.twitter.com/XIITMxpWt8
— Facebook AI (@facebookai) November 23, 2019
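Extractive QA benchmarks like this one are typically scored with exact match and token-level F1 between the predicted span and the gold answer. A minimal version of the F1 metric, ignoring the per-language answer normalization an official script would apply:

```python
from collections import Counter

def qa_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between predicted and gold answer spans
    (whitespace tokenization; official scripts also normalize
    punctuation and articles per language)."""
    pred_toks, gold_toks = prediction.split(), gold.split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(qa_f1("in the Amazon basin", "the Amazon basin"))  # ~0.857
```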
How can we learn a sequence of tasks without forgetting, without class labels and with unknown or ambiguous task boundaries?
Continual Unsupervised Representation Learning:
Paper: https://t.co/tOZXROULoi
Code: https://t.co/SpKEJLZHwK pic.twitter.com/3WSzWmILlB
— DeepMind (@DeepMindAI) November 22, 2019
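One way to cope with unknown task boundaries, loosely in the spirit of a mixture-based latent space, is to expand the model whenever no existing component explains the incoming data well. A toy 1-D version of that expansion rule; the threshold and update are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

class ExpandingMixture:
    """Toy 1-D mixture that spawns a new component when a point is
    poorly explained; thresholds are illustrative assumptions."""
    def __init__(self, threshold=2.5, lr=0.05):
        self.means, self.threshold, self.lr = [], threshold, lr

    def observe(self, x):
        if not self.means:
            self.means.append(x)
            return
        d = np.abs(np.array(self.means) - x)
        k = int(d.argmin())
        if d[k] > self.threshold:        # no component fits: expand
            self.means.append(x)
        else:                            # otherwise nudge the winner
            self.means[k] += self.lr * (x - self.means[k])

rng = np.random.default_rng(0)
m = ExpandingMixture()
# Two unlabeled "tasks" arrive in sequence with no boundary signal.
for x in np.concatenate([rng.normal(0, 0.5, 200), rng.normal(8, 0.5, 200)]):
    m.observe(x)
print([round(mu, 2) for mu in m.means])  # ~[0, 8]: one component per task
```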
Deep learning achieved great success in modeling sensory processing. However, such models raise questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (biological circuit) with another (a deep net), without understanding either? https://t.co/V9f6CTkwYN pic.twitter.com/9DK4BpImr8
— hardmaru (@hardmaru) November 22, 2019
EfficientDet: Scalable and Efficient Object Detection. A new family of object detectors that achieves an order-of-magnitude better efficiency than prior art. +Large scale experiments from Google Brain https://t.co/cbpfjsTusX #computervision #Robotics pic.twitter.com/9mflLs3q0b
— Tomasz Malisiewicz (@quantombone) November 21, 2019
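Much of the efficiency comes from a single compound coefficient that jointly scales input resolution, BiFPN width and depth, and head depth. A sketch of that scaling rule; the constants are from my reading of the paper (the released models additionally round widths to hardware-friendly values), so treat the exact numbers as assumptions.

```python
def efficientdet_config(phi: int) -> dict:
    """Compound scaling for an EfficientDet-D{phi}-style detector;
    verify constants against the paper before relying on them."""
    return {
        "input_resolution": 512 + 128 * phi,
        "bifpn_width": int(round(64 * (1.35 ** phi))),
        "bifpn_depth": 3 + phi,
        "head_depth": 3 + phi // 3,
    }

for phi in range(4):
    print(f"D{phi}: {efficientdet_config(phi)}")
```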
Not a comment about this paper but, in computer vision, there is a genre of papers — “doing Y achieves X% top-1 on ImageNet”. In practice, many of those Ys don’t apply to the computer vision problem _you_ are trying to solve. https://t.co/VaRUpbvEXq
— Delip Rao (@deliprao) November 20, 2019
Self-training with Noisy Student: A semi-supervised approach by Google/CMU that outperforms Facebook's "weakly labeled 3.5B Instagram" method on ImageNet. https://t.co/vURETxQJJo pic.twitter.com/8cvlbn5yUP
— Eric Jang 🇺🇸🇹🇼 (@ericjang11) November 19, 2019
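The recipe is a short loop: train a teacher on labeled data, pseudo-label the unlabeled pool, train a noised student on everything, then promote the student to teacher and repeat. A toy runnable version with scikit-learn, using Gaussian input noise as a stand-in for the dropout/data-augmentation noise the paper uses:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:200], y[:200]   # small labeled set
X_unlab = X[200:]                 # large unlabeled pool

teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

for _ in range(3):  # each round, the student becomes the next teacher
    pseudo = teacher.predict(X_unlab)
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    # Noise the student's inputs (a stand-in for dropout/RandAugment).
    X_noisy = X_all + rng.normal(scale=0.3, size=X_all.shape)
    teacher = LogisticRegression(max_iter=1000).fit(X_noisy, y_all)

print(f"final accuracy: {teacher.score(X, y):.3f}")
```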