Someone wrote a really nice summary of this paper in a comment: https://t.co/CLVCdL5i8h
— hardmaru (@hardmaru) July 12, 2019
When the outcome is binary: logistic or linear?
— Robin Gomila (@RobinGomila) July 11, 2019
Preprint is ready, feedback very welcome!! https://t.co/2BmaytDrY5 pic.twitter.com/ETEzSQN7Qy
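For readers who want to try the comparison themselves, here is a minimal sketch (not from the preprint) that fits both a logistic regression and a linear probability model (OLS on a 0/1 outcome) to the same simulated data, assuming statsmodels is available:

```python
# Minimal sketch: logistic regression vs. linear probability model
# on simulated binary-outcome data (not code from the preprint).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))   # true logistic relationship
y = rng.binomial(1, p)                    # binary outcome

X = sm.add_constant(x)
logit_fit = sm.Logit(y, X).fit(disp=0)    # logistic regression
lpm_fit = sm.OLS(y, X).fit()              # linear probability model

# Compare the logit's average marginal effect with the LPM slope,
# which is itself a (constant) marginal effect.
print(logit_fit.get_margeff().summary())
print(lpm_fit.params)
```

The usual argument for the linear probability model is exactly what the comparison above shows: its coefficients are directly interpretable as marginal effects on the outcome probability.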
Recent work on "Unsupervised Data Augmentation" (UDA) reveals that better data augmentation leads to better semi-supervised learning, with state-of-the-art results on various language and vision benchmarks, using one or two orders of magnitude less data. https://t.co/CQmWbFelh2
— Google AI (@GoogleAI) July 11, 2019
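The core of UDA is a consistency loss between the model's prediction on an unlabeled example and its prediction on an augmented copy of that example. A rough PyTorch sketch of the objective (my simplification, not Google's implementation; `uda_loss` and its arguments are illustrative names):

```python
# Sketch of the consistency-training objective behind UDA: supervised
# cross-entropy on labeled data plus a KL term that pushes the prediction
# on an augmented unlabeled example toward the prediction on the original
# (treated as a fixed target via no_grad).
import torch
import torch.nn.functional as F

def uda_loss(model, x_lab, y_lab, x_unlab, x_unlab_aug, lam=1.0):
    sup_loss = F.cross_entropy(model(x_lab), y_lab)

    with torch.no_grad():                    # target distribution, no gradient
        p_orig = F.softmax(model(x_unlab), dim=-1)
    log_p_aug = F.log_softmax(model(x_unlab_aug), dim=-1)
    consistency = F.kl_div(log_p_aug, p_orig, reduction="batchmean")

    return sup_loss + lam * consistency
```

The claim in the tweet is that the quality of the augmentation used for `x_unlab_aug` (e.g. back-translation for text, RandAugment for images) is what drives the semi-supervised gains.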
Our newest policy research analyzes the game theory of industry cooperation on AI safety. We found 4 strategies the AI community can use today to enable long-term cooperation: https://t.co/AxMBQwORec pic.twitter.com/VO8NDVeJKJ
— OpenAI (@OpenAI) July 10, 2019
🤣 That x-axis label, tho…
— Mara Averick (@dataandme) July 10, 2019
📊 "Ease of collaboration (including future self)"https://t.co/D8atQLi4jN
📑 Full paper: https://t.co/LDBsmV6Wyf @juliesquid et al via @OHIScience pic.twitter.com/5KASkuifsh
A better understanding of how deep neural networks (DNNs) generalize to unseen data can lead to improved model design and streamlined development. A new technique uses margin distributions to better predict a DNN's “generalization gap”. Learn more here! https://t.co/Y98cQylOIo
— Google AI (@GoogleAI) July 9, 2019
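The signal the paper uses is the distribution of classification margins. A toy sketch of the output-layer version (the paper additionally normalizes margins and measures them at hidden layers; `margin_distribution` is an illustrative name):

```python
# Sketch of an output-layer margin distribution: margin = logit of the
# true class minus the largest other logit; summary statistics of this
# distribution feed a regression that predicts the generalization gap.
import numpy as np

def margin_distribution(logits, labels):
    """logits: (n, k) float array; labels: (n,) int array of true classes."""
    idx = np.arange(len(labels))
    true_logit = logits[idx, labels]
    masked = logits.copy()
    masked[idx, labels] = -np.inf
    runner_up = masked.max(axis=1)
    margins = true_logit - runner_up      # > 0 means correctly classified
    return np.percentile(margins, [25, 50, 75])
```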
It’s interesting to see the pendulum swing back to representation learning. During my PhD, most of my collaborators and I were primarily interested in representation learning as a byproduct of sample generation, not sample generation itself. https://t.co/i0aRhgN3LL
— Ian Goodfellow (@goodfellow_ian) July 8, 2019
In our new paper https://t.co/Rhm94rOuX5 we show that GANs can be harnessed for unsupervised representation learning, with state-of-the-art results on ImageNet. Reconstructions, as shown below, tend to emphasise high-level semantics over pixel-level details. pic.twitter.com/9gSKL6aoQu
— DeepMind (@DeepMindAI) July 8, 2019
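The model is in the BiGAN family: an encoder and a generator are trained adversarially against a discriminator that scores joint (image, code) pairs. A minimal BiGAN-style sketch of the losses (a simplification, not DeepMind's BigBiGAN; the MLP sizes are placeholders):

```python
# BiGAN-style sketch: encoder E(x) -> z and generator G(z) -> x are trained
# against a discriminator D on joint (x, z) pairs, so E learns
# representations without labels.
import torch
import torch.nn as nn

x_dim, z_dim, h = 784, 64, 256
E = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, z_dim))
G = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))
D = nn.Sequential(nn.Linear(x_dim + z_dim, h), nn.ReLU(), nn.Linear(h, 1))

bce = nn.BCEWithLogitsLoss()

def bigan_losses(x):
    z = torch.randn(x.size(0), z_dim)
    real_pair = torch.cat([x, E(x)], dim=1)   # (data, inferred code)
    fake_pair = torch.cat([G(z), z], dim=1)   # (generated data, prior code)
    d_real, d_fake = D(real_pair), D(fake_pair)
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    # E and G jointly try to flip the discriminator's labels.
    eg_loss = (bce(d_real, torch.zeros_like(d_real))
               + bce(d_fake, torch.ones_like(d_fake)))
    return d_loss, eg_loss
```

In actual training, `d_loss` and `eg_loss` are minimized with separate optimizers for D and for E/G respectively.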
It's a shame that all these new papers fail to cite the very first gradient-based attack against ML (covering both SVMs and NNs), which we published at ECML 2013. https://t.co/0uJVFYjeRM
— Battista Biggio (@biggiobattista) July 7, 2019
If you want to dig deeper into advML, you can find the whole story here: https://t.co/sOrRBeWPYG https://t.co/gJicWDYsy3
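For context, that 2013 attack descends the gradient of the classifier's discriminant function to move a sample across the decision boundary. A stripped-down sketch of the idea (my simplification; the original also constrains the distance to the starting point and adds a density term to keep samples realistic):

```python
# Sketch of a gradient-based evasion attack in the spirit of Biggio et al.
# 2013: repeatedly step against the gradient of the classifier's score for
# the true class until the sample is (hopefully) misclassified.
# Assumes a single sample (batch size 1) and a model returning class logits.
import torch

def evade(model, x, y_true, step=0.01, n_steps=50):
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        score = model(x_adv)[0, y_true]       # discriminant for true class
        grad, = torch.autograd.grad(score, x_adv)
        x_adv = (x_adv - step * grad).detach().requires_grad_(True)
    return x_adv.detach()
```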
Conditional Density Estimation with Neural Networks: Best Practices and Benchmarks
— hardmaru (@hardmaru) July 6, 2019
This looks like a pretty useful toolkit for applying neural networks to the density estimation problems common in finance and econometrics. https://t.co/l8hGXY6yuB https://t.co/9dkilZza3Z https://t.co/2oKCxKkMd2
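A standard example of the approach is the mixture density network, where the network outputs the parameters of a Gaussian mixture over the target given the input. A hedged sketch (illustrative names, not the toolkit's API):

```python
# Mixture density network sketch: the net maps x to the weights, means,
# and scales of a Gaussian mixture over y; training minimizes the
# negative log-likelihood of the mixture.
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, x_dim=1, n_components=5, h=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, h), nn.Tanh())
        self.logits = nn.Linear(h, n_components)     # mixture weights
        self.mu = nn.Linear(h, n_components)         # component means
        self.log_sigma = nn.Linear(h, n_components)  # component scales (log)

    def neg_log_likelihood(self, x, y):
        feats = self.body(x)
        log_w = torch.log_softmax(self.logits(feats), dim=-1)
        comp = torch.distributions.Normal(self.mu(feats),
                                          self.log_sigma(feats).exp())
        log_p = comp.log_prob(y.unsqueeze(-1))       # per-component density
        return -torch.logsumexp(log_w + log_p, dim=-1).mean()
```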
Neural Machine Reading Comprehension: Methods and Trends https://t.co/C7gRor6TR0
— Thomas Lahore (@evolvingstuff) July 3, 2019
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness https://t.co/ioRF2NpDvC
— /MachineLearning (@slashML) July 2, 2019
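The paper's shape-bias measurement uses cue-conflict images whose shape and texture point to different classes. Given model predictions on such images, the metric reduces to a few lines (a paraphrase of the evaluation protocol, not the authors' code):

```python
# Shape-bias metric from cue-conflict evaluation: among predictions that
# match either cue, the fraction that follow the shape cue rather than
# the texture cue.
def shape_bias(preds, shape_labels, texture_labels):
    shape_hits = sum(p == s for p, s in zip(preds, shape_labels))
    texture_hits = sum(p == t for p, t in zip(preds, texture_labels))
    return shape_hits / (shape_hits + texture_hits)
```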