I used to recommend nmslib to everyone for this, but the performance of ScaNN looks amazing. Pre-built wheels available here: https://t.co/Gh48b7mLxT https://t.co/wCTEADKdIo
— Delip Rao (@deliprao) July 29, 2020
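For context, ScaNN exposes a builder-style Python API (documented in its README) for constructing an approximate nearest-neighbor searcher. The sketch below follows that pattern on a toy set of normalized embeddings; the tree, scoring, and reordering parameters are illustrative assumptions, not tuned recommendations.

```python
import numpy as np
import scann  # pip install scann (pre-built wheels)

# Toy database of unit-normalized embeddings; in practice these come from a model.
db = np.random.rand(10000, 128).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)

# Build a searcher: partition the space into leaves, score candidates with
# asymmetric hashing, then exactly re-rank the top hits. Values are illustrative.
searcher = (
    scann.scann_ops_pybind.builder(db, 10, "dot_product")
    .tree(num_leaves=100, num_leaves_to_search=10, training_sample_size=10000)
    .score_ah(2, anisotropic_quantization_threshold=0.2)
    .reorder(50)
    .build()
)

query = db[0]
neighbors, distances = searcher.search(query, final_num_neighbors=5)
print(neighbors, distances)
```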
Big Bird: Transformers for Longer Sequences https://t.co/vUZJj7SLTK
— /MachineLearning (@slashML) July 29, 2020
Big Bird: Transformers for Longer Sequences 🐦
— AK (@ak92501) July 29, 2020
pdf: https://t.co/1ZH5oC2T2e
abs: https://t.co/DLt59rpbps pic.twitter.com/XHuvaqPahM
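BigBird's attention combines a sliding local window, a handful of global tokens, and a few random links per token, which keeps the cost roughly linear in sequence length. The NumPy sketch below only builds such a boolean mask to illustrate the pattern; it is not the paper's block-sparse implementation, and all parameter values are assumptions.

```python
import numpy as np

def bigbird_style_mask(seq_len, window=3, num_global=2, num_random=2, seed=0):
    """Boolean attention mask combining sliding-window, global, and random links.

    An illustration of the BigBird attention pattern, not the paper's
    block-sparse implementation (which works on blocks of tokens for speed).
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)

    # Sliding window: each token attends to its neighbors.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True

    # Global tokens attend everywhere and are attended to by everyone.
    mask[:num_global, :] = True
    mask[:, :num_global] = True

    # A few random connections per token.
    for i in range(seq_len):
        mask[i, rng.choice(seq_len, size=num_random, replace=False)] = True

    return mask

print(bigbird_style_mask(8).astype(int))
```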
A Keras implementation of “A Simple Framework for Contrastive Learning of Visual Representations” (SimCLR) https://t.co/RUsFe4CQNL https://t.co/d87F44gLpn pic.twitter.com/AjfBl8cwxF
— hardmaru (@hardmaru) July 28, 2020
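At the core of SimCLR is the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss over pairs of augmented views. Below is a minimal TensorFlow/Keras sketch of that loss; the function name, shapes, and temperature are illustrative assumptions, not the linked repository's API.

```python
import tensorflow as tf

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent loss for a batch of paired augmented views.

    z_a, z_b: (N, d) projections of two augmentations of the same N images.
    """
    z_a = tf.math.l2_normalize(z_a, axis=1)
    z_b = tf.math.l2_normalize(z_b, axis=1)
    z = tf.concat([z_a, z_b], axis=0)                      # (2N, d)
    n = tf.shape(z_a)[0]

    sim = tf.matmul(z, z, transpose_b=True) / temperature  # (2N, 2N) cosine sims
    # Mask self-similarity so a view cannot be its own positive.
    sim -= tf.eye(2 * n) * 1e9

    # For row i, the positive is the other view of the same image.
    labels = tf.concat([tf.range(n) + n, tf.range(n)], axis=0)
    loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, sim, from_logits=True
    )
    return tf.reduce_mean(loss)

# Tiny smoke test with random projections.
print(float(nt_xent_loss(tf.random.normal([4, 16]), tf.random.normal([4, 16]))))
```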
💡💡What is the best acc an MLP can get on CIFAR10❓
— Behnam Neyshabur (@bneyshabur) July 28, 2020
65%❓ No, 85%‼️
Trying to understand convolutions, we look at MDL and come up with a variant of LASSO that, when applied to MLPs, learns local connections and achieves amazing accuracy!
Paper: https://t.co/PUb2Q4tIBT
1/n pic.twitter.com/ijrX9CFJ41
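The thread's regularizer is a specific LASSO variant described in the paper; as a rough illustration of the underlying idea, a plain Keras MLP with a standard L1 penalty on its first layer shows how a sparsity-inducing penalty can prune most input connections. Layer sizes, the penalty weight, and the training setup below are arbitrary assumptions, not the paper's recipe.

```python
import tensorflow as tf

# A plain MLP for CIFAR-10 with an L1 penalty on the first layer's weights.
# This generic regularizer only illustrates how sparsity lets local connection
# patterns emerge; the paper's actual method is a modified LASSO variant.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(
        1024, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l1(1e-5),
    ),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
model.fit(x_train / 255.0, y_train, epochs=1, batch_size=256)
```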
The cleverness tax is higher for scholars whose work doesn’t fit their discipline’s stereotyped notions of what clever work is supposed to look like. You’re often forced to pick between having a real impact on the world and just staying in the game.
— Arvind Narayanan (@random_walker) July 27, 2020
DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation
— hardmaru (@hardmaru) July 23, 2020
Exciting work from @alxandrecarlier et al. Transformer-based hierarchical generative models learn latent representations of vector graphics, with nice applications in SVG animation. https://t.co/2FBICYu6NM https://t.co/gLNMaTLAYP pic.twitter.com/ooD0rYvVl3
Over time, I have come around and see the wisdom of his view. PCs are good at rejecting the bottom 30-50% of papers that everyone agrees are not ready. Why get so hung up on the remaining 50%? We should just accept them all, and free up a ton of time for reviewers.
— Vijay Chidambaram (@vj_chidambaram) July 19, 2020
I'm really excited about this project. I think adapters are a useful framework to efficiently leverage task, domain, and language-specific information—and AdapterHub makes it easy to download, train, and share them. https://t.co/06yaXd7g8P
— Sebastian Ruder (@seb_ruder) July 16, 2020
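Adapters themselves are small bottleneck layers inserted into a frozen pretrained model, so only a few parameters are trained per task. The sketch below shows that generic block (in the spirit of Houlsby et al.), not AdapterHub's actual API; AdapterHub packages trained adapters of this kind for download, training, and sharing on top of HuggingFace transformers.

```python
import tensorflow as tf

class Adapter(tf.keras.layers.Layer):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, hidden_size, bottleneck_size=64):
        super().__init__()
        self.down = tf.keras.layers.Dense(bottleneck_size, activation="relu")
        self.up = tf.keras.layers.Dense(hidden_size)

    def call(self, hidden_states):
        # Only the adapter's few parameters are trained; the backbone stays frozen.
        return hidden_states + self.up(self.down(hidden_states))

# Example: adapt a frozen 768-dim transformer hidden state.
adapter = Adapter(hidden_size=768)
out = adapter(tf.random.normal([2, 16, 768]))
print(out.shape)
```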
#ICML2020: We've developed a technique to mark the images in a data set so researchers can determine whether a particular #ML model has been trained using those images. Learn more about “radioactive data” here: https://t.co/u8eQH8R2jD Catch our talk at 3PM EST today.
— Facebook AI (@facebookai) July 15, 2020
Closed-Form Factorization of Latent Semantics in GANs
— AK (@ak92501) July 15, 2020
pdf: https://t.co/pYda24esEJ
abs: https://t.co/TTx4gwm5Xl pic.twitter.com/7YuVWnuquW
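The paper's closed-form result is that interpretable latent directions fall out of an eigen-decomposition of the generator's first projection weights, with no training or sampling. Below is a rough NumPy sketch of that idea, not the authors' released code; the weight shape and normalization details are assumptions.

```python
import numpy as np

def closed_form_directions(weight, k=5):
    """Top-k semantic directions from a generator's first projection weight.

    `weight` is assumed to be the (out_dim, latent_dim) matrix that maps the
    latent code into the first feature space; the directions that change its
    output most are the top eigenvectors of A^T A.
    """
    a = weight / np.linalg.norm(weight, axis=0, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(a.T @ a)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]
    return eigvecs[:, order].T                   # (k, latent_dim) directions

# Toy example with a random "generator" weight matrix.
directions = closed_form_directions(np.random.randn(512, 128), k=3)
z_edited = np.random.randn(128) + 3.0 * directions[0]  # move along one direction
print(directions.shape, z_edited.shape)
```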
Simply augmenting the data often yields bigger perf gains than tweaking the model.
— Eric Jang 🇺🇸🇹🇼 (@ericjang11) July 14, 2020
We formalize "meta-augmentation" and show that you can apply it to pretty much any meta-learning problem and any meta-learner. https://t.co/uQLvzlS6tX
with Janarthanan Rajendran, @AlexIrpan pic.twitter.com/2qIVNlhVAw
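One concrete form of meta-augmentation is relabeling each few-shot task with a random class permutation, applied consistently to the support and query sets, so the learner cannot memorize task answers and must actually use the support data. A minimal sketch follows; the names and exact form are illustrative, not the paper's formalism.

```python
import numpy as np

def meta_augment_task(support_x, support_y, query_x, query_y, num_classes, rng):
    """Relabel a few-shot task with one random permutation of its class labels.

    The same permutation is applied to support and query labels, injecting
    randomness that can only be resolved from the support set.
    """
    perm = rng.permutation(num_classes)
    return support_x, perm[support_y], query_x, perm[query_y]

rng = np.random.default_rng(0)
sx, sy = np.random.randn(5, 8), np.arange(5)              # 5-way, 1-shot support
qx, qy = np.random.randn(10, 8), np.repeat(np.arange(5), 2)  # 2 queries per class
_, sy_aug, _, qy_aug = meta_augment_task(sx, sy, qx, qy, num_classes=5, rng=rng)
print(sy_aug, qy_aug)
```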