Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond
90 languages, all handled by a single encoder.
New SOTA on XNLI.
From FAIR-Paris. https://t.co/OcJkoFzdyM
– Yann LeCun (@ylecun) December 27, 2018
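The recipe behind such massively multilingual encoders is easy to sketch: a single encoder maps sentences from every language into one shared vector space, so a classifier trained on English embeddings can be applied to other languages with no labelled data in those languages. The snippet below only illustrates that idea; `embed` is a placeholder stub so the script runs, not the actual LASER API.

```python
# Zero-shot cross-lingual transfer with a shared sentence encoder (sketch).
# `embed` is a stand-in stub; a real multilingual encoder maps translations
# in any of its languages to nearby points in the same vector space.
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(sentences):
    """Placeholder for a shared multilingual sentence encoder."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 1024))  # placeholder vectors

# 1) Train a task classifier on English sentence embeddings only.
en_sents = ["The movie was great.", "The movie was terrible."]
en_labels = [1, 0]
clf = LogisticRegression().fit(embed(en_sents), en_labels)

# 2) Apply it unchanged to another language: the shared embedding space is
#    what makes this zero-shot (no labelled French data is used).
fr_sents = ["Le film était excellent.", "Le film était horrible."]
print(clf.predict(embed(fr_sents)))
```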
Creating super slow motion videos by predicting missing frames using a neural network, instead of simple interpolation. With code.
Code: https://t.co/FGm24v8bhp
Project: https://t.co/LKsMqmykNA pic.twitter.com/HrL8zM4cXa
– Reza Zadeh (@Reza_Zadeh) December 27, 2018
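For context on what the learned model replaces: the naive way to generate an in-between frame is to blend the two neighbouring frames, which ghosts anything that moves. A toy sketch of that baseline (small arrays, not the paper's flow-based interpolator):

```python
# Naive baseline the tweet contrasts against: linear blending of two frames.
# A learned interpolator predicts the in-between frame instead (for example
# by estimating motion and warping the inputs), avoiding ghosting artifacts.
import numpy as np

def linear_interpolate(frame0: np.ndarray, frame1: np.ndarray, t: float) -> np.ndarray:
    """Blend two frames at time t in [0, 1]; moving objects appear doubled."""
    return (1.0 - t) * frame0 + t * frame1

frame0 = np.zeros((4, 4, 3))   # toy 4x4 RGB frames
frame1 = np.ones((4, 4, 3))
middle = linear_interpolate(frame0, frame1, 0.5)
print(middle[0, 0])            # -> [0.5 0.5 0.5]
```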
My @quora answer to "What were the most significant machine learning/AI advances in 2018?" https://t.co/3O9v2dyk4E
– Xavier (@xamat) December 23, 2018
Congrats to Eric Zhan for his #ICLR2019 paper on sequence generation w/ labeling functions. We learn interesting multi-modal trajectory distributions from real NBA data.
Demo: https://t.co/Js1lio4ekD
Paper: https://t.co/DUKrMtGsMt
w/ @StephanZheng @patricklucey @StatsBySTATS pic.twitter.com/JF0H3JiFMQ
– Yisong Yue (@yisongyue) December 21, 2018
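A labeling function in this setting is just a small program that assigns weak, high-level labels to raw trajectories, which the sequence generator can then condition on. A toy sketch of that idea follows; the quadrant rule and court dimensions are illustrative assumptions, not the authors' code.

```python
# Illustrative labeling function for a 2-D trajectory: programmatically assign
# a coarse "macro-goal" label (which court quadrant the agent ends up in).
# Such weak labels can condition a trajectory generator; this is only a sketch.
import numpy as np

def macro_goal(traj: np.ndarray, court_w: float = 94.0, court_h: float = 50.0) -> int:
    """Map a (T, 2) trajectory to one of four quadrants based on its endpoint."""
    x, y = traj[-1]
    return int(x > court_w / 2) * 2 + int(y > court_h / 2)

traj = np.array([[10.0, 5.0], [30.0, 20.0], [60.0, 40.0]])
print(macro_goal(traj))  # -> 3 (right half of the court, upper half)
```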
The author of this paper-classifier ran their model on their own paper describing the classifier, and it suggested a strong reject.
We can probably train a better model using data from @karpathy's arxiv-sanity. https://t.co/LgbMxWAXZj pic.twitter.com/4WJG7tP4jt
– hardmaru (@hardmaru) December 21, 2018
We're releasing tutorials on our work using CCA to compare and probe representations in deep neural networks: https://t.co/DgXmb6XIIj There are Jupyter notebooks overviewing the technique, descriptions of results, and discussions of open problems. We hope this is a useful resource!
– Maithra Raghu (@maithra_raghu) December 20, 2018
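The underlying measurement is straightforward: treat two layers' activations as paired multivariate observations and ask how correlated they are after the best linear projections. A minimal sketch using scikit-learn's CCA on synthetic activations (the linked tutorials cover the SVCCA variant and real networks):

```python
# Sketch: use CCA to measure how similar two layers' representations are,
# up to linear transforms. Synthetic activations stand in for real ones.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_examples = 500
acts_a = rng.normal(size=(n_examples, 64))                     # layer A activations
acts_b = acts_a @ rng.normal(size=(64, 32)) + 0.1 * rng.normal(size=(n_examples, 32))

n_components = 10
cca = CCA(n_components=n_components).fit(acts_a, acts_b)
u, v = cca.transform(acts_a, acts_b)
corrs = [np.corrcoef(u[:, i], v[:, i])[0, 1] for i in range(n_components)]
print(np.mean(corrs))  # mean canonical correlation ~ representation similarity
```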
10 Exciting Ideas of 2018 in NLP: a collection of 10 ideas that I found exciting and impactful this year, and that we'll likely see more of in the future. https://t.co/iv29bxYbq4 pic.twitter.com/wqCE2Duf6M
– Sebastian Ruder (@seb_ruder) December 19, 2018
TextBugger: Generating Adversarial Text Against Real-world Applications #NDSS2019
1. Outperforms SoTA in attack success rate
2. Preserves the utility of benign text; ~95% of the text remains human-comprehensible
3. Computational complexity is sub-linear in the text length. https://t.co/ihiR3g32wk pic.twitter.com/xPnzXBKRp7
– ML Review (@ml_review) December 19, 2018
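To make the threat model concrete, here is a toy generator of the kind of character-level "bugs" such attacks rely on: adjacent-character swaps and visually similar substitutions that a human reads through but a text classifier may not. This is a generic sketch, not TextBugger's actual search procedure.

```python
# Toy character-level perturbations: swap two inner characters or substitute a
# look-alike character, keeping the text readable while changing its tokens.
import random

LOOKALIKES = {"a": "а", "o": "0", "l": "1", "e": "е"}  # Cyrillic/digit stand-ins

def bug_word(word: str, rng: random.Random) -> str:
    if len(word) > 3 and rng.random() < 0.5:
        i = rng.randrange(1, len(word) - 2)               # swap two inner chars
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    for ch, sub in LOOKALIKES.items():                    # else: substitute one char
        if ch in word:
            return word.replace(ch, sub, 1)
    return word

rng = random.Random(0)
text = "this movie was absolutely wonderful"
print(" ".join(bug_word(w, rng) for w in text.split()))
```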
Really cool design of a spiking NN that appears to actually outperform LSTM and GRU!
Deep Networks Incorporating Spiking Neural Dynamics: https://t.co/VeJBnkgPcB pic.twitter.com/1UtsNMCbYc
– Thomas Lahore (@evolvingstuff) December 19, 2018
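For readers unfamiliar with spiking units: the basic building block is a leaky integrate-and-fire neuron whose membrane potential leaks over time, accumulates input, and emits a binary spike (then resets) when it crosses a threshold. A minimal sketch of those dynamics, purely illustrative and not the architecture from the linked paper:

```python
# Discrete-time leaky integrate-and-fire neuron: leak, integrate, spike, reset.
import numpy as np

def lif_neuron(inputs: np.ndarray, leak: float = 0.9, threshold: float = 1.0):
    """Run one leaky integrate-and-fire neuron over a 1-D input current."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x               # leak previous potential, add input
        spike = v >= threshold
        spikes.append(int(spike))
        if spike:
            v = 0.0                    # reset after firing
    return spikes

rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0.0, 0.5, size=20)))  # binary spike train
```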
Our favorite research, talks, art and other stuff from 2018 https://t.co/HFJX0DAqQl
– Fast Forward Labs (@FastForwardLabs) December 18, 2018
Very cool paper! https://t.co/wqs66aeqmy
– Miles Brundage (@Miles_Brundage) December 14, 2018
this repo lets you implant random things into paintings https://t.co/yT1LX4jYNL pic.twitter.com/VAiVu2Tja7
– Gene Kogan (@genekogan) December 14, 2018