A nice summary of #NeurIPS2019: https://t.co/yiUNh4xroq. Interestingly, it's almost completely orthogonal to my experience...
— Adam Kosiorek (@arkosiorek) December 16, 2019
PyTorch implementation of Neural Painters
Cool talk by @reiinakano at ML for creativity workshop. He released a new @PyTorch library and notebooks for others to easily play around with neural painter algorithms! https://t.co/XLgxSbndDs https://t.co/hszDYqOJxt pic.twitter.com/Jcn5dG8PeF
— hardmaru (@hardmaru) December 14, 2019
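For context, a neural painter is a differentiable stand-in for a non-differentiable stroke renderer: a network maps brushstroke parameters to a rendered stroke image, so paintings can be optimized end to end by gradient descent. The sketch below is a minimal, hypothetical module in that spirit; it is not the released library's API, and the layer sizes and parameter names are made up.

```python
import torch
import torch.nn as nn

class NeuralPainter(nn.Module):
    """Hypothetical neural painter: brushstroke parameters -> 64x64 stroke image."""
    def __init__(self, action_dim=12, canvas_size=64):
        super().__init__()
        self.canvas_size = canvas_size
        self.net = nn.Sequential(
            nn.Linear(action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * canvas_size * canvas_size), nn.Sigmoid(),
        )

    def forward(self, actions):
        # actions: (batch, action_dim) stroke parameters in [0, 1]
        strokes = self.net(actions)
        return strokes.view(-1, 3, self.canvas_size, self.canvas_size)

# Because the painter is differentiable, stroke parameters can be optimized
# directly to reproduce a target image (e.g. with an L2 loss and Adam).
```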
Mixtape: breaking the softmax bottleneck that limits expressiveness of neural language models. A network with a Mixtape output layer is only 35% slower than a softmax-based network, while outperforming softmax in perplexity & translation quality #NeurIPS2019 https://t.co/ZxpIqtJomX
— Russ Salakhutdinov (@rsalakhu) December 12, 2019
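The softmax bottleneck refers to the fact that a single softmax over W·h produces a low-rank log-probability matrix that cannot represent all context-to-word distributions. Mixtape is a cheaper alternative to the mixture-of-softmaxes (MoS) output layer that first broke this bottleneck; the sketch below shows the simpler MoS idea for intuition only, not Mixtape itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfSoftmaxes(nn.Module):
    """Mixture-of-softmaxes output layer (the idea Mixtape makes cheaper).

    Mixing K softmaxes with context-dependent weights yields a high-rank
    log-probability matrix that a single softmax cannot express.
    """
    def __init__(self, hidden_dim, vocab_size, n_components=4):
        super().__init__()
        self.n_components = n_components
        self.prior = nn.Linear(hidden_dim, n_components)            # mixture weights
        self.latent = nn.Linear(hidden_dim, n_components * hidden_dim)
        self.decoder = nn.Linear(hidden_dim, vocab_size)            # shared projection

    def forward(self, h):
        # h: (batch, hidden_dim) context vector from the language model
        pi = F.softmax(self.prior(h), dim=-1)                       # (batch, K)
        hk = torch.tanh(self.latent(h)).view(-1, self.n_components, h.size(-1))
        probs = F.softmax(self.decoder(hk), dim=-1)                 # (batch, K, vocab)
        return (pi.unsqueeze(-1) * probs).sum(dim=1)                # (batch, vocab)
```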
Training data is often collected through a biased process. Models trained on such data are inherently biased. We demonstrate how adversarial training through disentangled representations can reduce the effect of spurious correlations present in datasets: https://t.co/wgdJwChIJV pic.twitter.com/YjJpnY1E8k
— DeepMind (@DeepMindAI) December 12, 2019
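The general recipe behind results like this is adversarial removal of a spurious attribute from a learned representation. A common instantiation (not necessarily the paper's exact setup, which works through disentangled representations) is a gradient reversal layer plus an auxiliary head that tries to predict the spurious attribute; a minimal sketch with made-up module names and dimensions:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # shared representation
task_head = nn.Linear(64, 10)                            # predicts the label we care about
bias_head = nn.Linear(64, 2)                             # predicts the spurious attribute

def debiased_loss(x, y, spurious, lambd=1.0):
    z = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(z), y)
    # The adversary tries to recover the spurious attribute from z; the reversed
    # gradient pushes the encoder to discard that information.
    adv_loss = nn.functional.cross_entropy(bias_head(GradReverse.apply(z, lambd)), spurious)
    return task_loss + adv_loss
```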
Lessons from NMT on 103 languages:
- Encoder reps cluster based on linguistic similarity
- Reps of the source lang learned by the encoder depend on the target language & vice versa
- Reps of high-resource or similar langs are more robust when fine-tuning #WiML2019 https://t.co/gwfakIUE2k pic.twitter.com/lLBb2cFyhj
— Rachel Thomas (@math_rachel) December 11, 2019
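A quick way to see the first point for yourself, assuming you can extract per-language encoder representations from a multilingual NMT model: average them per language and cluster. Everything named below (the embeddings, the dimension, the language set) is hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical inputs: mean-pooled encoder representations per language,
# computed over a held-out set of sentences.
rng = np.random.default_rng(0)
lang_embeddings = {lang: rng.standard_normal(512) for lang in ["de", "nl", "hi", "ur"]}

langs = list(lang_embeddings)
X = np.stack([lang_embeddings[l] for l in langs])

# Agglomerative clustering of language-level representations; per the tweet,
# linguistically related languages end up in the same clusters.
Z = linkage(X, method="ward")
print(dict(zip(langs, fcluster(Z, t=2, criterion="maxclust"))))
```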
Taking derivatives is usually uncreative & tedious. Software can do it automatically. OTOH, in reverse, integrals are *much* harder, need creativity, & challenge symbolic computation. Surprisingly, a simple Transformer can learn to integrate really well. https://t.co/199avwICgO pic.twitter.com/aiVG4YyI6N
— Reza Zadeh (@Reza_Zadeh) December 11, 2019
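The referenced work treats integration as sequence-to-sequence translation over symbolic expressions, and one convenient property is that any prediction can be verified cheaply by differentiating it back. A minimal sketch of that verification step with sympy; the "prediction" here is a hand-written stand-in for a model output.

```python
import sympy as sp

x = sp.symbols("x")

def is_valid_antiderivative(candidate, integrand):
    """Verify a predicted antiderivative by differentiating it back."""
    return sp.simplify(sp.diff(candidate, x) - integrand) == 0

# Hypothetical model output for the integrand x*cos(x):
integrand = x * sp.cos(x)
prediction = x * sp.sin(x) + sp.cos(x)  # what a trained model might emit
print(is_valid_antiderivative(prediction, integrand))  # True
```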
Very cyberpunk: "We instead ... perform pixel-level image translation via CycleGAN to convert the [video of a human demonstrating a task] into a video of a robot, which can then be used to construct a reward function for a model-based RL algorithm." https://t.co/NcoMeFeNkZ
— Miles Brundage (@Miles_Brundage) December 11, 2019
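For reference, the pixel-level translation leans on CycleGAN's cycle-consistency objective: translate frames to the other domain and back, and penalize any drift. A generic sketch of that term only (not the paper's full pipeline of reward construction and model-based RL; the generator names are placeholders):

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_h2r, G_r2h, human_frames, robot_frames):
    """Generic CycleGAN cycle term.

    G_h2r: generator mapping human-demo frames to robot frames (hypothetical).
    G_r2h: generator mapping robot frames back to human-demo frames (hypothetical).
    The adversarial losses on each domain are omitted for brevity.
    """
    loss_human = l1(G_r2h(G_h2r(human_frames)), human_frames)
    loss_robot = l1(G_h2r(G_r2h(robot_frames)), robot_frames)
    return loss_human + loss_robot
```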
The next Kaggle reading group paper is "On NMT Search Errors and Model Errors: Cat Got Your Tongue?" by Stahlberg & Byrne @emnlp2019. (@mgalle tweeted about it a while ago and it's been on my list ever since.) Come join in tomorrow @ 9:00 AM Pacific! 😊🔖 https://t.co/jYHYcXyhd5
— Rachael Tatman (@rctatman) December 11, 2019
Facebook's algorithms skew the delivery of political ads towards partisan audiences by inferring users' political affiliations and automatically analyzing ad content. New paper by researchers at Northeastern, USC, and Upturn: https://t.co/Fl3GYdLtVT pic.twitter.com/ljkpw4YDgX
— Arvind Narayanan (@random_walker) December 10, 2019
This week's excitement and adventure in Machine Learning: #NeurIPS2019! 🎉 Talks & slides are live and being posted online https://t.co/SSpuVWe2NQ
— Andrej Karpathy (@karpathy) December 9, 2019
Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection https://t.co/MvRu3dqTxk pic.twitter.com/cciL5Db73x
— Aäron van den Oord (@avdnoord) December 9, 2019
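This result comes from the contrastive self-supervised pre-training line of work, where each example's two views are scored against in-batch negatives. A minimal, illustrative InfoNCE-style loss (not the exact objective used in the linked work):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(queries, keys, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss: each query should match its own
    key, with every other key in the batch serving as a negative."""
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature                      # (batch, batch) similarities
    targets = torch.arange(q.size(0), device=q.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: queries and keys would be embeddings of two views (e.g. different
# crops) of the same images, produced by the encoder being pre-trained.
print(info_nce_loss(torch.randn(8, 128), torch.randn(8, 128)))
```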
Google, Intel, MIT, and more: a NeurIPS conference AI research tour #NeurIPS2019 https://t.co/9zUaFgSeib
— Nando de Freitas (@NandoDF) December 7, 2019