#CVPR2019 Oral Videos https://t.co/OGBQmcop7C pic.twitter.com/WmW7Z7fRAX
— ML Review (@ml_review) June 24, 2019
Deep Set Prediction Networks https://t.co/c63TOjvGEd interesting; we now have a lot of effective encoders for objects, sequences, sets, graphs etc., but decoders for sets are tricky. Imo this is holding back object detection, preventing end-to-end-ness and demanding nms (ew).
— Andrej Karpathy (@karpathy) June 24, 2019
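The non-maximum suppression (NMS) step Karpathy calls out is the greedy post-processing that end-to-end set decoders would make unnecessary: detectors emit many overlapping boxes, and NMS keeps only the highest-scoring box in each cluster. A minimal illustrative sketch (not from the linked paper; box format and threshold are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring remaining box
    and discard all boxes that overlap it above the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Because this is a hand-tuned, non-differentiable step applied after the network, it is exactly the kind of thing a proper set decoder would fold into the model itself.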
1/3 This is a good study which shows the extent of diminishing returns in deep learning: https://t.co/LiU3ZFfkGE . The largest model has a whopping 829M parameters, gets 3% performance gain over a model with 10x less parameters, with flattening curve. pic.twitter.com/PlgQr1u8fr
— Filip Piekniewski (@filippie509) June 24, 2019
Introducing MASS — A pre-training method that outperforms BERT and GPT in sequence to sequence language generation tasks https://t.co/BzuZd7MKIL
— Bojan Tunguz (@tunguz) June 24, 2019
Exploring Model-based Planning with Policy Networks
@TingwuWang and Jimmy Ba
— Brandon Amos (@brandondamos) June 24, 2019
Paper: https://t.co/hXOUZ7OCTG
Code: https://t.co/HUKfhV635h pic.twitter.com/Yis26OpXyx
New NLP News — BERT, GPT-2, XLNet, NAACL, ICML, arXiv, EurNLP https://t.co/4URmn0kd9e (via @revue)
— Sebastian Ruder (@seb_ruder) June 24, 2019
I remember that DeepDream has a tendency to hallucinate dog pictures as it used VGG trained on ImageNet, which contained many dogs. Wonder what DeepDream with pre-trained model on Instagram would look like. https://t.co/mqOG5gBUYw pic.twitter.com/pB2UyN65si
— hardmaru (@hardmaru) June 24, 2019
This is very nice work! (Though in full disclosure, we should note that @ethayarajh is turning up at Stanford in the Fall.) https://t.co/IvDSuzOa4x
— Stanford NLP Group (@stanfordnlp) June 23, 2019
resnext101_32x8d_wsl: the ConvNet pre-trained on Instagram hashtags and fine-tuned on ImageNet, yielding a record-breaking 85.4% top-1 accuracy, is now available. https://t.co/5w04pugDcC
— Yann LeCun (@ylecun) June 23, 2019
New model from XLM outperforms BERT on all GLUE tasks, trained on the same data.
— Yann LeCun (@ylecun) June 21, 2019
Get it here: https://t.co/cYYOETEeaj
Tweets from Guillaume & Alex:... https://t.co/2ysUltBH7f
When your smart speaker (or smartphone) accurately detects agonal breathing and calls 911 about an impending heart attack (proof of concept) #AI https://t.co/Eb8BYo9syX @NPJ Digital Medicine more @UWCSE innovation by @realjustinchan @jesunshine (could also use for sleep apnea dx) pic.twitter.com/NEgX5y5Cv2
— Eric Topol (@EricTopol) June 19, 2019
ICML 2019 Tutorial: Recent Advances in Population-Based Search for Deep Neural Networks
— hardmaru (@hardmaru) June 19, 2019
Great talk that introduces novelty search, quality diversity, open-endedness, and indirect encoding. I can recommend watching the whole thing! https://t.co/t4R0Nqbjo5 https://t.co/htzf3cBFAM pic.twitter.com/WAGjU0h6VP