Image Registration: From SIFT to Deep Learning. Useful to know. https://t.co/iL7V7SuQRl
— Nando de Freitas (@NandoDF) July 18, 2019
Really nice comparison of different embedding methods, including contextual embeddings. It's from February, so it includes Bert but not GPT-2 or later models. https://t.co/S4QijmqfTy
— Rachael Tatman (@rctatman) July 10, 2019
(It's in Chinese but I found the Google translate to English in Chrome very readable. Yay NLP!)
I started seeing type hints in Python code I was working on about a year ago now.
— Vicki Boykis (@vboykis) July 9, 2019
My first thought: Nice!!
My second thought: Wait, what are type hints?
So I recently did a deep dive on what type hints bring to Python and whether we need to use them. https://t.co/cR1k1o0uVb
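To make the tweet's topic concrete, here is a minimal sketch of what Python type hints look like in practice (function names and values are illustrative, not from the linked article):

```python
from typing import Optional

def greet(name: str, excited: bool = False) -> str:
    # Hints document the expected types; they are not enforced at runtime,
    # but tools like mypy can check them statically.
    suffix = "!" if excited else "."
    return f"Hello, {name}{suffix}"

def find_user(users: dict[str, int], name: str) -> Optional[int]:
    # Optional[int] makes the "may be missing" case explicit to readers and checkers.
    return users.get(name)

print(greet("Ada"))
print(find_user({"ada": 1}, "grace"))
```

The `dict[str, int]` form requires Python 3.9+; earlier versions use `typing.Dict`.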
A Survey on Image Data Augmentation for Deep Learning [48pp]
— ML Review (@ml_review) July 9, 2019
By @CShorten30
Geometric transform, color space augment, kernel filters, mixing images, random erasing, feature space augment, adversarial training,
GANs, neural style transfer & meta-learning https://t.co/6DKAcbPASK pic.twitter.com/LWeT4okVSY
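Two of the techniques listed above (a geometric transform and random erasing) can be sketched in a few lines of NumPy; this is an illustrative toy, not code from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

def horizontal_flip(img: np.ndarray) -> np.ndarray:
    # Geometric transform: mirror the image along its width axis.
    return img[:, ::-1]

def random_erasing(img: np.ndarray, size: int = 8) -> np.ndarray:
    # Random erasing: zero out a random square patch so the model
    # cannot rely on any single region of the image.
    h, w = img.shape[:2]
    y = int(rng.integers(0, h - size))
    x = int(rng.integers(0, w - size))
    out = img.copy()
    out[y:y + size, x:x + size] = 0
    return out

img = rng.random((32, 32, 3))       # fake image with values in (0, 1)
aug = random_erasing(horizontal_flip(img))
```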
Neural Machine Reading Comprehension: Methods and Trends https://t.co/C7gRor6TR0
— Thomas Lahore (@evolvingstuff) July 3, 2019
This paper does a great job of explaining why we need to move past the separation of pre-written ops/kernels from end-user code. It's obstructing research.
— Jeremy Howard (@jeremyphoward) June 29, 2019
This is something Swift/MLIR is working to fix. https://t.co/hOILMvkSaV
🤓 Here it is, the 2019 State of AI Report by @soundboy and me. 130 slides covering the most important machine learning research, industry and political developments over the past 12 months. New section on China.
— Nathan Benaich (@NathanBenaich) June 28, 2019
👉 Please RT if you find interesting! https://t.co/tnTbh38SZS
Ever wondered how the Variational Autoencoder (VAE) model works? After reading this post, you’ll be equipped with the theoretical understanding of the inner workings of VAE, as well as being able to implement one yourself. #AI #ML https://t.co/G9J6B87uyt
— Mariya Yao (@thinkmariya) June 26, 2019
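The two pieces that make a VAE trainable, the reparameterization trick and the closed-form KL term, fit in a short NumPy sketch (this is a standalone illustration, not code from the linked post):

```python
import numpy as np

rng = np.random.default_rng(42)

def reparameterize(mu: np.ndarray, log_var: np.ndarray) -> np.ndarray:
    # Reparameterization trick: sample z = mu + sigma * eps, with eps ~ N(0, I),
    # so gradients can flow through mu and log_var despite the stochastic sample.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu: np.ndarray, log_var: np.ndarray) -> float:
    # Closed-form KL between the encoder's N(mu, sigma^2) and the prior N(0, I).
    return float(-0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var)))

mu = np.zeros(4)
log_var = np.zeros(4)
z = reparameterize(mu, log_var)
```

When the encoder output matches the prior exactly (`mu = 0`, `log_var = 0`), the KL term is zero; pushing `mu` away from zero makes it grow.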
A Visual Intro to NumPy and Data Representation
— Jay Alammar جهاد العمار (@jalammar) June 26, 2019
New blog post! A visual look at the workhorse of data analysis, machine learning, and scientific computing in the python ecosystem. We also look at how tables, images, and text are represented by numbers. https://t.co/fTnw1XfYy1 pic.twitter.com/WMLQA23fni
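The tweet's three examples of data as numbers (tables, images, text) each reduce to a NumPy array; a tiny sketch of the idea, with made-up values:

```python
import numpy as np

# A "table": rows are samples, columns are features.
table = np.array([[5.1, 3.5],
                  [4.9, 3.0]])

# A grayscale "image": a 2D grid of pixel intensities, 0-255.
image = np.zeros((28, 28), dtype=np.uint8)

# "Text": a sentence mapped to integer token ids via a vocabulary.
vocab = {"hello": 0, "world": 1}
tokens = np.array([vocab[w] for w in "hello world".split()])
```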
"How to Train Your ResNet"
— ML Review (@ml_review) June 23, 2019
by @MyrtleSoftware @dcpage3
Series of posts on how to efficiently train on a single GPU with insight into the training dynamics and extracted lessons for other settings https://t.co/ESdBOP5rKj pic.twitter.com/HMaHanrZGd
ICML 2019 Tutorial: Recent Advances in Population-Based Search for Deep Neural Networks
— hardmaru (@hardmaru) June 19, 2019
Great talk that introduces novelty search, quality diversity, open-endedness, and indirect encoding. I can recommend watching the whole thing! https://t.co/t4R0Nqbjo5 https://t.co/htzf3cBFAM pic.twitter.com/WAGjU0h6VP
New blog post! The tools in @ProjectJupyter are *super* extensible, so extensible that there's a new post about all the ways to extend Jupyter: https://t.co/VDLQPZXzAp
— Chris Holdgraf (@choldgraf) June 18, 2019