I wrote a brief, non-technical (mostly) article about the potential for offline RL to change how we make decisions with data: https://t.co/mBnNmtZvNv
— Sergey Levine (@svlevine) September 15, 2020
A recent AutoML evaluation: "Can AutoML outperform humans? An evaluation on popular OpenML datasets using AutoML Benchmark" Answer: Yes, sometimes, and if not, AutoML is not that far off. Good news, I guess; AutoML is not as bad as most people think it is. https://t.co/v3Xp3YaGTg pic.twitter.com/7o27kNlIZM
— Sebastian Raschka (@rasbt) September 4, 2020
A Wholistic View of Continual Learning: https://t.co/DnOrnfpAYu
— Denny Britz (@dennybritz) September 4, 2020
If you're interested in Continual Learning and Active Learning, this paper comes with a neat taxonomy! pic.twitter.com/WJYWoynMvQ
New on the @rstudio #AI blog: An introduction to weather prediction with #DeepLearning https://t.co/AgjliiLlqj #rstats
— RStudio (@rstudio) September 1, 2020
The samples from these new diffusion models look great indeed https://t.co/RXelMlrzX9
— Andrej Karpathy (@karpathy) August 28, 2020
Oh wow, I had missed this! Really interesting open #recsys course by Google that goes all the way from basic content-based and matrix factorization to Deep Neural Nets (all of it with code on Colab!): https://t.co/jMY5u3OJBk
— Xavier Amatriain - BLM (@xamat) August 27, 2020
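For a taste of the course's starting point, here's a minimal matrix factorization sketch (my own toy example in NumPy, not code from the course): learn user and item embeddings U and V so that U @ V.T approximates the observed ratings.

```python
# Toy matrix factorization for ratings prediction (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0],          # tiny ratings matrix; 0 = unobserved
              [4, 0, 1],
              [0, 1, 5]], dtype=float)
mask = R > 0                      # only fit the observed entries
k, lr, reg = 2, 0.05, 0.01        # latent dim, step size, L2 penalty
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user embeddings
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item embeddings

for _ in range(2000):
    err = mask * (R - U @ V.T)    # reconstruction error on observed cells
    dU = err @ V - reg * U        # gradient step direction for U
    dV = err.T @ U - reg * V      # ... and for V
    U += lr * dU
    V += lr * dV

print(np.round(U @ V.T, 2))       # predicted ratings, incl. the missing cells
```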
"Which Optimizer should I use for my Machine Learning Project?" https://t.co/ea3d92diud
— Sebastian Raschka (@rasbt) August 14, 2020
That's a neat quick-reference table (personally, I'm lazy and usually just stick to Adam with default settings; it works pretty well out of the box most of the time): pic.twitter.com/BbdKlVq1Rk
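For reference, "Adam with defaults" really is a one-liner in PyTorch. A minimal sketch with a placeholder model and a dummy batch; the defaults noted in the comment are PyTorch's:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)          # stand-in for any model
# Defaults: lr=1e-3, betas=(0.9, 0.999), eps=1e-8
optimizer = torch.optim.Adam(model.parameters())

x, y = torch.randn(32, 10), torch.randn(32, 1)    # dummy batch
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```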
A very short history of some times we solved AI https://t.co/UZ7Opi8xIk
— Julian Togelius (@togelius) August 3, 2020
Indexing your data sets and documenting them is another example of a problem lots of companies have but no great tool exists for, so everyone builds their own. Here's Shopify's: https://t.co/8LvXIkaHqt
— Erik Bernhardsson (@fulhack) August 1, 2020
Can supervised contrastive learning really outperform cross-entropy?
— Lavanya (@lavanyaai) July 30, 2020
In this report, @RisingSayak and Shweta introduce SCL and present a plethora of experiments to validate its effectiveness. #machinelearning #deeplearning #100daysofmlcode https://t.co/D9YMZ8Uy5i
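Roughly, SCL pulls each embedding toward same-class embeddings and pushes it away from the rest. A minimal PyTorch rendering of the supervised contrastive loss (my own sketch of the Khosla et al. formulation, not code from the report):

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    # features: (N, D) embeddings; labels: (N,) integer class ids.
    z = F.normalize(features, dim=1)                 # unit-norm embeddings
    sim = z @ z.T / temperature                      # pairwise similarity logits
    n = z.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    sim = sim.masked_fill(~not_self, float("-inf"))  # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0                           # anchors with >=1 positive
    sum_pos = log_prob.masked_fill(~pos, 0.0).sum(dim=1)
    return -(sum_pos[valid] / pos_counts[valid]).mean()

loss = supcon_loss(torch.randn(8, 16), torch.randint(0, 3, (8,)))
```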
Just finished a trip down memory lane. Deep Learning's Most Important Ideas - A Brief Historical Review: https://t.co/XpBxi9xrCF
— Denny Britz (@dennybritz) July 29, 2020
If you want to get into Deep Learning research, these are what I consider the most important papers to start with!
Good post on the use of BPE (byte pair encodings) for I/O of language models, pointing out a subtle under-the-hood detail with unintuitive repercussions https://t.co/vZ5R5lqteP e.g. hello, Hello, and HELLO all tokenize completely differently, possibly into a different number of tokens each
— Andrej Karpathy (@karpathy) July 28, 2020
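This is easy to verify with GPT-2's BPE vocabulary; a quick check via the Hugging Face transformers tokenizer (my choice of tooling for illustration, not the post's):

```python
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
for word in ["hello", "Hello", "HELLO"]:
    ids = tok.encode(word)
    print(word, ids, tok.convert_ids_to_tokens(ids))
# Each casing maps to different token ids, and the all-caps form
# typically splits into more tokens than the lowercase one.
```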