I just finished giving my keynote at Nvidia GTC'21.
— Yann LeCun (@ylecun) April 13, 2021
Slides are here: https://t.co/9GogK14k9D
Modern Artificial Intelligence 1980s–2021 by @SchmidhuberAI!
— Radek Osmulski (@radekosmulski) April 13, 2021
This talk delivers!
- starts with the Big Bang (literally)
- history of everything explained in the first 10 minutes
- only accelerates from there
https://t.co/Ya7NwvRaTq
Here is what I learned... pic.twitter.com/dMCfVH2PvD
10 Things You Need to Know About BERT and the Transformer Architecture That Are Reshaping the AI Landscape
— Sebastian Ruder (@seb_ruder) April 9, 2021
This super comprehensive post covers most things that are important in current NLP, including BERT, transfer learning, and avocado chairs
by @cathalhoran https://t.co/Hn458pkrk8 pic.twitter.com/srs4bIFrFw
"The likelihood is dead, long live the likelihood"
— Kyle Cranmer (@KyleCranmer) March 22, 2021
An article on simulation-based (aka likelihood-free) inference in the CERN newsletter by Johann Brehmer and me. https://t.co/BS8RaS8Q1E pic.twitter.com/Ga0ryfRkbB
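The core idea behind simulation-based (likelihood-free) inference can be illustrated with rejection ABC, the simplest member of the family: draw parameters from the prior, run the simulator, and keep the draws whose simulated summaries land close to the observed ones. A minimal toy sketch, assuming a Gaussian simulator and a mean summary statistic (all function names and numbers here are illustrative, not taken from the article):

```python
import random
import statistics

def simulator(theta, n=50, seed=None):
    # Toy "black-box" simulator: n draws from N(theta, 1).
    # In real SBI the simulator has no tractable likelihood.
    rng = random.Random(seed)
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def rejection_abc(observed, n_draws=20000, tol=0.1, seed=0):
    # Likelihood-free inference: accept prior draws whose simulated
    # summary statistic (here, the sample mean) is close to the data's.
    rng = random.Random(seed)
    obs_mean = statistics.fmean(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)  # flat prior over theta
        sim_mean = statistics.fmean(simulator(theta, seed=rng.random()))
        if abs(sim_mean - obs_mean) < tol:  # distance on summaries
            accepted.append(theta)
    return accepted  # samples from an approximate posterior

observed = simulator(2.0, seed=42)  # "real" data generated with theta = 2
posterior = rejection_abc(observed)
```

The accepted draws concentrate near the true parameter; modern SBI methods replace this brute-force rejection step with learned surrogates, but the accept-if-simulations-match logic is the same.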
Bayesian methods and what they offer compared to classical econometrics https://t.co/DGG64raX7d
— Andrew Gelman (@StatModeling) March 7, 2021
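One concrete thing Bayesian methods offer over a classical point estimate is a full posterior that combines prior information with the data. A minimal sketch using a conjugate beta-binomial model (the example and numbers are illustrative, not from the linked post):

```python
# Beta(a, b) prior on a success probability p; data: k successes in n trials.
# Conjugacy gives the posterior in closed form: Beta(a + k, b + n - k).
def posterior_params(a, b, k, n):
    return a + k, b + (n - k)

def beta_mean(a, b):
    # Mean of a Beta(a, b) distribution.
    return a / (a + b)

k, n = 7, 10
mle = k / n  # classical point estimate: 0.7
a_post, b_post = posterior_params(1, 1, k, n)  # flat Beta(1, 1) prior
posterior_mean = beta_mean(a_post, b_post)     # 8 / 12, shrunk toward 0.5
```

The posterior mean is pulled slightly toward the prior, and unlike the MLE it comes with a whole distribution for uncertainty quantification.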
A new blog post I wrote with Ishan Misra.
— Yann LeCun (@ylecun) March 4, 2021
An overview of Self-Supervised Learning.
We look at recent progress in SSL for vision & explain why SSL is more challenging with high-D continuous signals (images, video) than it is for discrete signals (text). https://t.co/DlL885CPpb
Recent Advances in Language Model Fine-tuning
— Sebastian Ruder (@seb_ruder) February 24, 2021
New blog post that takes a closer look at fine-tuning, the most common way large pre-trained language models are used in practice. https://t.co/A5KYoq5zuw
If you're interested in GNNs for combinatorial tasks (certainly an exciting time!), we've released our 43-page comprehensive survey on the area! + detailed blueprint of algorithmic reasoning in S3.3. https://t.co/F4TG4svKMG
— Petar Veličković (@PetarV_93) February 19, 2021
with @chrsmrrs @69alodi @lyeskhalil @qcappart & Didier pic.twitter.com/P6TANTgLvr
I've recently struggled to understand Python concurrency:
— Hamel Husain (@HamelHusain) February 15, 2021
- How threads & processes work on your OS vs Python
- The role of CPUs, hardware & the GIL
- Understanding beyond rules of thumb
After tons of research, I wrote this w/ the answers plus more: https://t.co/a6PoEN9dUQ
Regarding generalization in neural nets, one of the "Things that don't matter very much: The optimizer"
— Sebastian Raschka (@rasbt) February 12, 2021
(via Tom Goldstein's talk, "An empirical look at generalization in neural nets" https://t.co/CqLeHLte3H) pic.twitter.com/uzrvDCirrl
What a nice exposition by Marcus Hutter on learning curve theory: https://t.co/Vq37XJJ0Aw! Very nice read.
— Kyunghyun Cho (@kchonyc) February 9, 2021
Check out new work on ML-driven design and exploration of custom accelerators, showing how #MachineLearning facilitates architecture exploration by rapidly identifying high-performing configurations across a range of applications. Learn more → https://t.co/ZQaKSLnNxp
— Google AI (@GoogleAI) February 4, 2021