Nice video by Abhishek about his AutoNLP project! Check it out https://t.co/dzavvR74vl
— Thomas Wolf (@Thom_Wolf) February 28, 2021
Thinking about doing a periodic thread of lesser-known seaborn features or tricks.
— Michael Waskom (@michaelwaskom) February 26, 2021
First up: most functions will do something sensible for many types of input, like this dict of Series with different lengths.
(Surprised? You may want to read this: https://t.co/VF2ugu9ICO) pic.twitter.com/gMjl3WKmzo
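What seaborn does with a dict of different-length Series is essentially pandas index alignment: the inputs are assembled into a wide-form table, with the shorter columns padded with NaN. A minimal sketch of that behavior (the variable names are mine, not from the thread) using only pandas:

```python
import numpy as np
import pandas as pd

# Two Series of different lengths -- the kind of input the tweet mentions
# passing straight to a seaborn function.
rng = np.random.default_rng(0)
data = {
    "short": pd.Series(rng.normal(size=5)),
    "long": pd.Series(rng.normal(size=8)),
}

# Seaborn's wide-form handling aligns these on their index, padding the
# shorter one with NaN -- the same result as pd.DataFrame(data).
wide = pd.DataFrame(data)
print(wide.shape)                   # (8, 2)
print(int(wide["short"].isna().sum()))  # 3 padded values
```

A call like `sns.boxplot(data=data)` accepts the dict directly and each Series becomes one box, missing values ignored.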
Thanks again @PyDataMTL for inviting me to speak tonight. Here are my slides from the talk on how to finetune (large) Transformers models. https://t.co/H2P2PvlfS4
— Sylvain Gugger (@GuggerSylvain) February 26, 2021
I've added a quick intro to ggfx for those curious about how to use it https://t.co/PNegGjQy9I
— Thomas Lin Pedersen (@thomasp85) February 24, 2021
Recent Advances in Language Model Fine-tuning
— Sebastian Ruder (@seb_ruder) February 24, 2021
New blog post that takes a closer look at fine-tuning, the most common way large pre-trained language models are used in practice. https://t.co/A5KYoq5zuw
20 hours of new lectures on Deep Learning and Reinforcement Learning with lots of examples https://t.co/3cXv2E0rcd
— /MachineLearning (@slashML) February 23, 2021
8 reasons machine learning projects fail - by @elenasamuylova
— Alexey Grigorev (@Al_Grigor) February 21, 2021
🔸 Doing ML for wrong reasons
🔸 ML not needed
🔸 Bad data
🔸 Poor problem framing
🔸 Model ≠ product
🔸 Bad infrastructure
🔸 No trust from stakeholders
🔸 Production failures
Solution? 👉 https://t.co/mvs7sJyxDe pic.twitter.com/poTAzwWT4b
.@GradioML is a neat library for interacting with a trained model. It's useful for debugging and for giving collaborators an easy way to interact with the model.
— Anthony Goldbloom (@antgoldbloom) February 21, 2021
Here's a notebook to try it: https://t.co/cwzzdpCTbZ
(Hit "Copy and Edit" and then run the notebook.) pic.twitter.com/FsVKULbLaV
This is a brand new podcast dedicated to #MLOps. I had the privilege of kicking it off with some very talented friends 👇
— Hamel Husain (@HamelHusain) February 20, 2021
If you are new to MLOps, definitely check this out https://t.co/SEfA7Zjgl7
Ray is an open-source library for parallel and distributed Python. It can be paired with #PyTorch to rapidly scale machine learning applications. Learn more below: https://t.co/UhfPxSEoWm
— PyTorch (@PyTorch) February 19, 2021
If you're interested in GNNs for combinatorial tasks (certainly an exciting time!), we've released our 43-page comprehensive survey on the area! + detailed blueprint of algorithmic reasoning in S3.3. https://t.co/F4TG4svKMG
— Petar Veličković (@PetarV_93) February 19, 2021
with @chrsmrrs @69alodi @lyeskhalil @qcappart & Didier pic.twitter.com/P6TANTgLvr
New in the scikit-learn main branch: a short tutorial on assessing and tuning quantile regression models with the new pinball loss metric: https://t.co/V3naQcxwz6
— Olivier Grisel (@ogrisel) February 19, 2021
before and after hyper-parameter tuning on data with asymmetric, heteroscedastic noise: pic.twitter.com/jVoPfENlxp
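The pinball loss the tutorial tunes against has a simple closed form: for quantile level α, it averages max(α(y − ŷ), (α − 1)(y − ŷ)), so under-predictions cost α per unit and over-predictions cost 1 − α. A minimal NumPy sketch of the formula (not scikit-learn's own implementation; the toy data is mine):

```python
import numpy as np

def pinball_loss(y_true, y_pred, alpha):
    """Mean pinball loss for quantile level alpha in (0, 1).

    Minimizing this loss pushes y_pred toward the alpha-quantile of
    y_true: high alpha penalizes under-prediction, low alpha penalizes
    over-prediction.
    """
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.maximum(alpha * diff, (alpha - 1) * diff)))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.5, 1.5, 1.5, 1.5])

# A constant prediction of 1.5 mostly under-predicts, so it is punished
# far more at the 0.9 quantile than at the 0.1 quantile.
print(pinball_loss(y_true, y_pred, 0.9))  # 1.025
print(pinball_loss(y_true, y_pred, 0.1))  # 0.225
```

At α = 0.5 the loss reduces to half the mean absolute error, which is why tuning a model separately per quantile level, as the linked tutorial does, matters for asymmetric noise.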