Humanity’s CO2 emissions, visualized. #dataviz
Not looking good…
Source: https://t.co/Qx7dY2o4op pic.twitter.com/3BfCx2DcAn
— Randy Olson (@randal_olson) April 14, 2022
Information on recommender systems is hard to come by
But did you know that @eugeneyan has put together a list of
✅ 68 RecSys papers and articles
✅ 57 papers and articles on Search and Ranking
This is amazing 🥳🍾 thank you so very much for this!!! https://t.co/qdvwZ1zfLc pic.twitter.com/c1Q9Vx3YNL
— Radek Osmulski 🇺🇦 (@radekosmulski) April 14, 2022
"Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection" -- an interesting paper on this subject where the authors have a theoretical explanation that ReLU and Softmax are (partly) to blame: https://t.co/STbOjj9YJx pic.twitter.com/pEoztyQRHw
— Sebastian Raschka (@rasbt) April 14, 2022
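For context, the baseline this line of work critiques is scoring out-of-distribution inputs by their maximum softmax probability. A minimal sketch of that baseline (my own illustration, not the paper's code) shows the failure mode: logits that grow linearly far from the training data, as they do for ReLU networks, make the softmax saturate, so the "OOD" input looks even more confident.

```python
# Minimal sketch (not from the paper): max-softmax-probability OOD scoring.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ood_score(logits):
    """Lower max softmax probability => treated as more likely out-of-distribution."""
    return 1.0 - softmax(logits).max(axis=-1)

in_dist_logits  = np.array([[6.0, 0.5, -1.0]])   # confidently class 0
far_away_logits = np.array([[40.0, 3.0, -5.0]])  # scaled-up logits, far from the data

# The far-away input gets an even *lower* OOD score (i.e., looks more in-distribution),
# which is exactly the unreliability the paper analyzes.
print(ood_score(in_dist_logits), ood_score(far_away_logits))
```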
InCoder: A Generative Model for Code Infilling and Synthesis
abs: https://t.co/qAbrJzgVkw
project page: https://t.co/Sp87l2oGix pic.twitter.com/U0iNz40ZWq
— AK (@ak92501) April 14, 2022
A Review on Language Models as Knowledge Bases
abs: https://t.co/C70a1YM8AX pic.twitter.com/Ce84fhz5yX
— AK (@ak92501) April 14, 2022
People who are competing on Kaggle. Are you calibrating your classifiers / proba scores at all? (E.g., using CalibratedClassifierCV, https://t.co/cS7zI5CFTX.) Do you find that it noticeably improves your ROC AUC? (cc @tunguz @svpino @JFPuget)
— Sebastian Raschka (@rasbt) April 13, 2022
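For readers who haven't used it, here is a minimal sketch of the kind of calibration the question refers to, wrapping a classifier in scikit-learn's CalibratedClassifierCV; the dataset, model choice, and hyperparameters are placeholders of my own.

```python
# Minimal sketch (my own example): calibrating predicted probabilities with
# CalibratedClassifierCV and comparing ROC AUC before and after.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = RandomForestClassifier(n_estimators=200, random_state=0)
base.fit(X_train, y_train)
raw_auc = roc_auc_score(y_test, base.predict_proba(X_test)[:, 1])

# Isotonic calibration with 5-fold internal cross-validation on the training data.
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(n_estimators=200, random_state=0),
    method="isotonic", cv=5,
)
calibrated.fit(X_train, y_train)
cal_auc = roc_auc_score(y_test, calibrated.predict_proba(X_test)[:, 1])

print(f"ROC AUC uncalibrated: {raw_auc:.4f}, calibrated: {cal_auc:.4f}")
```

Worth noting when interpreting the question: ROC AUC depends only on the ranking of scores, so a single monotonic calibration map leaves it unchanged; with cv greater than 1, the averaged per-fold calibrators can shift rankings slightly, which is presumably what the effect on AUC would come from.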
Few-shot Learning with Noisy Labels
abs: https://t.co/wGnBAoCH8D
results show that TraNFS is on-par with leading FSL methods on clean support sets, yet outperforms them, by far, in the presence of label noise pic.twitter.com/HdIFPzpQKM
— AK (@ak92501) April 13, 2022
What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?
abs: https://t.co/Lk71qAPdzm
github: https://t.co/hIzImwwFoD pic.twitter.com/NSI294Gs7M
— AK (@ak92501) April 13, 2022
Chinchilla: A 70 billion parameter language model that outperforms much larger models, including Gopher. By revisiting how to trade-off compute between model & dataset size, users can train a better and smaller model. Read more: https://t.co/RaZGUclBYQ 1/3 pic.twitter.com/TNWI1RLloA
— DeepMind (@DeepMind) April 12, 2022
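A rough back-of-the-envelope sketch of the compute/data trade-off the tweet describes (my own arithmetic, not DeepMind's code), assuming the commonly cited approximations of about 6 * N * D training FLOPs and roughly 20 training tokens per parameter:

```python
# Rough sketch (not from the paper): split a fixed FLOP budget between model size
# and training tokens, assuming C ~= 6 * N * D and ~20 tokens per parameter.
def compute_optimal_split(compute_budget_flops, tokens_per_param=20.0):
    """Return an approximate (parameters, training tokens) pair for a FLOP budget."""
    # With C = 6 * N * D and D = tokens_per_param * N, solve for N.
    n_params = (compute_budget_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Chinchilla itself was ~70B parameters trained on ~1.4T tokens.
params, tokens = compute_optimal_split(5.76e23)
print(f"~{params / 1e9:.0f}B parameters, ~{tokens / 1e12:.1f}T tokens")
```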
"Machine Learning State-of-the-Art with Uncertainties" -- great paper by @psteinb_ & @helmholtz_ai
— Sebastian Raschka (@rasbt) April 12, 2022
making a case for confidence intervals in ML benchmarks, or really any ML work. And no, adding CI's (e.g. via normal approx.) doesn't have to be expensive :) https://t.co/pgW6ILD7JW https://t.co/n1LMuCx1RQ
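To illustrate how cheap the normal-approximation route is, here is a minimal sketch (my own, not from the paper) of a 95% confidence interval for test-set accuracy, treating each prediction as a Bernoulli trial:

```python
# Minimal sketch: normal-approximation CI for accuracy, acc +/- z * sqrt(acc*(1-acc)/n).
import math

def accuracy_confidence_interval(accuracy, n_test, z=1.96):
    """95% CI (z=1.96) for an accuracy measured on n_test examples."""
    half_width = z * math.sqrt(accuracy * (1.0 - accuracy) / n_test)
    return max(0.0, accuracy - half_width), min(1.0, accuracy + half_width)

# e.g. 88% accuracy measured on 2,000 test examples
low, high = accuracy_confidence_interval(0.88, 2000)
print(f"95% CI: [{low:.3f}, {high:.3f}]")  # roughly [0.866, 0.894]
```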
No Token Left Behind: Explainability-Aided Image Classification and Generation
abs: https://t.co/n5Jeu5Q8c7 pic.twitter.com/hLvkQgVFrr
— AK (@ak92501) April 12, 2022
Improve your #Python coding workflow with the #JupyterLab inspector ☺️ https://t.co/2qX5UuwXn0 @timdumol @SylvainCorlay @ccordoba12 @CAM_Gerlach @ProjectJupyter pic.twitter.com/f4glWEK3ak
— Martin Renou (@martinRenou) April 11, 2022