Good reading on AI alignment, I've been wondering how one could steer LLMs with an equivalent of Three Laws of Robotics https://t.co/82X9F93qRw
— Andrej Karpathy (@karpathy) December 17, 2022
Performing Neural Architecture Search (NAS) to identify optimal NN architectures can be cumbersome and time-consuming, and requires expertise.
— PyTorch (@PyTorch) November 23, 2022
Read more in this 🧵for how to use Multi-Objective Bayesian NAS in Ax to overcome these challenges.
1/4 pic.twitter.com/5KIgrXQDCq
There's a confusing number of plots for visualizing SHAP.
— Christoph Molnar (@ChristophMolnar) November 17, 2022
I put together a cheat sheet with the most important SHAP plots (+interpretation) for explaining machine learning models.
Get it here: https://t.co/iNILd5EhER
It's free!
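Before reaching for the plots, it helps to remember what a SHAP value is. For a plain linear model the values have a closed form: each feature's contribution is its weight times its deviation from the feature mean, and the contributions sum exactly to the prediction minus the average prediction. A minimal numpy sketch (synthetic data, illustrative only):

```python
import numpy as np

# For a linear model f(x) = w @ x + b, the SHAP value of feature i
# is exactly w_i * (x_i - E[x_i]) -- no sampling needed.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # synthetic background data
w = np.array([2.0, -1.0, 0.5])         # model weights (made up)
b = 0.1

x = X[0]                               # instance to explain
shap_values = w * (x - X.mean(axis=0))

# Additivity check: SHAP values sum to f(x) - E[f(X)]
f = lambda X: X @ w + b
print(np.isclose(shap_values.sum(), f(x) - f(X).mean()))  # True
```

Every SHAP plot (waterfall, beeswarm, bar, ...) is just a different view of these per-feature, per-instance contributions.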
Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding
— AK (@_akhaliq) November 15, 2022
abs: https://t.co/C6Dq7bX9VK
project page: https://t.co/6nTi6yb8xN
github: https://t.co/mnfKG8Nhj2 pic.twitter.com/DSxii6Exim
Highly recommend this talk by @marksaroufim, "Dealing with Career Stagnation: my Machine Learning Story"
— Hamel Husain (@HamelHusain) November 14, 2022
This talk is amazingly relevant again today: https://t.co/H0VqhgZfim
ZerO Initialization: Initializing Neural Networks with only Zeros and Ones
— hardmaru (@hardmaru) November 11, 2022
A fully deterministic initialization scheme which sets the weights to only 0s and 1s can achieve SOTA on various datasets including ImageNet. Maybe random weights are unnecessary. https://t.co/t6u3S6Dj71 pic.twitter.com/XgszDvat6T
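To make the idea concrete: the simplest deterministic 0/1 initialization is an identity-like map, so each layer starts by passing its input through unchanged. This is only an illustrative sketch, not the paper's full scheme (ZerO also uses Hadamard-based transforms to handle non-square layers):

```python
import numpy as np

def zero_one_init(fan_in, fan_out):
    """Deterministic weight init using only 0s and 1s.

    Illustrative sketch: 1s on the (partial) diagonal make the layer
    an identity-like map over the overlapping dimensions.
    """
    W = np.zeros((fan_out, fan_in))
    n = min(fan_in, fan_out)
    W[np.arange(n), np.arange(n)] = 1.0
    return W

W = zero_one_init(4, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
print(W @ x)  # [1. 2. 3. 4.] -- the layer starts as the identity
```

Note the appeal: runs are bit-for-bit reproducible with no seed, since nothing about the init is random.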
Over the past year and a half I’ve had the privilege of collaborating with a brilliant @Caltech Computational Political Science group centered around prof. @rmichaelalvarez.
— Bojan Tunguz (@tunguz) November 5, 2022
1/8 pic.twitter.com/Gg3xrhvSjH
You can read more about it in the following blog post by @DannyCEbanks: https://t.co/oHlAoqtSab
— Bojan Tunguz (@tunguz) November 5, 2022
Link to Prof. Alvarez's Caltech profile: https://t.co/HdXau1uDVY #PoliticalScience #Politics #SocialScience #DataScience #MachineLearning #NaturalLanguageProcessing #DS #ML #NLP
8/8
Two rules for Einstein summation:
— @radek@sigmoid.social (Mastodon) 🇺🇦 (@radekosmulski) November 5, 2022
• dimensions in inputs with the same name get multiplied
• dimensions omitted from output get summed
inputs: left of ->
outputs: right of ->
Anything I am missing? pic.twitter.com/AbKn1BDOYl
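The two rules above cover most einsum expressions. A small numpy demo, showing that "repeated index → multiply" plus "omitted index → sum" is exactly matrix multiplication:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# Rule 1: the index k appears in both inputs -> elementwise multiply along k
# Rule 2: k is omitted from the output   -> sum over k
# Together, that's matrix multiplication:
C = np.einsum('ik,kj->ij', A, B)
print(np.allclose(C, A @ B))  # True

# Omitting every index from the output sums over everything:
total = np.einsum('ik->', A)
print(total == A.sum())  # True
```

One rule worth adding to the list: an index repeated *within a single input* (e.g. `'ii->'`) walks the diagonal, which is how einsum expresses a trace.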
What an excellent video on Fast Fourier Transform by @veritasium! 👏 https://t.co/Gzhu8iOJ85
— Aurélien Geron (@aureliengeron) November 3, 2022
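The punchline of the FFT is that it computes the same discrete Fourier transform as the naive O(n²) matrix multiplication, just in O(n log n). A quick numpy check of that equivalence:

```python
import numpy as np

n = 8
x = np.random.default_rng(1).normal(size=n)

# Naive DFT: X_k = sum_j x_j * exp(-2*pi*i*j*k / n), as an n x n matrix product
j, k = np.meshgrid(np.arange(n), np.arange(n))
dft_matrix = np.exp(-2j * np.pi * j * k / n)
X_naive = dft_matrix @ x

# The FFT gives the identical result, in O(n log n) instead of O(n^2)
print(np.allclose(X_naive, np.fft.fft(x)))  # True
```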
The math of Gaussian Mixture Model Clustering can be tough for undergrads to grasp, but it gives a TON of insight into how GMM works!
— Chelsea Parlett-Pelleriti (@ChelseaParlett) October 26, 2022
I made this GMM math worksheet to do with my class. https://t.co/oAYyMvNSRE pic.twitter.com/6ApX4GBvj2
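The step students usually find hardest is the E-step: computing each point's "responsibility" under each Gaussian, r_ik = π_k N(x_i | μ_k, σ_k) / Σ_j π_j N(x_i | μ_j, σ_j). A small numpy-only sketch for a 1-D, two-component mixture (parameters made up for illustration):

```python
import numpy as np

x = np.array([-2.0, -1.5, 1.8, 2.2])   # data points
pi = np.array([0.5, 0.5])              # mixing weights
mu = np.array([-2.0, 2.0])             # component means
sigma = np.array([1.0, 1.0])           # component std devs

# Gaussian density of each point under each component (shape: 4 x 2)
pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Responsibilities: weight by pi_k, then normalize across components
resp = pi * pdf
resp /= resp.sum(axis=1, keepdims=True)
print(resp.round(3))  # each row sums to 1
```

The M-step then re-estimates μ, σ, and π as responsibility-weighted averages, and the two steps alternate until convergence.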
😨 Training an Object Detection Model is a very challenging task and involves tweaking so many knobs
— @farid@sigmoid.social (Mastodon) (@ai_fast_track) October 20, 2022
Here is an exhaustive 🎁 tips & tricks list 🎁 that you could use to boost your model performance
🧵 pic.twitter.com/sOvEUhCCwg