Visualizing RSS https://t.co/eZ2bbpDzwV pic.twitter.com/qjweZC14eY
— Chris Albon (@chrisalbon) March 19, 2019
Learn about Automatic Mixed Precision in this guest post from @NVIDIA, which increases training performance by up to 3X.
Read the post here ↓ https://t.co/fn1HWavdBh
— TensorFlow (@TensorFlow) March 19, 2019
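For context, here is a minimal sketch of what enabling AMP can look like with the TensorFlow 1.x graph-rewrite API (`tf.train.experimental.enable_mixed_precision_graph_rewrite`, available from TF 1.14); the exact mechanism in the linked post may differ, and the toy model below is hypothetical:

```python
import numpy as np
import tensorflow as tf  # assumes TF 1.14+; AMP needs a recent NVIDIA GPU to pay off

# Hypothetical toy regression graph.
x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])
pred = tf.layers.dense(x, 1)
loss = tf.reduce_mean(tf.square(pred - y))

opt = tf.train.MomentumOptimizer(0.01, 0.9)
# One call enables AMP: eligible ops are rewritten to float16,
# with automatic loss scaling to protect small gradients.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
train_op = opt.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, {x: np.random.randn(8, 4).astype(np.float32),
                        y: np.random.randn(8, 1).astype(np.float32)})
```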
How to use if __name__ == "__main__" by @discdiver https://t.co/3iz91oC6jO
— Rachel Thomas (@math_rachel) March 19, 2019
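A minimal sketch of the pattern the post covers (the file name is hypothetical): code under the guard runs when the file is executed directly, but not when it is imported.

```python
# my_module.py  (hypothetical file name)

def main():
    print("Running as a script")

if __name__ == "__main__":
    # True only when run as `python my_module.py`,
    # not when another module does `import my_module`.
    main()
```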
Looks like a great overview of the area: "Algorithms for Verifying Deep Neural Networks," Liu et al.: https://t.co/xjCb0ZBuUG
— Miles Brundage (@Miles_Brundage) March 19, 2019
"A Blogdown New Post Workflow w/ Github and Netlify" 👨‍💻 @grrrck https://t.co/tUSjuStjLB #rstats #rmarkdown
😻 Feat. deploy previews…
— Mara Averick (@dataandme) March 18, 2019
I gave two talks in Harvard's Mind, Brain, & Behavior Distinguished Lecture Series last week.
The slides are here:
- 2019-03-13 "The Power and Limits of Deep Learning":... https://t.co/Pp4flTGlZ8
— Yann LeCun (@ylecun) March 18, 2019
A free PDF version of Introduction to Data Science: Data Analysis and Prediction Algorithms with R is now available on https://t.co/vwMW921269
Thanks to all the readers who improved the first GitBook version through GitHub pull requests and issues, especially @biochemnerd!
— Rafael Irizarry (@rafalab) March 18, 2019
🏀 Ready for some bRacketology? @samfirke's got you covered:
⛹️‍♂️ "Making March Madness predictions - a how-to guide" https://t.co/UAc6sgIOW9 #rstats #MarchMadness pic.twitter.com/qyrN4iUGT8
— Mara Averick (@dataandme) March 18, 2019
The EM (Expectation-Maximization) Algorithm...
Explained in this Statistical Concepts Glossary: https://t.co/nX8PPXrPxl
and Explained in One Picture: https://t.co/O1JT0Ue0fk #abdsc #Statistics #DataScience #StatisticalLiteracy pic.twitter.com/eKTjP9j8Ns
— Kirk Borne (@KirkDBorne) March 16, 2019
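A minimal numpy sketch of EM in action, fitting a two-component 1-D Gaussian mixture (illustrative only; not taken from the linked glossary):

```python
import numpy as np

# Synthetic data: a 30/70 mix of two Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# Initial guesses for weights, means, variances.
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibility of each component for each point.
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = w * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

print(w, mu, var)  # should approach [0.3, 0.7], [-2, 3], [1, 1]
```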
New blog post: Why Operators Are Useful https://t.co/zmkRCxqGme
— Guido van Rossum (@gvanrossum) March 15, 2019
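As a hedged illustration of the post's topic (this example is mine, not from the post): Python lets user-defined types participate in operator notation, so value-like objects can be combined with `+` rather than a method call.

```python
# Hypothetical value-like class: `a + b` reads better than `a.add(b)`.
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # `v1 + v2` dispatches here.
        return Vector(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return f"Vector({self.x}, {self.y})"

print(Vector(1, 2) + Vector(3, 4))  # Vector(4, 6)
```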
Every generation of ML researchers needs to relearn the lessons of this paper. https://t.co/5mOAm0ruED
— David Pfau (@pfau) March 15, 2019
The JAX Autodiff Cookbook
JAX's autodiff is very general. It can calculate gradients of numpy functions, differentiating them with respect to nested lists, tuples and dicts. It can also calculate gradients of gradients and even work with complex numbers! https://t.co/J4jz6cTFlI pic.twitter.com/jcGUtlbzJf
— hardmaru (@hardmaru) March 14, 2019
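A minimal sketch of the two behaviors the tweet describes, using `jax.grad`: differentiating with respect to a dict of parameters, and taking a gradient of a gradient.

```python
import jax
import jax.numpy as jnp

# Gradients with respect to a dict (a "pytree") of parameters.
def loss(params, x):
    return jnp.sum((params["w"] * x + params["b"]) ** 2)

params = {"w": jnp.array(2.0), "b": jnp.array(-1.0)}
grads = jax.grad(loss)(params, jnp.array(3.0))
print(grads)  # a dict with the same structure as params

# Gradients of gradients: second derivative of x**3 is 6x.
f = lambda x: x ** 3
print(jax.grad(jax.grad(f))(2.0))  # 12.0
```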