Good discussion of the “Bayesian Trap” in testing for uncommon conditions. https://t.co/DekdoSUfM5
— JD Long (@CMastication) April 4, 2020
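The trap, in short: when a condition is rare, even a very accurate test yields mostly false positives. A quick back-of-the-envelope version in Python (the prevalence, sensitivity, and specificity below are made-up numbers for illustration, not figures from the linked video):

```python
# Bayes' rule for a rare condition: P(disease | positive test).
# All numbers here are illustrative placeholders.
prevalence = 0.001      # 0.1% of people have the condition
sensitivity = 0.99      # P(test positive | disease)
specificity = 0.99      # P(test negative | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # about 9%
```

Despite a "99% accurate" test, a positive result here means only about a 9% chance of actually having the condition, because the false positives from the huge healthy population swamp the true positives.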
Supervised Deep Learning Rule Of Thumb https://t.co/eZ2bbpDzwV pic.twitter.com/tE4dOnmqZW
— Chris Albon (@chrisalbon) April 1, 2020
This is the model from Report 13 of the Imperial College COVID-19 response team. It fits the death data jointly from 11 European countries to estimate the reproduction number and the effect of lockdowns. Such a remarkable piece of @mcmc_stan https://t.co/Hhmbt3KVcQ
— Gilles Louppe (@glouppe) March 30, 2020
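The core of that model is a discrete renewal process: new infections are a weighted sum of past infections scaled by the time-varying reproduction number, and expected deaths are infections convolved with an infection-to-death delay and the infection fatality ratio. A heavily simplified forward simulation of that structure (all distributions and values below are illustrative placeholders, not the report's fitted quantities):

```python
import numpy as np

# Simplified forward pass of a renewal-type model:
# infections -> (delay + IFR) -> expected deaths.
T = 100
serial_interval = np.exp(-0.25 * np.arange(1, T + 1))     # placeholder weights
serial_interval /= serial_interval.sum()
infection_to_death = np.exp(-0.06 * np.arange(1, T + 1))  # placeholder delay
infection_to_death /= infection_to_death.sum()
ifr = 0.01                                                 # infection fatality ratio

R_t = np.where(np.arange(T) < 40, 3.0, 0.8)  # R drops at a "lockdown" on day 40

infections = np.zeros(T)
infections[0] = 100.0    # seed infections
for t in range(1, T):
    # renewal equation: today's infections driven by past infections
    infections[t] = R_t[t] * np.sum(infections[:t] * serial_interval[:t][::-1])

expected_deaths = np.array([
    ifr * np.sum(infections[:t] * infection_to_death[:t][::-1]) for t in range(T)
])
```

The Stan model in the report fits this kind of mechanism jointly across countries, with the intervention effects on R_t shared and the observed deaths modelled as negative binomial around the expected deaths.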
New tutorial! 🚀
Autoencoders for Content-based Image Retrieval (i.e., image search engines) with #Keras and #TensorFlow 2.0: https://t.co/KOBj1wCWwe 👍 #Python #DeepLearning #MachineLearning #ArtificialIntelligence #AI #DataScience #ComputerVision #OpenCV pic.twitter.com/vflrwu6yhl
— Adrian Rosebrock (@PyImageSearch) March 30, 2020
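The recipe, roughly: train an autoencoder, keep only the encoder, index every image by its latent code, and answer queries by nearest-neighbour search in that latent space. A minimal Keras sketch of the idea (layer sizes, input shape, and the `search` helper are my own placeholders, not code from the tutorial):

```python
import numpy as np
from tensorflow.keras import layers, Model

# Tiny autoencoder; the bottleneck "latent" layer is the image descriptor.
inputs = layers.Input(shape=(28, 28, 1))
x = layers.Flatten()(inputs)
x = layers.Dense(128, activation="relu")(x)
latent = layers.Dense(16, activation="relu", name="latent")(x)
x = layers.Dense(128, activation="relu")(latent)
x = layers.Dense(28 * 28, activation="sigmoid")(x)
outputs = layers.Reshape((28, 28, 1))(x)

autoencoder = Model(inputs, outputs)
encoder = Model(inputs, latent)
autoencoder.compile(optimizer="adam", loss="mse")

# After autoencoder.fit(images, images, ...), index the dataset with
# index_codes = encoder.predict(images) and retrieve by nearest neighbours:
def search(query_img, index_codes, k=5):
    q = encoder.predict(query_img[None, ...], verbose=0)
    dists = np.linalg.norm(index_codes - q, axis=1)
    return np.argsort(dists)[:k]   # indices of the k most similar images
```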
If you make a line plot, make the legend order match some notion of the data order (or use direct labelling). Linked gist shows how to do this in #rstats with forcats::fct_reorder2(). https://t.co/1IHbG7wIG8 pic.twitter.com/hadqsmfLO6
— Jenny Bryan (@JennyBryan) March 30, 2020
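The same idea carries over outside R. Here is a rough matplotlib analogue (made-up series, and not the code from the gist, which uses forcats): sort the legend entries by each line's value at the right edge of the plot so the legend reads top-to-bottom like the lines do.

```python
import numpy as np
import matplotlib.pyplot as plt

# Order legend entries by each series' final value. Series are invented.
x = np.arange(10)
series = {"a": x * 0.5, "b": x * 2.0, "c": x * 1.2}

fig, ax = plt.subplots()
handles = {}
for name, y in series.items():
    (handles[name],) = ax.plot(x, y, label=name)

order = sorted(series, key=lambda name: series[name][-1], reverse=True)
ax.legend([handles[n] for n in order], order)
plt.show()
```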
I’m still struggling to understand hypothesis testing . . . leading to a more general discussion of the role of assumptions in statistics https://t.co/jRaV7m4mR8
— Andrew Gelman (@StatModeling) March 28, 2020
Johns Hopkins is running a free two-week epidemiology course, so we can make some sense out of the messy data out there https://t.co/13TzwuGjd6
— Nathan Yau (@flowingdata) March 26, 2020
Neural networks are notoriously hard to debug because the code doesn't crash, raise an exception, or slow down. They simply converge to poor results. @ayushthakur0 shows you how to debug your neural networks! #machinelearning #deeplearning #100daysofmlcode https://t.co/EeJFshT6Wi
— lavanyaai (@lavanyaai) March 26, 2020
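One sanity check that belongs in any such debugging playbook (not necessarily the one the article uses): confirm the network can drive its training loss to near zero on a single tiny batch before training on the full dataset. If it can't, something in the data pipeline, loss, or architecture is broken. A sketch in PyTorch with a placeholder model and random data:

```python
import torch
from torch import nn

# Placeholder model and one fixed mini-batch of fake data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 20)
y = torch.randint(0, 2, (8,))

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"loss after overfitting one batch: {loss.item():.4f}")  # should be ~0
```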
👍👍👍 notebook in the Kaggle Jigsaw multilingual toxic comment detection competition. Its author showcases 5 NLP models: CNN, LSTM with attention, BERT, even capsules!
Also shows how to use a DistilBERT model from the transformers library by HuggingFace. All on TPU. https://t.co/HaB5BQbREl
— Martin Görner (@martin_gorner) March 26, 2020
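For reference, pulling DistilBERT out of the transformers library for sequence classification looks roughly like this (the checkpoint name and example text are generic choices of mine, and the TPU distribution-strategy setup is omitted):

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# DistilBERT as a 2-class (toxic / non-toxic) classifier.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-multilingual-cased", num_labels=2
)

batch = tokenizer(
    ["an example comment"], padding=True, truncation=True, return_tensors="tf"
)
logits = model(dict(batch)).logits   # shape (1, 2), one score per class
```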
PyTorch supports 8-bit model quantization using the familiar eager-mode Python API, enabling efficient deployment on servers and edge devices. This blog post gives an overview of quantization support in PyTorch and its integration with TorchVision. https://t.co/GqzkJLDlDp
— PyTorch (@PyTorch) March 26, 2020
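The gentlest entry point is dynamic quantization, which converts the weights of selected module types to int8 in a single call (the model below is just a placeholder):

```python
import torch
from torch import nn

# Dynamic quantization: weights of the listed module types are stored as
# int8 and dequantized on the fly at inference time.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)   # same interface, smaller and often faster on CPU
```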
Visualizing RSS https://t.co/eZ2bbpDzwV pic.twitter.com/l8ZV7zBo7J
— Chris Albon (@chrisalbon) March 26, 2020
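Assuming RSS here means the residual sum of squares (the usual flashcard meaning), the quantity being visualized is the total squared vertical gap between the data points and the fitted line. A tiny worked example with made-up data:

```python
import numpy as np

# Toy data and a least-squares line; RSS sums the squared residuals.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
rss = np.sum(residuals ** 2)
print(f"RSS = {rss:.3f}")
```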
Can deep neural nets forecast weather accurately? Check out our #NeuralWeatherModel, MetNet, which outperforms physical models on forecasts up to eight hours ahead and runs in a matter of seconds. Learn more in the blog post below! https://t.co/7upa4rxrdQ
— Google AI (@GoogleAI) March 25, 2020