How do you obtain the class activation heatmap for an image classification model in Keras?
Like this: https://t.co/sioWHRo8Sc pic.twitter.com/MdhGpOOF5h
— François Chollet (@fchollet) April 26, 2020
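The usual recipe here is Grad-CAM: take the gradient of the target class score with respect to the last convolutional feature map, average it per channel, and use those averages to weight the feature map. A minimal sketch, assuming `model` is a trained Keras classifier and `last_conv_layer_name` names its final convolutional layer (both placeholders):

```python
import tensorflow as tf
from tensorflow import keras

def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None):
    # Model mapping the input image to the last conv layer's activations
    # plus the final class predictions.
    grad_model = keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_output, preds = grad_model(img_array)
        if pred_index is None:
            pred_index = tf.argmax(preds[0])
        class_score = preds[:, pred_index]
    # Gradient of the class score w.r.t. the conv feature map,
    # averaged over the spatial dimensions to get one weight per channel.
    grads = tape.gradient(class_score, conv_output)
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Channel-weighted sum of the feature map, clipped and normalized to [0, 1].
    heatmap = tf.squeeze(conv_output[0] @ pooled_grads[..., tf.newaxis])
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()
```

The resulting heatmap can then be resized to the input image and overlaid as a colormap.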
Getting started with JAX (MLPs, CNNs & RNNs)
A nice notebook tutorial to dive into JAX by @RobertTLange https://t.co/FaZeWGY9DD https://t.co/rYc9MvMu6h https://t.co/n1abzqwdR5
— hardmaru (@hardmaru) April 24, 2020
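For flavor, a tiny sketch in the spirit of that tutorial (sizes and data here are made up): write the MLP forward pass with `jax.numpy`, then get gradients with `jax.grad` and compile with `jax.jit`.

```python
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 32, 1)):
    # Random weights and zero biases for a tiny MLP (sizes are arbitrary).
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * 0.1, jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Forward pass: tanh hidden layers, linear output.
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

@jax.jit
def loss(params, x, y):
    return jnp.mean((mlp(params, x) - y) ** 2)

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jax.random.normal(key, (16, 2))
y = jnp.sum(x, axis=1, keepdims=True)
grads = jax.grad(loss)(params, x, y)   # gradients w.r.t. every weight and bias
```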
Gauss Rank Transformation is one of the most famous tricks in the Kaggle lore, pioneered there by Michael Jahrer. Here is a great article by Jiwei Liu, Ph.D. on how to speed it up 100x with @rapidsai and #cupy: https://t.co/jSb5xuF5W9 @NVIDIA #rapids #gpu #ai #ds #ml pic.twitter.com/WJlr4IOmTM
— Bojan Tunguz (@tunguz) April 23, 2020
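The trick itself is simple: rank each feature, rescale the ranks to the open interval (-1, 1), and push them through the inverse error function so the result is roughly Gaussian. A minimal NumPy/SciPy sketch is below; the 100x speedup in the linked article comes from running the same operations on GPU arrays with CuPy/RAPIDS. The function name and toy data are made up here.

```python
import numpy as np
from scipy.special import erfinv

def gauss_rank_transform(x, epsilon=1e-6):
    """Map a 1-D feature to an approximately Gaussian-shaped distribution."""
    # Rank the values (argsort of argsort gives each value's rank).
    ranks = np.argsort(np.argsort(x))
    # Rescale ranks into (-1, 1), staying away from the endpoints.
    scaled = ranks / (len(x) - 1) * 2 - 1
    scaled = np.clip(scaled, -1 + epsilon, 1 - epsilon)
    # The inverse error function turns the uniform ranks into Gaussian values
    # (multiply by sqrt(2) if you want unit variance).
    return erfinv(scaled)

x = np.random.exponential(size=10_000)   # heavily skewed toy feature
z = gauss_rank_transform(x)              # approximately normal
```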
Your 100% up-to-date guide to transfer learning & fine-tuning with Keras: https://t.co/iM7fnZEHNA
Batch normalization involves many gotchas you need to be aware of.
— François Chollet (@fchollet) April 19, 2020
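A sketch of the usual freeze-then-fine-tune pattern, and the main BatchNormalization gotcha: calling the frozen base with `training=False` so its moving statistics are never updated, even after you unfreeze the weights. The base model, shapes, and head below are placeholders, not the guide's exact code.

```python
from tensorflow import keras

# Pretrained base (Xception is just an example choice).
base_model = keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg"
)
base_model.trainable = False  # phase 1: train only the new head

inputs = keras.Input(shape=(150, 150, 3))
# training=False keeps BatchNormalization layers in inference mode,
# so their moving mean/variance stay fixed -- even after unfreezing.
x = base_model(inputs, training=False)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss=keras.losses.BinaryCrossentropy(from_logits=True))
# model.fit(train_ds, epochs=5)

# Phase 2: unfreeze the base, recompile with a low learning rate, fine-tune.
base_model.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss=keras.losses.BinaryCrossentropy(from_logits=True))
# model.fit(train_ds, epochs=2)
```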
🎉New blog post!
Tidymodels: tidy machine learning in R https://t.co/hDxb0gzMyi
A concise introduction to machine learning in R using the new(ish) tidymodels pipeline (essentially caret's successor). Thanks to @topepos and team! #rstats #machinelearning
— Rebecca Barter (@rlbarter) April 14, 2020
Just refreshed the "intro to Keras for researchers" notebook. It features a hypernetwork example and a VAE example. ~15 min read. Check it out: https://t.co/5Yoe4c7YaM
Now writing an "intro to Keras for engineers" notebook. It looks *very* different :)
— François Chollet (@fchollet) April 12, 2020
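One piece from the VAE side of that material, the reparameterization trick written as a custom layer, is a nice illustration of the researcher-style subclassing API. This is a sketch along those lines, not the notebook's exact code; the encoder sizes are arbitrary.

```python
import tensorflow as tf
from tensorflow import keras

class Sampling(keras.layers.Layer):
    """Reparameterization trick: z = mean + exp(log_var / 2) * epsilon."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon

# Usage inside an encoder: map inputs to (z_mean, z_log_var), then sample z.
latent_dim = 2
encoder_inputs = keras.Input(shape=(784,))
h = keras.layers.Dense(64, activation="relu")(encoder_inputs)
z_mean = keras.layers.Dense(latent_dim)(h)
z_log_var = keras.layers.Dense(latent_dim)(h)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z])
```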
Wow, I've never succeeded in explaining the concept of gghighlight this well... Thanks so much @allison_horst!!!👍👍👍 https://t.co/ounv6K0X5f
— Hiroaki Yutani (@yutannihilat_en) April 11, 2020
New tutorial!🚀
Autoencoders for Content-based Image Retrieval (i.e., image search engines) with #Keras and #TensorFlow 2.0: https://t.co/KOBj1wCWwe 👍#Python #DeepLearning #MachineLearning #ArtificialIntelligence #AI #DataScience #ComputerVision #OpenCV pic.twitter.com/vflrwu6yhl
— Adrian Rosebrock (@PyImageSearch) March 30, 2020
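The core idea behind this kind of image search engine: train an autoencoder, keep only the encoder, index every image by its latent code, and answer a query by nearest-neighbor search in that latent space. A hedged sketch, with a toy dense architecture and placeholder data rather than the tutorial's exact code:

```python
import numpy as np
from tensorflow import keras

# Tiny dense autoencoder over flattened 28x28 images (illustrative only).
inputs = keras.Input(shape=(784,))
encoded = keras.layers.Dense(32, activation="relu")(inputs)        # latent code
decoded = keras.layers.Dense(784, activation="sigmoid")(encoded)
autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)

autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=128)

def search(query_image, index_images, encoder, top_k=5):
    """Return indices of the images whose latent codes are closest to the query's."""
    index_codes = encoder.predict(index_images, verbose=0)
    query_code = encoder.predict(query_image[None, :], verbose=0)
    distances = np.linalg.norm(index_codes - query_code, axis=1)
    return np.argsort(distances)[:top_k]
```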
If you make a line plot, make the legend order match some notion of the data order (or use direct labelling). Linked gist shows how to do this in #rstats with forcats::fct_reorder2(). https://t.co/1IHbG7wIG8 pic.twitter.com/hadqsmfLO6
— Jenny Bryan (@JennyBryan) March 30, 2020
Neural networks are notoriously hard to debug because the code doesn't crash, raise an exception, or slow down. They simply converge to poor results. @ayushthakur0 shows you how to debug your neural networks! #machinelearning #deeplearning #100daysofmlcode https://t.co/EeJFshT6Wi
— lavanyaai (@lavanyaai) March 26, 2020
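The article has its own checklist; one classic sanity check in the same spirit (not necessarily from the post) is verifying that the network can drive the loss to near zero on a single small batch before launching a full training run. The data and model below are toy stand-ins.

```python
import numpy as np
from tensorflow import keras

# Toy data standing in for one small, fixed batch of the real dataset.
x_small = np.random.rand(32, 20).astype("float32")
y_small = np.random.randint(0, 2, size=(32, 1)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# If the model cannot memorize 32 fixed examples, something upstream is wrong
# (labels, loss, learning rate, data pipeline) -- no point training at scale yet.
model.fit(x_small, y_small, epochs=200, batch_size=32, verbose=0)
print(model.evaluate(x_small, y_small, verbose=0))
```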
https://t.co/HaB5BQbREl
👍👍👍 notebook in the Kaggle Jigsaw multilingual toxic comment detection comp. He showcases 5 NLP models: CNN, LSTM with attention, BERT, even capsules!
Also shows how to use a DistilBERT model from the transformers library by HuggingFace. All on TPU.
— Martin Görner (@martin_gorner) March 26, 2020
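Pulling a multilingual DistilBERT into a Keras model via Hugging Face transformers looks roughly like this. The checkpoint name, sequence length, and classification head are example choices, and the TPU strategy setup is omitted; this is a sketch, not the notebook's code.

```python
import tensorflow as tf
from tensorflow import keras
from transformers import AutoTokenizer, TFAutoModel

checkpoint = "distilbert-base-multilingual-cased"   # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
backbone = TFAutoModel.from_pretrained(checkpoint)

# Tokenize a batch of comments into fixed-length ID tensors.
texts = ["this comment is fine", "ce commentaire est toxique"]
enc = tokenizer(texts, padding="max_length", truncation=True,
                max_length=128, return_tensors="tf")

# Simple binary-classification head on top of the first token's hidden state.
input_ids = keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
hidden = backbone(input_ids)[0]            # (batch, seq_len, hidden_size)
cls = hidden[:, 0, :]                      # representation of the first token
output = keras.layers.Dense(1, activation="sigmoid")(cls)
model = keras.Model(input_ids, output)
model.compile(optimizer=keras.optimizers.Adam(2e-5), loss="binary_crossentropy")
# model.fit(enc["input_ids"], labels, epochs=2, batch_size=16)  # labels: 0/1 targets
```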
PyTorch supports 8-bit model quantization using the familiar eager-mode Python API, enabling efficient deployment on servers and edge devices. This blog provides an overview of quantization support in PyTorch and its integration with TorchVision. https://t.co/GqzkJLDlDp
— PyTorch (@PyTorch) March 26, 2020
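The quickest entry point into that API is post-training dynamic quantization, which rewrites selected module types (here `nn.Linear`) to store int8 weights and quantize activations on the fly. A minimal sketch with a toy model standing in for a real trained network:

```python
import torch
import torch.nn as nn

# Toy float model standing in for a real trained network.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Post-training dynamic quantization: Linear weights stored as int8,
# activations quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)   # same interface, smaller and faster Linear layers
```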