A great GitHub repository with tutorials on getting started with PyTorch and TorchText for sentiment analysis in Jupyter Notebooks. What a great resource! https://t.co/XSoUIwwYJ8
— Sebastian Raschka (@rasbt) April 16, 2019
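Those tutorials presumably start from the standard TorchText data pipeline for IMDB sentiment classification. As a rough idea of what that setup looked like at the time, here is a minimal sketch based on the 2019-era torchtext API, not code copied from the linked repo:

```python
# Sketch of a TorchText sentiment-analysis data pipeline (legacy torchtext API).
import torch
from torchtext import data, datasets

TEXT = data.Field(tokenize="spacy")          # tokenize reviews with spaCy
LABEL = data.LabelField(dtype=torch.float)   # binary pos/neg labels

# Download/split the IMDB dataset and build vocabularies.
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
TEXT.build_vocab(train_data, max_size=25_000)
LABEL.build_vocab(train_data)

# BucketIterator groups examples of similar length to minimise padding.
train_iter, test_iter = data.BucketIterator.splits(
    (train_data, test_data),
    batch_size=64,
    device="cuda" if torch.cuda.is_available() else "cpu",
)
```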
Nice visualisation of tensors with ipyvolume https://t.co/sEs9ktXhkK https://t.co/Agoh4Osvkn
— Jean Kossaifi (@JeanKossaifi) April 15, 2019
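For reference, volume-rendering a 3-D array with ipyvolume in a notebook only takes a couple of lines. A toy sketch (not the visualisation from the tweet):

```python
# Render a small 3-D "tensor" as an interactive volume inside Jupyter.
import numpy as np
import ipyvolume as ipv

# Toy 64x64x64 array: a Gaussian blob.
x, y, z = np.mgrid[-2:2:64j, -2:2:64j, -2:2:64j]
volume = np.exp(-(x**2 + y**2 + z**2))

ipv.quickvolshow(volume)  # interactive volume rendering widget
```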
The heart of a @PyTorch training loop with callbacks. By aligning the training code and callback code, you can see exactly what's going on in each.
— Jeremy Howard (@jeremyphoward) April 14, 2019
Formatting code for understanding is too important to leave to automated tools or hard and fast rules. pic.twitter.com/hO5FXgTUC4
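The appeal is that the loop body and the callback interface mirror each other, so each hook reads off directly from the step it wraps. A minimal sketch of the pattern (my own simplification, not the code in Jeremy's screenshot):

```python
# A training loop that delegates to callbacks at fixed points.
class Callback:
    """Override only the hooks you care about; the rest are no-ops."""
    def on_epoch_begin(self, **kw): pass
    def on_batch_begin(self, **kw): pass
    def on_backward_end(self, **kw): pass
    def on_batch_end(self, **kw): pass
    def on_epoch_end(self, **kw): pass

def fit(epochs, model, loss_fn, opt, train_dl, callbacks=()):
    for epoch in range(epochs):
        for cb in callbacks: cb.on_epoch_begin(epoch=epoch)
        model.train()
        for xb, yb in train_dl:
            for cb in callbacks: cb.on_batch_begin(xb=xb, yb=yb)
            loss = loss_fn(model(xb), yb)
            loss.backward()
            for cb in callbacks: cb.on_backward_end(loss=loss)
            opt.step()
            opt.zero_grad()
            for cb in callbacks: cb.on_batch_end(loss=loss)
        for cb in callbacks: cb.on_epoch_end(epoch=epoch)
```

Logging, learning-rate scheduling, gradient clipping, and early stopping then become small Callback subclasses rather than extra branches inside the loop.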
Benchmarking Keras and PyTorch Pre-Trained Models. Very nicely done project. https://t.co/3wR1hz74gM
— Jeremy Howard (@jeremyphoward) April 11, 2019
PyTorch acceleration baked into the latest generation of Intel Xeons.
— Yann LeCun (@ylecun) April 9, 2019
That will help speed up the 200 trillion predictions and 6 billion translations Facebook does every day. https://t.co/2gM75pFvrC
Resnets 18, 34, 50, 101, and 152, with all the tweaks from the "Bag of Tricks" paper (and more), in one screen of @pytorch code in @ProjectJupyter.
— Jeremy Howard (@jeremyphoward) April 7, 2019
Took two days of refactoring to get to this point, but now it's *so* easy to tweak and see exactly what's going on. :) pic.twitter.com/L9HiIvDBTW
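Two of the "Bag of Tricks" tweaks are easy to show in isolation: replacing the original 7x7 stem conv with three 3x3 convs, and zero-initialising the last BatchNorm gamma in each residual block so the block starts out close to the identity. A rough sketch of both (my own code, not the refactored notebook from the tweet):

```python
import torch.nn as nn
import torch.nn.functional as F

def conv_bn(ni, nf, stride=1, zero_bn=False, act=True):
    """3x3 conv + BatchNorm (+ optional ReLU), with optional zero-init of gamma."""
    bn = nn.BatchNorm2d(nf)
    nn.init.constant_(bn.weight, 0.0 if zero_bn else 1.0)
    layers = [nn.Conv2d(ni, nf, 3, stride=stride, padding=1, bias=False), bn]
    if act:
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

# "ResNet-C" style stem: three 3x3 convs instead of one 7x7.
stem = nn.Sequential(conv_bn(3, 32, stride=2), conv_bn(32, 32), conv_bn(32, 64))

class ResBlock(nn.Module):
    """Basic residual block; last BN gamma is zero so the block starts as identity."""
    def __init__(self, ni, nf):
        super().__init__()
        self.convs = nn.Sequential(
            conv_bn(ni, nf),
            conv_bn(nf, nf, zero_bn=True, act=False),
        )
        self.idconv = nn.Identity() if ni == nf else nn.Conv2d(ni, nf, 1, bias=False)

    def forward(self, x):
        return F.relu(self.convs(x) + self.idconv(x))
```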
A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" https://t.co/PdREkqj3J2 #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) April 6, 2019
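The core trick in INQ is to constrain weights to powers of two in stages: quantise a portion of the largest-magnitude weights, freeze them, and retrain the remainder to recover accuracy. A rough sketch of one such partition step (my own simplification, not the linked implementation):

```python
import torch

def quantize_to_pow2(w):
    """Round nonzero weights to the nearest signed power of two."""
    sign = w.sign()
    exp = torch.round(torch.log2(w.abs().clamp(min=1e-12)))
    return sign * torch.pow(2.0, exp)

def inq_partition(weight, portion=0.5):
    """Quantise the largest-magnitude `portion` of a weight tensor in place.

    Returns a boolean mask of the frozen (quantised) entries; zero their
    gradients each step so only the remaining weights keep learning.
    """
    k = int(weight.numel() * portion)
    if k == 0:
        return torch.zeros_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    mask = weight.abs() >= threshold
    weight.data[mask] = quantize_to_pow2(weight.data[mask])
    return mask
```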
I'm excited to share a new step-by-step #notebook #tutorial on Federated Learning!!
— Andrew Trask (@iamtrask) April 6, 2019
Specifically: how to use #PySyft + @PyTorch to do Federated Learning across machines using WebSockets.
Enjoy!! #100DaysOfMLCode https://t.co/85T8BzdB4q pic.twitter.com/YhBGYUfb7m
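The tutorial itself drives remote PySyft workers over WebSockets; as a framework-free illustration of the underlying federated-averaging round, here is a plain-PyTorch sketch with workers simulated in-process (this is not PySyft code):

```python
import copy
import torch

def local_update(model, data_loader, loss_fn, epochs=1, lr=0.1):
    """Train a copy of the global model on one worker's private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in data_loader:
            opt.zero_grad()
            loss_fn(local(xb), yb).backward()
            opt.step()
    return local.state_dict()

def federated_average(model, worker_loaders, loss_fn):
    """One communication round: each worker trains locally, then weights are averaged."""
    states = [local_update(model, dl, loss_fn) for dl in worker_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
    model.load_state_dict(avg)
    return model
```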
PyTorch BigGraph: a distributed system for learning large graph embeddings
— PyTorch (@PyTorch) April 2, 2019
- up to billions of entities and trillions of edges
- Sharding and Negative Sampling
- WikiData embeddings (78 mil entities, 4131 relations)
- Blog: https://t.co/IcOitBBWxq
- Code: https://t.co/ESlTmDTwbB pic.twitter.com/jxoEagno1r
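At its heart this is embedding training with negative sampling, sharded so the embedding tables never need to fit in one machine's memory. A toy, single-machine sketch of the kind of objective involved (not the PyTorch-BigGraph API):

```python
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Score (source, relation, destination) edges with learned embeddings."""
    def __init__(self, n_entities, n_relations, dim=128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def forward(self, src, rel, dst):
        # Relation-weighted dot product between source and destination.
        return ((self.ent(src) * self.rel(rel)) * self.ent(dst)).sum(-1)

def loss_with_negatives(model, src, rel, dst, n_entities, num_neg=10):
    """Margin ranking loss: true edges should outscore randomly corrupted ones."""
    pos = model(src, rel, dst)                                  # (batch,)
    neg_dst = torch.randint(n_entities, (src.numel(), num_neg)) # corrupted tails
    neg = model(src.unsqueeze(1), rel.unsqueeze(1), neg_dst)    # (batch, num_neg)
    return torch.relu(1.0 - pos.unsqueeze(1) + neg).mean()
```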
TIL about @PyTorch CUDA events, which let you (somewhat) conveniently profile and benchmark GPU code. h/t @ThomasViehmann https://t.co/B7ronEYQ2Y
— Jeremy Howard (@jeremyphoward) March 23, 2019
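The pattern is short enough to quote: a generic sketch (not the exact snippet linked above), which only makes sense on a CUDA-enabled machine:

```python
import torch

x = torch.randn(4096, 4096, device="cuda")
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()            # enqueue a timestamp on the current CUDA stream
y = x @ x                 # the work being measured
end.record()
torch.cuda.synchronize()  # wait for the GPU so elapsed_time is valid

print(f"matmul took {start.elapsed_time(end):.2f} ms")
```

Because the events are recorded on the GPU stream itself, this avoids the usual pitfall of timing asynchronous kernel launches with a host-side clock.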
ConvNets in PyTorch used to diagnose neurodegenerative diseases, by a large team at Mount Sinai Medical Center. https://t.co/1R2ajeL9E3
— Yann LeCun (@ylecun) March 21, 2019
Ever wondered what kind of optimizations the @PyTorch JIT does to make your scripted RNNs fast? Here is a detailed account of making the JIT's fuser (= automatic CUDA kernel generation) deal better with backward graphs (and the LSTM backward in particular): https://t.co/7RIA4PLRAc
— Thomas Viehmann (@ThomasViehmann) March 17, 2019
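For context, the fuser only sees code that has gone through TorchScript, e.g. via torch.jit.script. A minimal toy example (not from the linked post) of scripting an LSTM-style cell and inspecting the IR the fuser operates on:

```python
import torch

@torch.jit.script
def lstm_cell(x, h, c, w_ih, w_hh, b):
    # Elementwise gate math like this is what the fuser can turn into one CUDA kernel.
    gates = x @ w_ih.t() + h @ w_hh.t() + b
    i, f, g, o = gates.chunk(4, dim=1)
    c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
    h = torch.sigmoid(o) * torch.tanh(c)
    return h, c

print(lstm_cell.graph)  # the TorchScript IR, before fusion passes run
```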