Pytorch implementation of CoordConv https://t.co/hRo7b2VdRg #deeplearning #machinelearning #ml #ai #neuralnetworks #pytorch
— PyTorch Best Practices (@PyTorchPractice) July 14, 2018
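For context, CoordConv simply concatenates normalized coordinate channels to the input of an ordinary convolution. Below is a minimal PyTorch sketch of that idea, not necessarily the linked implementation (class and argument names here are illustrative):

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d that concatenates normalized (x, y) coordinate channels
    to its input before convolving -- a sketch of the CoordConv idea."""
    def __init__(self, in_channels, out_channels, **kwargs):
        super().__init__()
        # Two extra input channels for the x and y coordinate maps.
        self.conv = nn.Conv2d(in_channels + 2, out_channels, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # Coordinate grids normalized to [-1, 1].
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

layer = CoordConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(2, 3, 32, 32))  # -> (2, 16, 32, 32)
```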
"remarkable architecture search efficiency (with 4 GPUs: 2.83% error on CIFAR10 in 1 day; 56.1 perplexity on PTB in 6 hours)"
— PyTorch (@PyTorch) June 26, 2018
Try it now from: https://t.co/8khIix99ma https://t.co/HFuW0II5Hl
Check out Distiller, our @PyTorch based package for neural network compression research at https://t.co/px4g8yCrjS https://t.co/Fews6mK3X6
— Gal Novik (@gal_novik) June 22, 2018
Here's a sneak peek at our new Federated Learning interfaces using @PyTorch and PySyft.
— OpenMined (@openminedorg) June 21, 2018
Try Federated Learning Using OpenMined: https://t.co/bSMgUZjgL1
Come Join our Slack: https://t.co/e1avrEWtIo pic.twitter.com/sHeQ2568gs
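PySyft's federated interfaces have changed over time, so rather than guess at that API, here is a plain-PyTorch sketch of the federated-averaging idea the tweet refers to; `federated_average` and `client_loaders` are illustrative names, not part of PySyft:

```python
import copy
import torch
import torch.nn as nn

def federated_average(global_model, client_loaders, epochs=1, lr=0.01):
    """Sketch of federated averaging: each client trains a local copy on
    its own data, then the parameters are averaged back into the global
    model. Illustrative only; this does not use PySyft's API."""
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        client_states.append(local.state_dict())

    # Average corresponding parameters across clients.
    avg_state = {
        k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
        for k in client_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model
```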
I'm happy to announce the first release of ignite - a high level library for @pytorch helping you write compact but full-featured training loops in a few lines of code! Check it out at https://t.co/d9BBOEXMXF
— Alykhan Tejani (@alykhantejani) June 20, 2018
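As a rough illustration of the kind of compact loop Ignite enables, here is a sketch built on `create_supervised_trainer`; the toy data and model are made up, and details of the API may differ across Ignite releases:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from ignite.engine import Events, create_supervised_trainer

# Toy data and model; the Engine drives the training loop for us.
loader = DataLoader(
    TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,))),
    batch_size=32,
)
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

trainer = create_supervised_trainer(model, optimizer, nn.CrossEntropyLoss())

@trainer.on(Events.EPOCH_COMPLETED)
def log_loss(engine):
    # engine.state.output is the loss returned by the last iteration.
    print(f"epoch {engine.state.epoch}: loss {engine.state.output:.4f}")

trainer.run(loader, max_epochs=5)
```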
Apex is a PyTorch extension from @nvidia that makes it easy to use mixed-precision training and use Volta Tensor Cores to full potential. Read more at: https://t.co/9xuOLAyGJT
— PyTorch (@PyTorch) June 19, 2018
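A sketch of mixed-precision training with Apex's `amp` interface; note that this unified interface comes from a later Apex release than this announcement, so treat it as illustrative rather than the exact API being announced:

```python
import torch
import torch.nn as nn
from apex import amp  # assumes NVIDIA Apex is installed with CUDA support

model = nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# opt_level "O1" patches selected ops to run in half precision on Tensor Cores.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

data = torch.randn(64, 1024).cuda()
target = torch.randn(64, 1024).cuda()

loss = nn.functional.mse_loss(model(data), target)
# Scale the loss to avoid gradient underflow in fp16.
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```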
Congrats @jnhwkim and team on winning the VQA challenge at #CVPR2018.
— PyTorch (@PyTorch) June 19, 2018
Read their paper "Bilinear Attention Networks" at https://t.co/sqXNysYxcv
PyTorch based code at: https://t.co/XIOwVHfLtO https://t.co/FGOR3Pe9Rl
@PyTorch geometric by @rusty1s et al. https://t.co/rhyvOyjppS pic.twitter.com/mOWWuFClUD
— Brandon Amos (@brandondamos) June 18, 2018
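A minimal sketch of the PyTorch Geometric style of model, using `Data` and `GCNConv` on a toy graph; the graph, feature sizes, and layer widths are made up for illustration:

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# A tiny 3-node graph: edge_index holds (source, target) pairs in COO format.
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 8)  # 3 nodes, 8 features each
data = Data(x=x, edge_index=edge_index)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(8, 16)
        self.conv2 = GCNConv(16, 2)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

out = GCN()(data)  # -> node-level logits of shape (3, 2)
```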
Tools for PyTorch https://t.co/swXLBCQQuv #pytorch #deeplearning #machinelearning #ml #ai #neuralnetworks
— PyTorch Best Practices (@PyTorchPractice) June 16, 2018
NCRF++ : A neural CRF++ toolkit for sequence labeling tasks. Works much like Taku's CRF++ package, but built with #PyTorch! #nlproc https://t.co/05ZL6bEcoO pic.twitter.com/ME8k7ruMdA
— Delip Rao (@deliprao) June 16, 2018
Code and pre-trained models to reproduce the recent paper "Scaling Neural Machine Translation" (https://t.co/mrRDmlwax1), where we train on up to 128 GPUs with half-precision floating-point operations as well as delayed batching.
— PyTorch (@PyTorch) June 16, 2018
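The "delayed batching" mentioned here amounts to accumulating gradients over several mini-batches before each optimizer step, which simulates a larger batch. A plain-PyTorch sketch of that idea (not FairSeq's actual implementation; the toy model and loader are made up):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy setup; the point is the update schedule, not the model.
model = nn.Linear(20, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(
    TensorDataset(torch.randn(512, 20), torch.randint(0, 5, (512,))),
    batch_size=8,
)

accumulation_steps = 16  # one optimizer step per 16 mini-batches ~ a 16x larger batch

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    # Divide so the accumulated gradient matches the gradient of one big batch.
    loss = loss_fn(model(x), y) / accumulation_steps
    loss.backward()  # gradients accumulate in p.grad across iterations
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```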
FairSeq Toolkit - Major Update
— PyTorch (@PyTorch) June 16, 2018
- Distributed Training
- Transformer models (big Transformer on WMT Eng-German in < 5 hours on DGX-1)
- Fast Inference: translations @ 92 sent/sec for big Transformer
- Story Generation
Read more at Michael Auli's post: https://t.co/eptKDuh0WI pic.twitter.com/d4OtJZpdFw