The BigGAN generators from our paper https://t.co/QUYlE9IBsE are now available on TF Hub (https://t.co/GHM9pIgQPw). Try the Colab demo at: https://t.co/Ynyb9T9AAD
— DeepMind (@DeepMindAI) November 12, 2018
Deep Learning exploded when open tools made it *easy*
— Trask (@iamtrask) November 12, 2018
I'm extremely excited to share our new @NipsConference paper feat:
- Federated Learning
- Differential Privacy
- Encrypted Data/Models
#privacy made... *easy* 🥳
Paper: https://t.co/yLwpjiXXMX
Code: https://t.co/gKzzfw0mIV pic.twitter.com/iJ29MzHN86
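The "encrypted data/models" part of the paper builds on secure multi-party computation. The core idea can be illustrated with additive secret sharing in pure Python — a minimal sketch only, with an illustrative modulus and two-party split, not the paper's actual protocol:

```python
import random

Q = 2**31 - 1  # large field modulus (illustrative choice, not from the paper)

def share(secret, n_parties=2):
    """Split a secret into n additive shares that sum to it mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod Q."""
    return sum(shares) % Q

def add_shared(shares_a, shares_b):
    """Each party adds its own shares locally: addition on 'encrypted' values."""
    return [(a + b) % Q for a, b in zip(shares_a, shares_b)]

x_shares = share(25)
y_shares = share(17)
z_shares = add_shared(x_shares, y_shares)
print(reconstruct(z_shares))  # 42 — no single party ever saw 25 or 17
```

Each share looks like random noise on its own; only the sum reveals the value, which is what lets computation proceed on data no single party can read.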
"FloWaveNet : A Generative Flow for Raw Audio" from Seoul National University
— PyTorch (@PyTorch) November 9, 2018
Their research was scooped by a few days; even so, they took the time to write up and release a paper and code. Great spirit, folks!
Paper: https://t.co/G7GIcld23q
Code: https://t.co/oQEUV5WqAx pic.twitter.com/Hx1SLCoG4P
Here is an op-for-op @PyTorch re-implementation of @GoogleAI's BERT model by @sanhestpasmoi, @timrault, and me.
— Thomas Wolf (@Thom_Wolf) November 5, 2018
We made a script to load Google's pre-trained models, and it performs about the same as the TF implementation in our tests (see the README).
Enjoy! https://t.co/dChmNPGPKO
The multilingual (many languages, one encoder) version of @GoogleAI's BERT appears to be online! Happy to see results on our new XNLI cross-lingual transfer dataset, too! https://t.co/2YL9hSUb5j
— Sam Bowman (@sleepinyourhat) November 5, 2018
As upsetting as a cancelled flight is (couldn't make it to ODSC this weekend 😰), I made the best of the unexpected free time and finally toyed around with PyTorch's DataParallel. With 4 GPUs I get a >3x speedup over 1. Super neat! Also uploaded a little toy example: https://t.co/DdPtm7JHxm
— Sebastian Raschka (@rasbt) November 4, 2018
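`nn.DataParallel` wraps any module, replicates it on the visible GPUs, and splits each input batch across them. A minimal sketch of the pattern (the model and sizes here are made up; on a CPU-only machine the wrapper is skipped and the plain module runs):

```python
import torch
import torch.nn as nn

# A toy model; DataParallel works with any nn.Module.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# With multiple GPUs, each forward pass scatters the batch across devices
# and gathers the outputs back on device 0.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

x = torch.randn(32, 64)  # batch of 32; split into per-GPU chunks automatically
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)
print(out.shape)  # torch.Size([32, 10])
```

The >3x (rather than 4x) speedup on 4 GPUs is typical: scatter/gather and replication overhead eat part of the gain, especially for small models or batches.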
Fully convolutional watermark removal - useful project and good results https://t.co/OQkO6TCqWN
— Jeremy Howard (@jeremyphoward) November 3, 2018
We have released @TensorFlow code+models for BERT, a brand new pre-training technique which is now state-of-the-art on a wide array of natural language tasks. It can also be used on many new tasks with minimal changes and quick training! https://t.co/rLR6U7uiPj
— Google AI (@GoogleAI) November 2, 2018
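"Used on many new tasks with minimal changes" means attaching a small task head to the pre-trained encoder and briefly training the whole stack. A toy PyTorch sketch of that pattern — the encoder below is a tiny stand-in, not BERT itself, and the sizes are illustrative:

```python
import torch
import torch.nn as nn

hidden = 32  # stand-in for BERT's hidden size (768 in BERT-Base)

# Stand-in "pre-trained" encoder; in practice this is the released BERT model.
encoder = nn.Sequential(nn.Embedding(100, hidden),
                        nn.Linear(hidden, hidden),
                        nn.Tanh())

# The "minimal change": one classification layer on the first-token vector.
classifier = nn.Linear(hidden, 2)

tokens = torch.randint(0, 100, (8, 16))    # batch of 8 sequences, length 16
features = encoder(tokens)                 # (8, 16, hidden)
logits = classifier(features[:, 0, :])     # position 0, as BERT uses [CLS]
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()                            # gradients flow into the encoder too
```

Because the backward pass reaches the encoder, fine-tuning updates the pre-trained weights as well as the new head, which is why only a short training run is needed.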
A Keras-based project for developing motion planning algorithms for robots (grasping, stacking, etc.): https://t.co/Am6WMMP2ua
— François Chollet (@fchollet) November 1, 2018
"You may not need attention" by Press and Smith with PyTorch code at https://t.co/sqBZ1CFq2u https://t.co/2oxk8OORPt
— PyTorch (@PyTorch) November 1, 2018
Code and pretrained weights for BERT are out now.
— Sebastian Ruder (@seb_ruder) October 31, 2018
Includes scripts to reproduce results. BERT-Base can be fine-tuned on a standard GPU; for BERT-Large, a Cloud TPU is required, since the maximum batch size that fits in 12-16 GB of memory is too small. https://t.co/CWv8GMZiX5
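When GPU memory caps the per-step batch size, a common workaround (though not a full substitute for a TPU's memory in the BERT-Large case) is gradient accumulation: several small forward/backward passes before one optimizer step. A toy PyTorch sketch, with a made-up model and sizes:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 2)                  # toy stand-in for a large model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

accum_steps = 4                           # effective batch = 4 x micro-batch
opt.zero_grad()
for step in range(accum_steps):
    x = torch.randn(8, 16)                # micro-batch that fits in memory
    loss = model(x).pow(2).mean() / accum_steps  # scale so gradients average
    loss.backward()                       # gradients accumulate in .grad
opt.step()                                # one update for the full effective batch
```

Dividing each loss by `accum_steps` makes the accumulated gradient match what a single large batch of 32 would produce, at the cost of more forward passes per update.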
We released code for our new #NIPS2018 paper on Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs! https://t.co/OJ0n9QolkI
— Andrew Gordon Wilson (@andrewgwils) October 31, 2018
We show that local optima are connected by simple, high-accuracy curves, like wormholes. With @tim_garipov, @Pavel_Izmailov, Podoprikhin, and Vetrov.
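The connecting curve is found by parameterizing a path between two independently trained weight vectors and minimizing the average loss along it; to my recollection of the paper's setup (symbols and the quadratic Bezier choice sketched from memory, not quoted):

```latex
% Path between trained weights w_1, w_2 with a trainable midpoint \theta:
\phi_\theta(t) = (1-t)^2\, w_1 + 2t(1-t)\,\theta + t^2\, w_2, \qquad t \in [0, 1]

% Train \theta to keep the loss low along the whole path:
\ell(\theta) = \mathbb{E}_{t \sim U[0,1]}\bigl[\mathcal{L}\bigl(\phi_\theta(t)\bigr)\bigr]
```

Every point on the resulting curve is itself a usable set of network weights, which is what makes the fast-ensembling application possible: sample a few points along the path and average their predictions.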