👍 code-through w/ nice viz, too!
"awtools Update: Visualizing Natural Disaster Cost" 👨‍💻 @awhstin https://t.co/daJD2DLZur #rstats #dataviz pic.twitter.com/sTwWKECjTX
— Mara Averick (@dataandme) March 25, 2019
Should be super easy to install & use (see the image above)
👉 pip install pytorch-pretrained-biggan
👉 https://t.co/L2RGZUWeIc
Implemented from the raw computational graph of the TF Hub module. Pretrained weights are the ones pre-trained by @ajmooch at DeepMind. Thanks Andrew!
— Thomas Wolf (@Thom_Wolf) March 21, 2019
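A minimal generation sketch following the package's README (the class label, model size, and truncation value here are just examples):

```python
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample)

# Load the pretrained generator (weights converted from the TF Hub module)
model = BigGAN.from_pretrained('biggan-deep-256')

# Class-conditional input and truncated noise; lower truncation = higher fidelity
truncation = 0.4
class_vector = torch.from_numpy(one_hot_from_names(['soap bubble'], batch_size=1))
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))

# Generate one 256x256 image tensor with values in [-1, 1]
with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)
```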
Don’t forget to check out the ML details and use the pretrained model in your own app, details here: https://t.co/CpZ8cYsx4A
— Nikhil Thorat (@nsthorat) March 21, 2019
Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly
K. Kandasamy et al.
Python Library: https://t.co/xfyMkVs6Z3
Docs: https://t.co/vf8jXVkP5o pic.twitter.com/ZMlOGvyhRT
— Brandon Amos (@brandondamos) March 18, 2019
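Dragonfly's docs show a one-call quick start; a minimal sketch along those lines (the toy objective and evaluation budget are assumptions):

```python
from dragonfly import minimise_function

# Toy 1-D objective; the domain is a list of [lower, upper] bounds per dimension
objective = lambda x: x[0] ** 4 - x[0] ** 2 + 0.1 * x[0]
domain = [[-10, 10]]

# max_capital caps how many function evaluations the optimiser may spend
min_val, min_pt, history = minimise_function(objective, domain, max_capital=100)
print(min_val, min_pt)
```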
Notes:
• Full code for the animation here on @observablehq https://t.co/dQzIyU9Jlp
• I’ve tried to build this in a fairly reproducible way, such that you give it a dataset containing entity, year, value, and it does the rest (see the sketch after this tweet)
• Feedback welcome! #dataviz
— John Burn-Murdoch (@jburnmurdoch) March 18, 2019
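The notebook itself is JavaScript on Observable; the reproducibility point is just the tidy entity/year/value contract, sketched here in hypothetical pandas (all names and numbers invented):

```python
import pandas as pd

# Hypothetical long-format input: one row per entity per year, as the tweet describes
df = pd.DataFrame({
    "entity": ["China", "China", "USA", "USA"],
    "year":   [1990, 2000, 1990, 2000],
    "value":  [390, 1200, 5960, 10250],
})

# One animation frame = the top-n entities ranked by value within a year
frame_2000 = (df[df["year"] == 2000]
              .sort_values("value", ascending=False)
              .head(10))
print(frame_2000)
```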
Collections of Papers & Code on Domain Adaptation https://t.co/zpCy4cm5vG pic.twitter.com/qSSPYnf78k
— ML Review (@ml_review) March 17, 2019
Reminder that it's super easy to use this Colab by following the instructions at the top, running the cells in order, and changing the dataset link and filename to a different .txt file :) https://t.co/mkyFTJ5V8Z
— Miles Brundage (@Miles_Brundage) March 17, 2019
Some GPT-2-117M outputs after fine-tuning on memorable movie quotes from (https://t.co/vZyr8mjNki), using @roadrunning01's Colab. The format of the training data was: [movie title] [line break] [quote] [sometimes another quote], sometimes with "<p>"/"</p>" thrown in the mix. pic.twitter.com/CjnoAr8L7t
— Miles Brundage (@Miles_Brundage) March 16, 2019
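Neither tweet includes the notebook's code. As one possible equivalent workflow, a sketch with the gpt-2-simple package (the dataset filename is hypothetical):

```python
import gpt_2_simple as gpt2

# Hypothetical dataset file in the "[movie title], line break, [quote]" format above
dataset = "movie_quotes.txt"

gpt2.download_gpt2(model_name="117M")   # fetch the pretrained 117M weights
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset, model_name="117M", steps=500)  # fine-tune on the quotes
gpt2.generate(sess)                     # sample from the fine-tuned model
```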
A generative chatbot project that uses an LSTM-based seq2seq model: https://t.co/vxrfHYxQwK
— François Chollet (@fchollet) March 14, 2019
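The repo has the full project; for orientation, here is the standard Keras encoder-decoder LSTM skeleton such a chatbot builds on (vocabulary size and hidden width are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

num_tokens, latent_dim = 256, 512  # assumed vocabulary size and hidden width

# Encoder reads the input message; only its final LSTM states are kept
encoder_inputs = keras.Input(shape=(None, num_tokens))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder generates the reply token by token, seeded with the encoder states
decoder_inputs = keras.Input(shape=(None, num_tokens))
decoder_outputs, _, _ = layers.LSTM(latent_dim, return_sequences=True,
                                    return_state=True)(decoder_inputs,
                                                       initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(num_tokens, activation="softmax")(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
```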
Code to reproduce the domain transfer experiment in the “Deep Learning for Classical Japanese Literature” paper.
The model will take a pixel image of an old-style Kuzushiji Kanji and try to predict how to write the modern version as a sequence of pen strokes. https://t.co/xqbd8g7SXH pic.twitter.com/ujFFOU0lim
— hardmaru (@hardmaru) March 13, 2019
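The stroke output follows the stroke-based encoding familiar from sketch-rnn; a hypothetical illustration of that representation (values invented):

```python
import numpy as np

# Stroke-3 encoding used by sketch-rnn-style models: each row is
# (dx, dy, pen_lifted), an offset from the previous point plus a binary pen state
strokes = np.array([
    [ 5.0,  0.0, 0],   # draw 5 units right
    [ 0.0,  5.0, 0],   # draw 5 units down
    [-5.0, -5.0, 1],   # return to the start with the pen lifted
], dtype=np.float32)

# Absolute pen positions are the cumulative sum of the offsets
positions = np.cumsum(strokes[:, :2], axis=0)
```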
Fast similarity search requires quantization.
FAIR ICLR paper+code: trains a net to map input distributions to maximally-uniform distributions on the sphere while preserving neighborhood relationships.
Paper: https://t.co/6yBW6iRAEb
Code: https://t.co/U6FxKA2JRw @alexsablay
— Yann LeCun (@ylecun) March 5, 2019
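The paper's contribution is the learned spherical mapping; the quantized search it feeds into looks like ordinary product quantization, e.g. in faiss. A generic illustration, not the paper's code:

```python
import numpy as np
import faiss

d = 64
xb = np.random.random((10000, d)).astype('float32')  # database vectors (made up)
xq = np.random.random((5, d)).astype('float32')      # query vectors

# Product quantization: compress each vector into 8 one-byte codes
index = faiss.IndexPQ(d, 8, 8)
index.train(xb)   # learn the sub-quantizer codebooks
index.add(xb)
distances, ids = index.search(xq, 5)  # approximate 5-nearest-neighbour search
```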
A new, multilingual version of the Universal Sentence Encoder (USE) model is now available on #TFHub!
Check it out here → https://t.co/N1JzuuX4MR pic.twitter.com/xPD1d9AUxd
— TensorFlow (@TensorFlow) March 5, 2019
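Loading a Hub module generally looks like the sketch below; the module handle and version are assumptions (check tfhub.dev for the current one), and the multilingual USE additionally needs the tensorflow_text package for its SentencePiece ops:

```python
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401, registers the SentencePiece ops the model needs

# Handle is an assumption; browse tfhub.dev for the current multilingual USE module
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

# One 512-dimensional vector per sentence, across languages
embeddings = embed(["The quick brown fox.", "El zorro marrón rápido."])
print(embeddings.shape)  # (2, 512)
```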