⚡️ Slides from my super-speed-lightning talk: “What's new in dbplyr 2.0.0?” (the most important part, obvi, being @allison_horst's new logo) https://t.co/dLhheLcehK pic.twitter.com/FXT5glnSaC
— Mara Averick (@dataandme) December 15, 2020
Well-written blog post on doing neuroevolution with Keras and PyGAD: https://t.co/q5Uxeo8Lgm
— François Chollet (@fchollet) December 8, 2020
Excited to share my latest blog post, which focuses on why recommender systems aren't faster on GPU and how you can fix that. TL;DR: RecSys models are different from NLP and CV and require specialized dataloaders to get data to the GPU efficiently. https://t.co/Luey8PNy0E
— Even Oldridge (@Even_Oldridge) December 7, 2020
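The core idea behind those specialized dataloaders can be sketched in a few lines (this is an illustrative sketch, not the actual NVTabular/RAPIDS API): instead of gathering tiny batches item-by-item as typical NLP/CV loaders do, you transfer large contiguous chunks of the tabular data at once and then slice batches out of each chunk. The `chunked_batches` helper and its parameters below are hypothetical names for illustration.

```python
import numpy as np

def chunked_batches(table, batch_size, chunk_rows=1_000_000):
    """Sketch of a RecSys-style dataloader.

    `table` stands in for a columnar dataset; with RAPIDS you would use
    a cuDF DataFrame so each chunk slice lives in GPU memory and the
    per-batch slicing below is essentially free. The point is that there
    is one big host-to-device transfer per chunk, not one per batch.
    """
    n = len(table)
    for start in range(0, n, chunk_rows):
        chunk = table[start:start + chunk_rows]  # one large transfer
        for b in range(0, len(chunk), batch_size):
            yield chunk[b:b + batch_size]        # cheap on-device slice

# Tiny usage example on host memory:
data = np.arange(10)
batches = list(chunked_batches(data, batch_size=3, chunk_rows=6))
```

With `chunk_rows=6` over 10 rows you get chunks `[0..5]` and `[6..9]`, split into batches of 3 (the last batch of each chunk may be short).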
If you missed it yesterday, check out the new tutorial on 🤖 supervised contrastive learning 🤖 on https://t.co/m6mT8SrKDD: https://t.co/VxEKjI5uNr
— François Chollet (@fchollet) December 3, 2020
First walk-thru I've seen showing how to compile @PyTorch and XLA for GPU - very handy for local development and testing https://t.co/JmPuQgsYMJ
— Jeremy Howard (@jeremyphoward) November 29, 2020
Our final blog post for the new version of {{tune}} is at: https://t.co/7JmXZl9CCT
This one focuses on expanded options for parallel processing!
When should you use all available cores? What kind of speed-ups should you expect? #rstats pic.twitter.com/leGc4xJTNa
— Max Kuhn (@topepos) November 27, 2020
You train models on GPUs. You visualize data on GPUs. Now you can do #MachineLearning preprocessing too. Keep your end-to-end pipelines all in #GPU memory with @rapidsai's new @scikit_learn-compatible preprocessing support in cuML. https://t.co/kOe6yMo2FW
— RAPIDS AI (@RAPIDSai) November 23, 2020
Some Intuition on Neural Tangent Kernel
New post with a (basic) Colab notebook you can play with: https://t.co/kNP2Ltpff2
— Ferenc Huszár🇪🇺 (@fhuszar) November 20, 2020
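For readers who haven't met it, the object the post builds intuition for is the neural tangent kernel of a network $f(x; \theta)$: the inner product of parameter gradients at two inputs,

$$
\Theta(x, x') = \nabla_\theta f(x; \theta)^\top \, \nabla_\theta f(x'; \theta).
$$

In the infinite-width limit this kernel stays (approximately) fixed during training, so gradient-descent training of the network behaves like kernel regression with $\Theta$.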
Nice overview of the different tools (pandas, dask, rapids, datatable) and file formats (csv, feather, hdf5, jay, parquet, pickle) for working with larger file-based tabular datasets in Python https://t.co/Crma3vAOph
— Ben Hamner (@benhamner) November 16, 2020
A very nice notebook/blog-post from @PatrickPlaten on how to create and train (to really nice results) an Encoder-Decoder model using pre-trained Bert/Roberta/GPT2/etc models as encoders and as decoders 👇 https://t.co/bZAUTaDv5g
— Thomas Wolf (@Thom_Wolf) November 9, 2020
In the last transformers release, we teamed up with @raydistributed @anyscalecompute to provide a simple yet powerful integration for hyperparameter tuning.
To demonstrate it, @richliaw shows you how to fine-tune BERT on MRPC leveraging multiple GPUs. ⤵️ https://t.co/TFaVKcRlme
— Hugging Face (@huggingface) November 2, 2020
🤩 Fab, quick guide to parameterized reporting (w/ code and video):
⚡️ “How to Automate PDF Reporting with R” by @mdancho84 https://t.co/IecRtHpIys via @bizScienc #rstats pic.twitter.com/EoiV53jjym
— Mara Averick (@dataandme) October 28, 2020