This is an absolutely first-class article with a much clearer explanation of the joy of developing with @ProjectJupyter and nbdev than I've ever managed! :) https://t.co/uaXHdIgYiU
— Jeremy Howard (@jeremyphoward) January 15, 2020
I've started to upload the videos for the Neural Nets for NLP class here: https://t.co/H1jHhqwTWz
— Graham Neubig (@gneubig) January 15, 2020
We'll be uploading the videos regularly throughout the rest of the semester, so please follow the playlist if you're interested. https://t.co/aYA0drnKiP
🐈 Nice code-through (w/ {patchwork}, too)
— Mara Averick (@dataandme) January 14, 2020
"Building Color-Palette Proofs of Concept w/ {purrr} and {ggplot2}" 🎨 @khailper https://t.co/3EZbBKbgfK #rstats pic.twitter.com/R07MVfGTgC
Learn how to automate most of the infrastructure work required to deploy PyTorch models in production using Cortex, an open source tool for deploying models as APIs on AWS. https://t.co/m8ErtF28I3 pic.twitter.com/Zz6uLqH5CU
— PyTorch (@PyTorch) January 14, 2020
At @AiWamri medical imaging we often need to make a little data go a long way. One under-appreciated approach for this is self-supervised learning. It's almost magical!
— Jeremy Howard (@jeremyphoward) January 14, 2020
I've written a little overview to help you get started. Let me know if you try it :) https://t.co/3mMZXcOuvQ
Amazing resource to make sense of, and the most of, reaction time distributions: https://t.co/Z38YJbPzmT pic.twitter.com/MZmfGiLTVG
— Guillaume Rousselet (@robustgar) January 13, 2020
Amazing! This real working deep learning image classifier is running, for free, on https://t.co/KxINILgEDv, is written entirely in @ProjectJupyter notebooks with ipywidgets, & deployed with Voila.
— Jeremy Howard (@jeremyphoward) January 13, 2020
13 lines of code :) Try it here: https://t.co/yQfVUZGaco pic.twitter.com/hR8nJFy6PX
“Meet AdaMod: a new deep learning optimizer with memory” by Less Wright https://t.co/TmtflP9s6h
— Jeremy Howard (@jeremyphoward) January 11, 2020
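The "memory" in AdaMod refers to a long-term exponential average of the adaptive per-parameter learning rates, which is used to clip the current rate and damp the unstable, overly large steps Adam can take early in training. A minimal NumPy sketch of one update step, based on my reading of the AdaMod paper (the function name, state layout, and β3 value here are illustrative choices, not the article's code):

```python
import numpy as np

def adamod_step(param, grad, state, lr=1e-3, betas=(0.9, 0.999, 0.999), eps=1e-8):
    """One AdaMod update: Adam plus a bound on the step size
    taken from an exponential moving average of past step sizes.
    `state` holds t (step count), m, v (Adam moments), and s (the
    learning-rate memory), all initialized to zero."""
    b1, b2, b3 = betas
    state["t"] += 1
    t = state["t"]
    # Standard Adam moment estimates with bias correction.
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** t)
    v_hat = state["v"] / (1 - b2 ** t)
    step_size = lr / (np.sqrt(v_hat) + eps)
    # The "memory": clip the current step size by its running average,
    # so no single update can be much larger than recent history allows.
    state["s"] = b3 * state["s"] + (1 - b3) * step_size
    step_size = np.minimum(step_size, state["s"])
    return param - step_size * m_hat
```

Because `s` starts at zero and grows slowly, early steps are deliberately small, which is exactly the warmup-like behavior the optimizer is after.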
"I don't know why but I teared up after understanding something so clearly. Like just how beautiful the intuition is" -- this comment on my ConvNet video tutorial made me tear up too. https://t.co/DhmouBC22Y
— Brandon Rohrer (@_brohrer_) January 10, 2020
Missed this great article on the Clever Hans effect in NLP
— Thomas Wolf (@Thom_Wolf) January 10, 2020
Many SOTA results we get with Bert-like models are due to these models "breaking" our datasets, in a bad sense, by exploiting their weaknesses. @benbenhh's piece has a nice overview & advice
Also follow @annargrs on this https://t.co/xPb5Nj87z9
Very happy to share our latest work accepted at #ICLR2020: we prove that a Self-Attention layer can express any CNN layer. 1/5
— Jean-Baptiste Cordonnier (@jb_cordonnier) January 10, 2020
📄Paper: https://t.co/Cm61A3PWRA
🍿Interactive website: https://t.co/FTpThM3BQc
🖥Code: https://t.co/xSfmFCy0U2
📝Blog: https://t.co/3bp59RfAcj pic.twitter.com/X1rNS1JvPt
Great talk! Explains how to vectorize slow pandas code. Here: replacing .apply when working w/ conditional statements.
— Sebastian Raschka (@rasbt) January 10, 2020
Was guilty of using .apply myself a lot recently because I thought of it as elegant. Turns out my old & actually preferred method, numpy.where, is a lot faster! https://t.co/bFzRAMljqi
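The pattern in that thread can be sketched in a few lines: a row-wise `.apply` calls a Python lambda once per row, while `numpy.where` evaluates the condition over whole columns at once. The DataFrame and column names here are made-up example data, not from the talk:

```python
import numpy as np
import pandas as pd

# Hypothetical example data: per-order revenue with a discount flag.
df = pd.DataFrame({
    "revenue": [120.0, 80.0, 300.0, 45.0],
    "discounted": [True, False, True, False],
})

# Slow: .apply with axis=1 invokes a Python function once per row.
df["net_apply"] = df.apply(
    lambda row: row["revenue"] * 0.9 if row["discounted"] else row["revenue"],
    axis=1,
)

# Fast: np.where picks elementwise between two vectorized expressions.
df["net_where"] = np.where(df["discounted"], df["revenue"] * 0.9, df["revenue"])

# Both give identical results; only the second scales well.
assert (df["net_apply"] == df["net_where"]).all()
```

The speedup comes from moving the per-row branch out of the Python interpreter and into NumPy's compiled elementwise kernels.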