Big O https://t.co/eZ2bbpDzwV pic.twitter.com/6hDwny42Dq
— Chris Albon (@chrisalbon) November 12, 2019
How to create @drob’s favorite graph:
1. count()
2. Reorder by the count with fct_reorder()
3. Make a bar plot with geom_col()
4. Flip with coord_flip() to easily read the labels #rstatsdc pic.twitter.com/T2BhXB8aNq
— Emily Robinson (@robinson_es) November 9, 2019
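For anyone who wants to try it, here is a minimal sketch of those four steps in R; the `flights` data frame and its `carrier` column are made up for illustration:

```r
# A minimal sketch of the four-step recipe; `flights` and its `carrier`
# column are hypothetical stand-ins for your own data.
library(dplyr)
library(forcats)
library(ggplot2)

flights %>%
  count(carrier) %>%                             # 1. count each category
  mutate(carrier = fct_reorder(carrier, n)) %>%  # 2. reorder the factor by the count
  ggplot(aes(carrier, n)) +
  geom_col() +                                   # 3. bar plot from the pre-computed counts
  coord_flip()                                   # 4. flip so the labels are easy to read
```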
Slides for our presentation "AI for healthcare: Scaling access and quality of care for everyone" with @anithakan today at #MLConfSF: https://t.co/cEHhaohifa.
— Xavier Amatriain (@xamat) November 9, 2019
.@data_stephanie showed us how to make amazing race plots with gganimate 📊🤯
Slides and code here! https://t.co/DFP2LGKxmp #rstatsdc pic.twitter.com/02QvvVlnR8
— David Robinson (@drob) November 8, 2019
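The linked slides and code have the real examples; as a rough sketch of the general gganimate pattern for a race plot (the `standings` data frame and its columns are invented here, not taken from the talk):

```r
# Rough sketch of a gganimate "race" plot; the `standings` data frame
# (columns: year, team, points) is invented for illustration.
library(dplyr)
library(ggplot2)
library(gganimate)

standings %>%
  group_by(year) %>%
  mutate(rank = rank(-points)) %>%    # rank teams within each animation frame
  ungroup() %>%
  ggplot(aes(x = rank, y = points, group = team, fill = team)) +
  geom_col(width = 0.8) +
  coord_flip() +
  transition_states(year, transition_length = 4, state_length = 1) +
  ease_aes("cubic-in-out") +
  labs(title = "Year: {closest_state}", x = NULL, y = "Points")
```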
slides from my keynote talk at #emnlp2019 https://t.co/TG5nTdTCkj
— Kyunghyun Cho (@kchonyc) November 8, 2019
New blog post: Unsupervised cross-lingual representation learning
An overview of learning cross-lingual representations without supervision, from the word level to deep multilingual models. Based on our ACL 2019 tutorial. https://t.co/z0kktVNu8m pic.twitter.com/HcCcB5sDEb
— Sebastian Ruder (@seb_ruder) November 6, 2019
In our newest paper we discuss the frontier of simulation-based inference (aka likelihood-free inference) for a broad audience. We identify three main forces driving the frontier including: #ML, active learning, and integration of autodiff and probprog. https://t.co/R6vMUAnaul pic.twitter.com/ZOmCWcNSCl
— Kyle Cranmer (@KyleCranmer) November 6, 2019
Watch "A guide to modern reproducible data science with R" with @_inundata from rstudio::conf(2019)
— RStudio (@rstudio) November 6, 2019
🎦https://t.co/iuc1jTKQ1F
Learn more about and register for rstudio::conf(2020) in San Francisco at https://t.co/rYJqkzCywm #rstudioconf #rstats #DataScience pic.twitter.com/cTTYvo36VM
Understanding UMAP - an interactive introduction to the algorithm and how to use (and mis-use) it from @_coenen and @adamrpearce. A must read for anyone interested in dimension reduction. https://t.co/yzUoNQrnbR
— Leland McInnes (@leland_mcinnes) November 5, 2019
UMAP is a real advance in visualizing high dimensional data. But how should we read it? And when should we use it? Take a look at this interactive essay by @_coenen and @adamrpearce! https://t.co/k5jwCfcBoD pic.twitter.com/G6H9F0FThL
— Martin Wattenberg (@wattenberg) November 5, 2019
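If you want to experiment while reading the essay, here is a minimal sketch using the uwot package in R; the parameter values below are just ones to play with, not recommendations from the essay:

```r
# Minimal sketch: run UMAP (via the uwot package) on a numeric matrix and see
# how its two main knobs, n_neighbors and min_dist, change the picture.
library(uwot)

x <- as.matrix(iris[, 1:4])   # any numeric matrix works; iris is just a stand-in

local_view  <- umap(x, n_neighbors = 5,  min_dist = 0.001)  # emphasizes fine local structure
global_view <- umap(x, n_neighbors = 50, min_dist = 0.5)    # keeps more of the global layout

plot(local_view,  col = iris$Species, pch = 19, main = "n_neighbors = 5, min_dist = 0.001")
plot(global_view, col = iris$Species, pch = 19, main = "n_neighbors = 50, min_dist = 0.5")
```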
The slides for my talk on human-in-the-loop automl with dabl are now on github: https://t.co/IUq3Z4szpF
There's also a first release of dabl on pypi: 0.1.1. Please check it out and let me know what you think!
— Andreas Mueller (@amuellerml) November 5, 2019
One underappreciated fact highlighted in this article: the theoretical receptive field of modern neural networks is *ridiculously* large. Often thousands of pixels, while the input image is only hundreds. https://t.co/WQiMnG8v8w
— Chris Olah (@ch402) November 4, 2019
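To get a feel for how fast the theoretical receptive field grows, here is a small back-of-the-envelope calculation in R; the layer stacks are arbitrary examples, not taken from the article:

```r
# Back-of-the-envelope theoretical receptive field of a stack of conv layers.
# At each layer the field grows by (kernel - 1) * jump, where jump is the
# stride of that layer's input relative to the original image.
receptive_field <- function(kernels, strides) {
  rf <- 1
  jump <- 1
  for (i in seq_along(kernels)) {
    rf <- rf + (kernels[i] - 1) * jump
    jump <- jump * strides[i]
  }
  rf
}

# Arbitrary stacks of 3x3 convs with a stride-2 downsampling every other layer.
receptive_field(rep(3, 10), rep(c(1, 2), 5))   # 125 input pixels
receptive_field(rep(3, 16), rep(c(1, 2), 8))   # 1021 input pixels -- wider than a typical input image
```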