Science is all about getting good at "I don't know": getting good at recognising "I don't know", getting good at saying "I don't know", getting good at knowing what to do next.
– Michael A Osborne (@maosbot) August 14, 2021
Men's Tennis Grand Slam Wins: since 2003, Federer, Nadal, and Djokovic have won 20 Grand Slams each (60 in total), while everyone else combined has won 14. I guess they really know their way around a tennis racket... Source: https://t.co/wHAJqpzh9K pic.twitter.com/CsDZqEuidB
– Simon Kuestenmacher (@simongerman600) August 14, 2021
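That lopsided split is easy to sanity-check; here is a back-of-envelope calculation in Python, taking the tweet's figures as given (not independently verified):

```python
# Big Three's share of Grand Slam titles since 2003, per the tweet's numbers
big3 = 20 * 3   # Federer, Nadal, and Djokovic: 20 titles each
others = 14     # everyone else combined
print(f"{big3} of {big3 + others} titles -> {big3 / (big3 + others):.0%}")
# 60 of 74 titles -> 81%
```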
There are now many ways to learn about DALL·E mini, the text-to-image generator:
– Boris Dayma (@borisdayma) August 13, 2021
for viewers, see the presentation: https://t.co/eGV22uhBn0
for readers, see the report: https://t.co/liX9qN79hj
play with the demo: https://t.co/OiBcNrqoBv
Vaccines are a gift to humanity. Soon after the introduction of the vaccine, measles was eradicated. How good is it not to have measles around anymore! Go to the link to see similar charts for other vaccines. Source: https://t.co/kVvTOL3cKj pic.twitter.com/c9Z4cwsftn
– Simon Kuestenmacher (@simongerman600) August 13, 2021
This is a really great NLP Transformer survey, indeed! Also, I like that they included a section focusing on the three main ways to utilize a pre-trained transformer (assuming most of us don't have the infrastructure to train them from scratch): https://t.co/lGULN6vCwV https://t.co/HGRJoGmISM pic.twitter.com/UEHL0MSPwo
– Sebastian Raschka (@rasbt) August 13, 2021
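If you want to see what those options look like in practice, here is a minimal sketch contrasting two common ways to reuse a pre-trained transformer without training it from scratch. This is illustrative only, not taken from the survey; the model name and classifier head are arbitrary examples:

```python
# Two common ways to reuse a pre-trained transformer, sketched with Hugging Face.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
backbone = AutoModel.from_pretrained("distilbert-base-uncased")
head = torch.nn.Linear(backbone.config.hidden_size, 2)  # task-specific classifier

batch = tokenizer(["an example sentence"], return_tensors="pt")

# Option 1: feature extraction -- freeze the backbone, train only the head.
for p in backbone.parameters():
    p.requires_grad = False
with torch.no_grad():
    features = backbone(**batch).last_hidden_state[:, 0]  # first-token embedding
logits = head(features)

# Option 2: fine-tuning -- unfreeze and update backbone and head jointly.
for p in backbone.parameters():
    p.requires_grad = True
optimizer = torch.optim.AdamW(
    list(backbone.parameters()) + list(head.parameters()), lr=2e-5
)
```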
Data practitioners, I'm BEGGING you: stop working on (and enabling) projects like this https://t.co/n872NFd2JM
– Angela Bassa (@AngeBassa) August 13, 2021
Genji-python 6B is now on @huggingface Spaces using @Gradio
– AK (@ak92501) August 13, 2021
link: https://t.co/4vQm6oVkdq https://t.co/UG2xH9cCxK pic.twitter.com/pVegXpPLhu
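For context, hosting a model demo on Spaces with Gradio usually takes only a few lines. Here is a minimal sketch of the pattern, with a placeholder standing in for the actual Genji-python 6B model call:

```python
# Minimal Gradio demo skeleton; the completion function is a stand-in,
# not Genji-python's real loading or inference code.
import gradio as gr

def complete_code(prompt: str) -> str:
    # A real Space would run the Genji-python 6B model on the prompt here.
    return prompt + "\n# ...model completion goes here..."

gr.Interface(fn=complete_code, inputs="text", outputs="text",
             title="Genji-python 6B (sketch)").launch()
```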
Mobile-Former: Bridging MobileNet and Transformer
– AK (@ak92501) August 13, 2021
pdf: https://t.co/Ssr6oFOjy7
abs: https://t.co/lctrhRG2Oq
achieves 77.9% top-1 accuracy at 294M FLOPs, gaining 1.3% over MobileNetV3 but saving 17% of computations pic.twitter.com/ChNT9kJtSy
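Going by the abstract alone, the core idea is a MobileNet-style convolutional branch and a small transformer branch running in parallel and exchanging information through cross-attention in both directions. Here is a rough PyTorch sketch of that bridge pattern; the dimensions, module choices, and wiring are my own guesses, not the paper's design:

```python
# Rough sketch of a bidirectional conv<->transformer bridge block.
# All sizes and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class BridgeBlock(nn.Module):
    def __init__(self, channels=64, token_dim=192, num_tokens=6, heads=4):
        super().__init__()
        # Learnable global tokens for the transformer branch
        self.tokens = nn.Parameter(torch.randn(1, num_tokens, token_dim))
        # Local branch: depthwise + pointwise conv, MobileNet-style
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.ReLU6(inplace=True),
        )
        self.to_tok = nn.Linear(channels, token_dim)  # pixels -> token space
        # Mobile->Former: tokens attend to pixels
        self.attn_m2f = nn.MultiheadAttention(token_dim, heads, batch_first=True)
        # Former->Mobile: pixels attend to tokens
        self.attn_f2m = nn.MultiheadAttention(channels, heads, batch_first=True,
                                              kdim=token_dim, vdim=token_dim)
        self.former = nn.TransformerEncoderLayer(token_dim, heads,
                                                 dim_feedforward=2 * token_dim,
                                                 batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        pixels = x.flatten(2).transpose(1, 2)          # (b, h*w, c)
        tokens = self.tokens.expand(b, -1, -1)
        # Mobile -> Former: global tokens gather context from the feature map
        keys = self.to_tok(pixels)
        tokens = tokens + self.attn_m2f(tokens, keys, keys)[0]
        tokens = self.former(tokens)
        # Local convolutional processing
        x = self.local(x)
        # Former -> Mobile: the feature map queries the global tokens
        pixels = x.flatten(2).transpose(1, 2)
        pixels = pixels + self.attn_f2m(pixels, tokens, tokens)[0]
        return pixels.transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(2, 64, 14, 14)
print(BridgeBlock()(x).shape)  # torch.Size([2, 64, 14, 14])
```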
*Avoid reading the paper*
– Jia-Bin Huang (@jbhuang0604) August 13, 2021
Instead of spending time reading the actual paper, find resources that are much easier to digest, e.g., a talk, a YouTube video, teaser results, an introductory video, or an overview figure.
Very often understanding the gist of the paper is all you need. pic.twitter.com/DVWhjEAd28
Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations
– AK (@ak92501) August 13, 2021
pdf: https://t.co/ZPTagL3LzO
abs: https://t.co/TfhdXimw4s
a scalable approach for pretraining with over a billion images in order to improve a production Unified Visual Embedding model pic.twitter.com/bFmlbpD01e
Jurassic-1: Technical Details and Evaluation
– AK (@ak92501) August 12, 2021
pdf: https://t.co/FzG56j1kHw
github: https://t.co/i2RQjyLVU9
Jurassic-1 is a pair of auto-regressive language models recently released by AI21 Labs, consisting of J1-Jumbo, a 178B-parameter model, and J1-Large, a 7B-parameter model pic.twitter.com/MS0DGlypTm
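Both models are served through AI21's hosted API rather than as released weights. Here is a hedged sketch of what a completion call over HTTP might look like; the endpoint path and JSON field names are assumptions based on AI21 Studio's conventions at the time, so check the current docs before relying on them:

```python
# Hypothetical completion request to a hosted J1-Large model.
# The URL and request fields below are assumptions, not verified API details.
import os
import requests

resp = requests.post(
    "https://api.ai21.com/studio/v1/j1-large/complete",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['AI21_API_KEY']}"},
    json={"prompt": "Jurassic-1 is", "maxTokens": 32, "numResults": 1},  # assumed fields
)
print(resp.json())
```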
ML strategy tip
– Brandon Rohrer (@_brohrer_) August 12, 2021
When you have a problem, build two solutions: a deep Bayesian transformer running on multicloud Kubernetes, and a SQL query built on a stack of egregiously oversimplifying assumptions. Put one on your resume, the other in production. Everyone goes home happy.