.@Gradio Demo for YOLOv5 Det v0.4 on @huggingface Spaces
— AK (@ak92501) May 28, 2022
demo: https://t.co/P5n6kg8ZZG
Join the Blocks Party: https://t.co/JnhdF6acqX pic.twitter.com/AU7WXa0QAn
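A minimal sketch of what such a Gradio detection demo can look like, assuming the standard ultralytics/yolov5 Torch Hub entry point and the current Gradio Interface API (the actual Space's code may differ):

```python
import gradio as gr
import torch

# Load a small pretrained YOLOv5 model from Torch Hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect(image):
    """Run YOLOv5 on an input image and return the annotated result."""
    results = model(image)
    # render() draws boxes and labels and returns the annotated images.
    return results.render()[0]

demo = gr.Interface(
    fn=detect,
    inputs=gr.Image(type="pil"),
    outputs=gr.Image(type="numpy"),
    title="YOLOv5 Detection Demo",
)

if __name__ == "__main__":
    demo.launch()
```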
"I don't really trust papers out of 'Top Labs' anymore" (https://t.co/Bcks13TAIb) -- interesting commentary on the current direction of DL research. Remember when we could actually try & run latest models?
— Sebastian Raschka (@rasbt) May 28, 2022
Maybe we need a subfield and/or conference on Practical Machine Learning.
Sharing this not only for the hilarious headline but for the astonishing data. Asia is aging at a rapid pace. Fertility rates below 1 might soon be reached in China too. Big parts of Asia are shrinking. India is the big exception. pic.twitter.com/ABnR6POdtq
— Simon Kuestenmacher (@simongerman600) May 27, 2022
When lame machine learning failures happen in TV sports analytics, it's an amusing anecdote.
— Jeremy Howard (@jeremyphoward) May 27, 2022
But the same thing happens in predictive policing, bond and sentence recommendations, health insurance algorithms blocking needed care, etc...
Towards Learning Universal Hyperparameter Optimizers with Transformers
— AK (@ak92501) May 27, 2022
abs: https://t.co/yON7zKZCRy
Extensive experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, and can be further improved via its function uncertainty estimates. pic.twitter.com/Fhohl2mXRL
AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
— AK (@ak92501) May 27, 2022
abs: https://t.co/VHJYoC4ewJ
project page: https://t.co/goVCa1VXbi
github: https://t.co/AXYYJr2vcd pic.twitter.com/d1gQwwgVMw
New blog post about the magic of diffusion guidance! https://t.co/BITNC4nMLM
— Sander Dieleman (@sedielem) May 26, 2022
Guidance powers the recent spectacular results in text-conditioned image generation (DALL·E 2, Imagen), so the time is right for a closer look at this simple, yet extremely effective technique.
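For context, the workhorse behind those results is classifier-free guidance, which extrapolates from an unconditional denoiser prediction toward the conditional one. A minimal sketch, assuming a hypothetical eps_model(x, t, cond) denoiser interface (names are illustrative, not from the post):

```python
import torch

def guided_eps(eps_model, x, t, cond, guidance_scale):
    """Classifier-free guidance sketch.

    Extrapolates from the unconditional prediction toward the
    conditional one:

        eps = eps_uncond + w * (eps_cond - eps_uncond)

    guidance_scale w = 1 recovers plain conditional sampling;
    w > 1 trades sample diversity for fidelity to the conditioning.
    """
    eps_cond = eps_model(x, t, cond)    # conditioned on text/class
    eps_uncond = eps_model(x, t, None)  # conditioning dropped
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```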
Inception Transformer
— AK (@ak92501) May 26, 2022
abs: https://t.co/EoPDBOafSS
iFormer-S hits 83.4% top-1 accuracy on ImageNet-1K, 3.6% higher than DeiT-S, and even slightly better than the much bigger Swin-B (83.3%) with only 1/4 of the parameters and 1/3 of the FLOPs. pic.twitter.com/TdtFJfW7w1
Pretraining is All You Need for Image-to-Image Translation
— AK (@ak92501) May 26, 2022
abs: https://t.co/AafrOKGSak
project page: https://t.co/jLrY13lF0N pic.twitter.com/w5fStjw1mm
Fine-Tuning Transformers: Vocabulary Transfer https://t.co/JDmnMseAAq
— Jeremy Howard (@jeremyphoward) May 25, 2022
Why GANs are overkill for NLP
— AK (@ak92501) May 23, 2022
abs: https://t.co/zwjCFxh22z pic.twitter.com/tuM1ufFC7x
Mathematical expressions can now be displayed in Markdown on GitHub using the $$ and $ delimiters - all with the help of the wonderful @MathJax project. https://t.co/831DR5aLay
— GitHub (@github) May 19, 2022
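For example, a README can now mix inline math (single dollars) with display math (double dollars), and MathJax renders both; a small illustrative snippet, not taken from the announcement itself:

```markdown
Inline math uses single dollars: the loss is
$L = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2$.

Display math uses double dollars:

$$
\nabla_\theta L = \frac{2}{n} \sum_i (\hat{y}_i - y_i)\, \nabla_\theta \hat{y}_i
$$
```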