Really nice update.— Soumith Chintala (@soumithchintala) May 7, 2020
With a semi-automated ML pipeline that extracts results and tables from papers, they're able to scale much faster.
They released the code and paper for extracting results from papers at https://t.co/l7caAau4RK and https://t.co/XKHCTwjadK https://t.co/pnvPKmwCqY
CNN Explainer is an interactive visualization tool for learning purposes. It runs a pre-trained CNN in the browser and lets you explore the layers and operations: https://t.co/Zi7lieHeIM— Denny Britz (@dennybritz) May 1, 2020
Paper: https://t.co/6RhexIv52U
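The basic operation that CNN Explainer lets you step through can be sketched in a few lines of NumPy. This is a hypothetical minimal example of a valid 2D convolution, not code from the tool itself:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most DL frameworks):
    slide the kernel over the image and take elementwise products and sums."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 edge-detection kernel on a 5x5 input yields a 3x3 feature map.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
print(conv2d(image, kernel).shape)  # (3, 3)
```

Each output cell is one "neuron activation" the tool visualizes when you hover over a feature map.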
Much of the world’s information is stored in table form. TAPAS, a new model based on the #BERT architecture, is optimized for parsing tabular data over a wide range of structures and domains for application to question-answering tasks. Learn more below: https://t.co/U9zRxaUvik— Google AI (@GoogleAI) April 30, 2020
The Once-For-All (OFA) network from Han Cai, @SongHan_MIT et al.— PyTorch (@PyTorch) April 29, 2020
Train one flexible network, deploy subsets that specialize to mobile, cloud, and IoT efficiently, without retraining!
News: https://t.co/hQk7FTzQLU
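The core OFA idea, training one over-parameterized network once and deploying narrower sub-networks by slicing shared weights rather than retraining, can be illustrated with a toy NumPy sketch. This is a hypothetical example with simulated weights, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "full" dense layer with 8 hidden units, imagined as already trained.
W_full = rng.standard_normal((8, 4))  # 8 output units, 4 inputs
b_full = rng.standard_normal(8)

def subnet_forward(x, width):
    """Deploy a narrower sub-network by slicing the first `width` units.

    The sub-network shares weights with the full network, so a smaller
    model for a mobile or IoT target is obtained by slicing, not retraining.
    """
    W = W_full[:width]
    b = b_full[:width]
    return np.maximum(W @ x + b, 0.0)  # ReLU activation

x = rng.standard_normal(4)
print(subnet_forward(x, 8).shape)  # full network: (8,)
print(subnet_forward(x, 4).shape)  # mobile-sized subset: (4,)
```

In the real OFA work the sliced dimensions are kernel size, depth, and width of a CNN, and a progressive-shrinking training schedule keeps every sub-network accurate; the weight-sharing mechanism is the same in spirit.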
Just pushed the code of a Chrome extension that turns every Instagram post into a 3D image using #3DPhotoInpainting. No GPU needed thanks to @GoogleColab, but a bit of patience to set it up ;-)— Cyril Diagne (@cyrildiagne) April 19, 2020
Demo: @parrstudio's amazing work
Code: https://t.co/59yJUvRHxE #AIUX #Interaction #ML
Another Transformer variant with lower computational complexity, suitable for long-range tasks, is Sparse Sinkhorn Attention (https://t.co/qWp2AJVdkd) by Yi Tay et al.— hardmaru (@hardmaru) April 8, 2020
A GitHub Colab reimplementation in PyTorch (https://t.co/B5FcGuTZhy) also combined it with ideas from Reformer. https://t.co/WSwZuSRyPb
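At the heart of Sparse Sinkhorn Attention is Sinkhorn normalization, which balances a score matrix into an approximately doubly-stochastic one (a soft permutation) by alternating row and column normalizations. A minimal sketch of that balancing step, simplified relative to the paper (which works on block-sorted sequences and in log space for stability):

```python
import numpy as np

def sinkhorn(scores, n_iters=20):
    """Iteratively normalize rows and columns of exp(scores) so the result
    is approximately doubly stochastic (rows and columns each sum to ~1).
    Such matrices act as soft permutations for learned sorting."""
    P = np.exp(scores)
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.standard_normal((4, 4)))
print(P.sum(axis=0))  # columns ~ 1
print(P.sum(axis=1))  # rows ~ 1
```

In the attention variant, this soft sorting rearranges sequence blocks so that local attention within a block can reach relevant distant tokens, reducing the quadratic cost of full attention.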