FlowTron: Improved Text to Speech Engine from NVIDIA
— PyTorch (@PyTorch) May 14, 2020
Try it out now!
Paper: https://t.co/1vMnpQSBbi
Code: https://t.co/O0sYbd2LTX https://t.co/B0I74FmxIY
I'm releasing two brand new packages next week - dataframe_image and jupyter_to_medium
— Ted Petrou (@TedPetrou) May 9, 2020
dataframe_image embeds pandas DataFrames as images in pdf and markdown documents
jupyter_to_medium - publishes jupyter notebooks as Medium blog posts https://t.co/tfuVj6O4nB
Really nice update.
— Soumith Chintala (@soumithchintala) May 7, 2020
With a semi-automated ML pipeline that extracts results and tables from papers, they're able to scale much faster.
They released the code and paper to extract results from papers, at: https://t.co/l7caAau4RK and https://t.co/XKHCTwjadK https://t.co/pnvPKmwCqY
CNN Explainer is an interactive visualization tool for learning purposes. It runs a pre-trained CNN in the browser and lets you explore the layers and operations: https://t.co/Zi7lieHeIM
— Denny Britz (@dennybritz) May 1, 2020
Video: https://t.co/JqZvoUbojZ
Code: https://t.co/TxsNAuj7hA
Paper: https://t.co/6RhexIv52U pic.twitter.com/KKmnpCsIxt
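The layer-by-layer operations that CNN Explainer visualizes boil down to repeated 2D convolutions. Below is a minimal pure-Python sketch of that core operation (valid cross-correlation, stride 1, no padding); the averaging kernel is an illustrative assumption, not taken from the tool itself.

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation of a 2D list `image` with a 2D `kernel`."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Element-wise multiply the kernel with the image patch, then sum.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Example: a 3x3 averaging kernel over a 4x4 image yields a 2x2 feature map.
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
kernel = [[1 / 9] * 3 for _ in range(3)]
feature_map = conv2d(image, kernel)
```

Each output value is the mean of the 3x3 patch under the kernel, which is exactly the sliding-window picture the tool animates.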
Much of the world’s information is stored in table form. TAPAS, a new model based on the #BERT architecture, is optimized for parsing tabular data over a wide range of structures and domains for application to question-answering tasks. Learn more below: https://t.co/U9zRxaUvik
— Google AI (@GoogleAI) April 30, 2020
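A key idea behind feeding tables to a BERT-style model like TAPAS is linearizing the table into a token sequence while tagging each token with its row and column index, so the model can recover the 2D structure. The sketch below is an illustrative simplification under that assumption, not the actual TAPAS preprocessing code.

```python
def linearize(question, header, rows):
    """Flatten a question plus table into (token, row_id, col_id) triples.

    Convention assumed here: question tokens get (0, 0), header tokens
    get row 0, and data cells get 1-based row/column indices.
    """
    seq = [(tok, 0, 0) for tok in question.split()]
    for col, name in enumerate(header, start=1):
        seq.extend((tok, 0, col) for tok in name.split())
    for row, cells in enumerate(rows, start=1):
        for col, cell in enumerate(cells, start=1):
            seq.extend((tok, row, col) for tok in str(cell).split())
    return seq

seq = linearize(
    "how tall is Everest",
    ["mountain", "height"],
    [["Everest", "8848"], ["K2", "8611"]],
)
```

The row/column tags play the role of TAPAS's extra positional embeddings: a downstream QA head can point at the cell `("8848", row 1, col 2)` as the answer.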
Minimal PyTorch implementation of YOLOv4 https://t.co/PvXlrfLBcH #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) April 30, 2020
The Once-For-All (OFA) network from Han Cai, @SongHan_MIT et al.
— PyTorch (@PyTorch) April 29, 2020
Train one flexible network, then deploy specialized sub-networks to mobile, cloud, and IoT devices efficiently, without retraining!
Code: https://t.co/doCgCUzP43
Paper: https://t.co/E7tQafPGci
News: https://t.co/hQk7FTzQLU pic.twitter.com/MegrXNwBUA
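The deploy-without-retraining trick rests on weight sharing: a sub-network is extracted by slicing the trained parent's parameters. Here is a conceptual pure-Python sketch of that idea for "elastic width" on a single linear layer; the shapes and the keep-the-first-channels slicing rule are illustrative assumptions, not the paper's exact scheme.

```python
def linear(x, weights):
    """y = W x, with W given as a list of rows."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# "Full" trained layer: 4 output units over 3 inputs.
full_w = [[1, 0, 0],
          [0, 1, 0],
          [0, 0, 1],
          [1, 1, 1]]

def subnetwork(weights, n_out):
    # Elastic width: keep only the first n_out output channels.
    # The sub-layer shares the parent's weights, so no retraining.
    return weights[:n_out]

x = [2, 3, 4]
full_out = linear(x, full_w)                  # 4 outputs
small_out = linear(x, subnetwork(full_w, 2))  # 2 outputs, same weights
```

The small model's outputs match the first channels of the full model exactly, which is what lets one training run serve many deployment targets.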
Seems like the new ResNet variant as a backbone for object detection works pretty well! That naming though ... looks like someone typed "ResNets" a bit too fast. "ResNeSt: Split-Attention Network": https://t.co/fZBHr9odJr pic.twitter.com/kWvQ42Xntc
— Sebastian Raschka (@rasbt) April 25, 2020
Just pushed the code of a Chrome extension that turns every Instagram post into a 3D image using #3DPhotoInpainting. No GPU needed thanks to @GoogleColab, but a bit of patience to set it up ;-)
— Cyril Diagne (@cyrildiagne) April 19, 2020
Demo: @parrstudio's amazing work
Code: https://t.co/59yJUvRHxE #AIUX #Interaction #ML pic.twitter.com/86mMBWdm7V
Check out EfficientDet: a new class of state-of-the-art object detectors that provide an order of magnitude better efficiency, released along with all code and pre-trained models. Learn more below! https://t.co/rLk3YbME9o
— Google AI (@GoogleAI) April 15, 2020
STEFANN: Scene Text Editor using Font Adaptive Neural Network
— roadrunner01 (@ak92501) April 15, 2020
pdf: https://t.co/WIkFfsL0i2
abs: https://t.co/sHIYHm8PMb
project page: https://t.co/cCCOKSKDrc
github: https://t.co/dkOYTNOjgE pic.twitter.com/A65TXpfC0C
Another Transformer variant with lower computational complexity, suitable for long-range tasks, is Sparse Sinkhorn Attention (https://t.co/qWp2AJVdkd) by Yi Tay et al.
— hardmaru (@hardmaru) April 8, 2020
A GitHub Colab reimplementation in PyTorch (https://t.co/B5FcGuTZhy) also combined it with ideas from Reformer. https://t.co/WSwZuSRyPb pic.twitter.com/54fJrRbhEA
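At the heart of Sparse Sinkhorn Attention is Sinkhorn normalization: alternately normalizing the rows and columns of a positive matrix so it approaches a doubly stochastic matrix (a soft permutation over blocks). Below is a minimal pure-Python sketch of that iteration; the real method operates in log space on learned block-sorting scores, so treat this only as the bare numerical idea.

```python
def sinkhorn(matrix, n_iters=20):
    """Alternate row and column normalization of a positive matrix."""
    m = [row[:] for row in matrix]
    for _ in range(n_iters):
        # Row normalization: scale each row to sum to 1.
        m = [[v / sum(row) for v in row] for row in m]
        # Column normalization: scale each column to sum to 1.
        col_sums = [sum(row[j] for row in m) for j in range(len(m[0]))]
        m = [[v / col_sums[j] for j, v in enumerate(row)] for row in m]
    return m

# Raw (hypothetical) block-sorting scores; Sinkhorn turns them into a
# soft permutation whose rows and columns each sum to ~1.
scores = [[4.0, 1.0],
          [1.0, 4.0]]
p = sinkhorn(scores)
```

The resulting matrix acts as a differentiable relaxation of a hard permutation, which is what lets the model learn how to sort blocks of the sequence before applying local attention.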