An action that automatically compresses AND optimizes images in PRs? Yes, please @calibreapp! https://t.co/tKZ91QcaNP pic.twitter.com/XLCl0a1aIE
— GitHub (@github) November 4, 2021
Oh, rad: did you know that you can use functions as values in a Python dictionary? 🐍
You can store a lambda as a dictionary value under a key, then look it up, pass it arguments, and evaluate it.
😘 Unsolicited {dict} pic: pic.twitter.com/CwiPKfNuIU
— 👩‍💻 Paige Bailey #BlackLivesMatter (@DynamicWebPaige) November 4, 2021
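A minimal sketch of the pattern in the tweet above: named functions and lambdas stored as dictionary values, looked up by key and then called. The `ops` name and the operations are illustrative.

```python
# Functions are first-class objects in Python, so they can be dict values.
def add(a, b):
    return a + b

ops = {
    "add": add,                 # named function as a value
    "mul": lambda a, b: a * b,  # lambda as a value
}

# Look up by key, then pass arguments and evaluate.
print(ops["add"](2, 3))  # 5
print(ops["mul"](2, 3))  # 6
```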
VLMO: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts
abs: https://t.co/Rv9o8aFIdI
introduces the Mixture-of-Modality-Experts Transformer, where each block contains a pool of modality-specific experts and a shared self-attention layer pic.twitter.com/4k0YFlvgsR
— AK (@ak92501) November 4, 2021
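A hedged PyTorch sketch of the block structure the abstract describes: shared self-attention followed by a per-modality feed-forward expert. The dimensions, modality names, and routing by an explicit modality tag are my assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MoMEBlock(nn.Module):
    """One transformer block: shared self-attention + modality-specific FFN experts."""

    def __init__(self, dim=256, heads=4, modalities=("vision", "language", "vl")):
        super().__init__()
        # Self-attention is shared across all modalities.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # Pool of feed-forward experts, one per modality.
        self.experts = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for m in modalities
        })

    def forward(self, x, modality):
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]                   # shared self-attention layer
        x = x + self.experts[modality](self.norm2(x))   # modality-specific expert
        return x

block = MoMEBlock()
tokens = torch.randn(2, 16, 256)                         # (batch, seq, dim)
print(block(tokens, "vision").shape)                     # torch.Size([2, 16, 256])
```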
StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN
abs: https://t.co/TGdEjthZlk
github: https://t.co/7q09qdMTdf pic.twitter.com/sGcg5imUAM
— AK (@ak92501) November 3, 2021
Can Vision Transformers Perform Convolution?
abs: https://t.co/rsHhON89sV
a single ViT layer with image patches as the input can perform any convolution operation constructively, where the multi-head attention mechanism and the relative positional encoding play essential roles pic.twitter.com/Qw1RqqEfjV
— AK (@ak92501) November 3, 2021
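A toy numpy illustration of the claim, under strong assumptions of mine: a single channel, stride 1, and attention taken to its hard one-hot limit so that each "head" attends to exactly one relative offset (the role the paper assigns to relative positional encoding); the per-head weights then play the part of the convolution kernel.

```python
import numpy as np

H = W = 6  # spatial grid of 1x1 "patches", single channel
K = 3      # convolution kernel size
rng = np.random.default_rng(0)
kernel = rng.standard_normal((K, K))
x = rng.standard_normal((H, W))

# Reference: plain 2D cross-correlation with zero padding, stride 1.
pad = K // 2
xp = np.pad(x, pad)
ref = np.zeros_like(x)
for i in range(H):
    for j in range(W):
        ref[i, j] = np.sum(xp[i:i + K, j:j + K] * kernel)

# "Attention" version: one head per relative offset. Each head's attention
# pattern is a hard one-hot lookup of the token at that offset (the limit of
# softmax attention with large relative position biases).
N = H * W
out = np.zeros(N)
for di in range(K):
    for dj in range(K):
        head = np.zeros(N)
        for idx in range(N):
            i, j = divmod(idx, W)
            head[idx] = xp[i + di, j + dj]  # one-hot attention lookup
        out += kernel[di, dj] * head       # per-head value weight = kernel entry

# The K*K heads together reproduce the convolution exactly.
assert np.allclose(out.reshape(H, W), ref)
```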
This blog will examine why distributed training is important and how you can use PyTorch Lightning with Ray to enable multi-node training and automatic cluster configuration with minimal code changes. Read more below: https://t.co/xnpj3A98sv
— PyTorch (@PyTorch) November 2, 2021
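A hedged sketch of the pattern the post describes, based on the ray_lightning plugin's documented usage from around this time; treat `RayPlugin` and its arguments as assumptions and defer to the linked post for the current API.

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from ray_lightning import RayPlugin  # assumption: plugin API as of late 2021

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, _):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)

# Random regression data, just to keep the example self-contained.
loader = DataLoader(TensorDataset(torch.randn(256, 32), torch.randn(256, 1)),
                    batch_size=32)

# The plugin farms training out to Ray workers; on a Ray cluster the same
# script scales to multiple nodes with no other code changes.
trainer = pl.Trainer(max_epochs=2,
                     plugins=[RayPlugin(num_workers=2, use_gpu=False)])
trainer.fit(TinyModel(), loader)
```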
One of the most startling Covid charts I’ve made in a long time, on vaccine inequality:
Rich countries have given out more booster shots in the last 3 months than poor countries have given out total doses all year. @donatopmancini’s story: https://t.co/rW7nsCQk4a pic.twitter.com/LVXpH9jMId
— John Burn-Murdoch (@jburnmurdoch) November 1, 2021
The law of working on machine learning projects:
✅ you are unable to tell if a problem can be solved until you build a baseline
✅ any time estimates you make before building a baseline are fortune-telling
— Radek Osmulski (@radekosmulski) November 1, 2021
"When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute" by Tao Lei @taolei15949106 - Outstanding Paper at EMNLP https://t.co/7IR25d9Sz2
— Sasha Rush (@srush_nlp) October 30, 2021
(Tao's work is always must read. Combines algorithmic cleverness with practical engineering and experiments.)
We've trained a system to answer grade-school math problems with double the accuracy of a fine-tuned GPT-3 model.
Multistep reasoning is difficult for today's language models. We present a new technique to help. https://t.co/JRXUYZOSg7
— OpenAI (@OpenAI) October 29, 2021
FX-based feature extraction is a new TorchVision utility that allows access to intermediate transformations of an input during the forward pass of a PyTorch Module.
Read more below on what makes TorchVision's utility more versatile than existing methods. https://t.co/RRsowlXPwi
— PyTorch (@PyTorch) October 29, 2021
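A minimal sketch of the utility in action; `create_feature_extractor` and the `return_nodes` mapping match TorchVision 0.11's documented API, though the node name `"layer4"` is model-specific (`get_graph_node_names` lists the valid ones for a given model).

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

model = resnet50()
# Map internal graph-node names to output keys; "layer4" is the final
# residual stage of a ResNet-50.
extractor = create_feature_extractor(model, return_nodes={"layer4": "features"})

out = extractor(torch.randn(1, 3, 224, 224))
print(out["features"].shape)  # intermediate activation: [1, 2048, 7, 7]
```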
Just checking out Hummingbird again for a current project (https://t.co/cmDYE1pbER). It combines my two favorite libraries (scikit-learn & PyTorch) and lets you port over existing models (and leverage GPUs) without having to retrain. Amazing stuff! pic.twitter.com/mwl4Pe0Pmn
— Sebastian Raschka (@rasbt) October 29, 2021
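A hedged sketch of the Hummingbird workflow the tweet describes: train a scikit-learn model, then `convert()` it to the PyTorch backend so inference runs as tensor ops (and can move to a GPU) with no retraining. The example model and data are my own choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

# Train an ordinary scikit-learn model.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Port the trained model to PyTorch without retraining.
hb_model = convert(clf, "pytorch")
# hb_model.to("cuda")  # leverage a GPU for inference, if available
print(hb_model.predict(X[:5]))
```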