Eyes Tell All: Irregular Pupil Shapes Reveal GAN-generated Faces
— AK (@ak92501) September 2, 2021
pdf: https://t.co/lgcfsZO7dp
abs: https://t.co/ZV5TJyVAp4 pic.twitter.com/9BYyrVpxAC
∞-former: Infinite Memory Transformer
— AK (@ak92501) September 2, 2021
pdf: https://t.co/4B4sxwGEM5
abs: https://t.co/sBtFRIv7rc
propose the ∞-former, which extends the vanilla transformer with an unbounded long-term memory pic.twitter.com/0oFWpixF3b
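The ∞-former's actual mechanism represents the long-term memory as a continuous signal attended over with continuous attention; the toy NumPy sketch below only illustrates the simpler underlying idea of attending over extra memory slots alongside the current context. All names and sizes here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_memory(q, ctx_kv, mem_kv):
    """Single-head attention over the current context plus an external
    long-term memory (keys and values tied and simply concatenated)."""
    kv = np.concatenate([ctx_kv, mem_kv], axis=0)   # (ctx + mem, d)
    scores = q @ kv.T / np.sqrt(q.shape[-1])        # (queries, ctx + mem)
    return softmax(scores) @ kv                     # (queries, d)

d = 8
q = np.random.randn(4, d)        # 4 query positions
ctx = np.random.randn(16, d)     # current context window
mem = np.random.randn(64, d)     # long-term memory (compressed in the ∞-former)
out = attention_with_memory(q, ctx, mem)
print(out.shape)  # (4, 8)
```

The point of the continuous-memory formulation in the paper is that the memory does not grow with sequence length, unlike the naive concatenation shown here.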
Article in @petapixel about @GoogleAI's SR3 Image Super-resolution method.
— hardmaru (@hardmaru) September 1, 2021
Neural network-based photo upscaling will likely be commonplace in most smartphones in the near future. https://t.co/7TKE3jeTv2
WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU
— AK (@ak92501) September 1, 2021
pdf: https://t.co/6z9WZacJyk
abs: https://t.co/wUX0MThFGc pic.twitter.com/vR6V68Zdgv
It's finally ready:
— Halvar Flake (@halvarflake) August 31, 2021
Prodfiler, a continuous profiler that "just works" -- for C/C++/Rust/Go/JVM/Python/Perl/PHP -- no code change required, no symbols on the machine required, no service restart required.
Check out: https://t.co/EL2DHpoLkl or the blog post below. https://t.co/3HRamFZyhd
AI is influencing the world, and right now most of the actors that have power over AI are in the private sector. This is probably not optimal. Here's some research from @jesswhittles and me on how to change that. https://t.co/vljhYFXyGC
— Jack Clark (@jackclarkSF) August 31, 2021
SummerTime: Text Summarization Toolkit for Non-experts
— AK (@ak92501) August 31, 2021
pdf: https://t.co/RCgAVCFPLx
abs: https://t.co/xuGph3jsdB
github: https://t.co/OYTgNX6u0I pic.twitter.com/txTS9Lg0Cq
Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
— AK (@ak92501) August 31, 2021
pdf: https://t.co/CHAhM6eO0B
abs: https://t.co/Hav0uyFLpH
can convert small language models into better few-shot learners without any prompt engineering pic.twitter.com/qlTIbLvolc
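The core idea behind differentiable prompts is to replace hand-written prompt text with trainable vectors optimized directly in embedding space. The minimal NumPy sketch below shows only that idea (the paper's method also optimizes label-token embeddings); all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d, n_prompt = 100, 16, 5

token_emb = rng.normal(size=(vocab, d))      # frozen language-model embeddings
prompt_emb = rng.normal(size=(n_prompt, d))  # trainable "differentiable prompt"

def embed_with_prompt(token_ids):
    """Prepend continuous prompt vectors to the input token embeddings.
    During training, gradients would flow only into prompt_emb while
    the language model itself stays frozen."""
    x = token_emb[token_ids]                       # (seq, d)
    return np.concatenate([prompt_emb, x], axis=0)  # (n_prompt + seq, d)

ids = np.array([3, 17, 42])
h = embed_with_prompt(ids)
print(h.shape)  # (8, 16)
```

Because the prompt lives in continuous embedding space rather than the discrete vocabulary, it can be tuned by gradient descent with no prompt engineering.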
Hire-MLP: Vision MLP via Hierarchical Rearrangement
— AK (@ak92501) August 31, 2021
pdf: https://t.co/4Vjf9gQZuv
abs: https://t.co/9umSzoO621
achieves 83.4% top-1 accuracy on ImageNet, surpassing previous Transformer-based and MLP-based models with a better accuracy–throughput trade-off pic.twitter.com/Ftsgaaipva
#PyTorch re-implementation of DeepMind's Perceiver IO: A General Architecture for Structured Inputs & Outputs https://t.co/c16ftYKgzJ pic.twitter.com/srwT1TiOaU
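Perceiver IO's key structural idea is to cross-attend arbitrary-size inputs into a small fixed-size latent array, process the latents, then cross-attend out using one query per desired output. A minimal single-head NumPy sketch of that data flow (learned projections, MLPs, and the latent self-attention stack are omitted; all shapes are illustrative assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attend(q, kv):
    """Single-head cross-attention; keys and values are tied for brevity."""
    scores = q @ kv.T / np.sqrt(q.shape[-1])  # (n_q, n_kv)
    return softmax(scores) @ kv               # (n_q, d)

d = 32
inputs  = np.random.randn(1000, d)  # large structured input array
latents = np.random.randn(64, d)    # small learned latent array
queries = np.random.randn(50, d)    # output queries, one per desired output

latents = cross_attend(latents, inputs)   # encode: inputs -> latents
# ... self-attention blocks over the 64 latents would go here ...
outputs = cross_attend(queries, latents)  # decode: latents -> outputs
print(outputs.shape)  # (50, 32)
```

Because the latent array has fixed size, compute in the middle of the network is decoupled from both input and output size, which is what makes the architecture "general" across modalities.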
— Alexandr Kalinin (@alxndrkalinin) August 30, 2021
Helping juniors the right amount can feel like tricky business. Let me throw out some ideas that I've found helpful. And if you have ideas, HMU. I think we all have lots to learn here. A 🧵(in no particular order of importance) ... https://t.co/w9Pt0wzt4o
— JD Long (@CMastication) August 30, 2021
What’s more, if your training is in @PyTorch, you can rather easily add this behaviour with minimal changes to your codebase, using @higherpytorch. https://t.co/U5dFLBXTHZ
— Edward Grefenstette 🇪🇺 (@egrefen) August 28, 2021