A clean TensorFlow 2 implementation of StyleGAN 2 (with pretrained weights for generating landscapes) https://t.co/CT7RknBsQK pic.twitter.com/MuUQZ2mlfw
— François Chollet (@fchollet) December 16, 2019
church of StyleGAN pic.twitter.com/yWbir1AoMG
— Gene Kogan (@genekogan) December 13, 2019
Very cyberpunk: "We instead ... perform pixel-level image translation via CycleGAN to convert the [video of a human demonstrating a task] into a video of a robot, which can then be used to construct a reward function for a model-based RL algorithm." https://t.co/NcoMeFeNkZ
— Miles Brundage (@Miles_Brundage) December 11, 2019
Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models
— ML Review (@ml_review) November 28, 2019
By @giannis_daras @gstsdn @Han_Zhang_ @AlexGDimakis
New sparse attention layer improves SAGAN FID score on ImageNet from 18.65 to 15.94 https://t.co/W48RlTJhi9 https://t.co/GrIwiNPVhH pic.twitter.com/eTn0QvWTNX
I Reimplemented StyleGAN using TensorFlow 2.0 - Including a Web Demo! https://t.co/fFLRIv9D8u
— /MachineLearning (@slashML) November 26, 2019
Prescribed Generative Adversarial Networks
— hardmaru 😷 (@hardmaru) November 1, 2019
Adding noise to the generator's output can help prevent mode collapse common in GANs, and also allows approximate log-likelihood evaluation. It's like killing two birds with one stone!
By @adjiboussodieng et al. https://t.co/q0bgXh3nTW pic.twitter.com/LIQwD9Hi99
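The PresGAN idea above, adding noise with a learnable scale to the generator's output, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the generator here is a stand-in `tanh`, and `sigma`, the latent size, and the Monte Carlo sample count are arbitrary choices. The point is that `x = G(z) + sigma * eps` turns the implicit generator into an explicit conditional density `N(x; G(z), sigma^2 I)`, so an approximate log-likelihood can be estimated by averaging that density over sampled latents.

```python
import numpy as np

def noisy_generator_output(g_of_z, sigma, rng):
    """PresGAN-style trick: add Gaussian noise with scale sigma to G(z)."""
    eps = rng.standard_normal(g_of_z.shape)
    return g_of_z + sigma * eps

def gaussian_log_density(x, mean, sigma):
    """log N(x; mean, sigma^2 I), summed over all dimensions of x."""
    d = x.size
    return (-0.5 * d * np.log(2.0 * np.pi * sigma ** 2)
            - 0.5 * np.sum((x - mean) ** 2) / sigma ** 2)

def approx_log_likelihood(x, generator, zs, sigma):
    """Monte Carlo estimate: log p(x) ~ log mean_z N(x; G(z), sigma^2 I)."""
    log_ps = np.array([gaussian_log_density(x, generator(z), sigma) for z in zs])
    m = log_ps.max()  # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(log_ps - m)))

rng = np.random.default_rng(0)
generator = lambda z: np.tanh(z)  # stand-in for a trained generator network
z_samples = [rng.standard_normal(16) for _ in range(64)]

x = noisy_generator_output(generator(z_samples[0]), sigma=0.1, rng=rng)
ll = approx_log_likelihood(x, generator, z_samples, sigma=0.1)
```

In the actual method `sigma` is learned jointly with the generator via an entropy-regularized objective; the fixed value here is only for illustration.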
SinGAN - Official PyTorch implementation of the paper: "SinGAN: Learning a Generative Model from a Single Natural Image" https://t.co/O96ymf4N6o
— Python Trending (@pythontrending) October 31, 2019
GANs are powerful generative models but they suffer from mode collapse and are hard to evaluate on test data. We developed PresGANs to address these two limitations: https://t.co/dwVM7bDsiD pic.twitter.com/bJJQInCqjH
— DeepMind (@DeepMindAI) October 31, 2019
Introduction to Adversarial Machine Learning
— hardmaru 😷 (@hardmaru) October 29, 2019
A tutorial by @amarunava that presents an overview of current approaches for adversarial attacks and defenses in the literature. https://t.co/8cq3O6jRrG pic.twitter.com/Ek5gfxkyh0
We’ve released pre-trained BigBiGAN representation learning models https://t.co/Rhm94rOuX5
— DeepMind (@DeepMindAI) October 8, 2019
on TF Hub: https://t.co/E18skH2iRC
Try them out in a Colab at: https://t.co/ixQZJaABRJ pic.twitter.com/Hu7vPpLkgL
FaceForensics++: Learning to Detect Manipulated Facial Images
— hardmaru 😷 (@hardmaru) October 7, 2019
They propose a benchmark for DeepFake detection based on DeepFakes, Face2Face, FaceSwap, and NeuralTextures as representative face-manipulation methods, and release a large labeled dataset. https://t.co/L8rSBnBNx9 https://t.co/0mw5CEIIqY pic.twitter.com/M7y8LB8JTU
Facebook AI researchers have created Fashion++, an #AI tool that learns from sample images and then recommends easy changes to a person’s outfit to make it more stylish. https://t.co/tQB2xEe2qw pic.twitter.com/OSSENahTog
— Facebook AI (@facebookai) October 2, 2019