GANs continue to break my brain https://t.co/FD9yna3ho3
— Andrej Karpathy (@karpathy) January 17, 2019
Another great property of style-based generators is that they've finally succeeded in learning multiple levels of abstraction. Different parts of latent vector control high-level meaning vs fine-grained detail https://t.co/CVkF2ZvXVY
— Ian Goodfellow (@goodfellow_ian) January 16, 2019
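The idea Goodfellow is pointing at can be sketched in a few lines: in a style-based generator, every synthesis layer gets its own copy of the latent, so feeding one latent to the coarse layers and another to the fine layers mixes high-level structure with fine detail. The sketch below is illustrative only (layer count, crossover point, and function name are our assumptions, not NVIDIA's code):

```python
import numpy as np

def style_mix(w_coarse, w_fine, n_layers=14, crossover=4):
    """Toy illustration of style mixing: coarse layers (pose, identity)
    take one latent, fine layers (texture, color) take another.
    Names and layer split are illustrative assumptions."""
    per_layer = [w_coarse if i < crossover else w_fine
                 for i in range(n_layers)]
    # One latent vector per synthesis layer, shape (n_layers, dim_w).
    return np.stack(per_layer)
```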
4.5 years of GAN progress on face generation. https://t.co/kiQkuYULMC https://t.co/S4aBsU536b https://t.co/8di6K6BxVC https://t.co/UEFhewds2M https://t.co/s6hKQz9gLz pic.twitter.com/F9Dkcfrq8l
— Ian Goodfellow (@goodfellow_ian) January 15, 2019
Super-resolution GANs for improving the texture resolution of old games: https://t.co/urDjC6eWSc
— Ian Goodfellow (@goodfellow_ian) January 9, 2019
A Style-Based Generator Architecture for Generative Adversarial Networks. @NvidiaAI does it again! https://t.co/W2Av34ta9O pic.twitter.com/wkDJgt8v3q
— hardmaru (@hardmaru) December 13, 2018
These style-based generator results look great: https://t.co/RL825n0yNP pic.twitter.com/k7UtJMTWhM
— Ian Goodfellow (@goodfellow_ian) December 13, 2018
I tried it on a YouTube video clip called “We followed a Waymo self-driving car for miles, here's what we saw” and here's what it generated: https://t.co/mX5AoEz248 pic.twitter.com/bR1043OSmr
— hardmaru (@hardmaru) December 12, 2018
Watching a GAN training in @fastdotai v1. pic.twitter.com/RkkcWcFwYE
— Sylvain Gugger (@GuggerSylvain) December 11, 2018
IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis. They propose a clever way to combine VAE+GAN by reusing the encoder of the VAE as the discriminator, allowing VAE to generate hi-res images. Found this gem at #NeurIPS2018. https://t.co/WQl4DdLyyk pic.twitter.com/hrOpPsJOwT
— hardmaru (@hardmaru) December 4, 2018
Their approach can even find groups of neurons that correspond to GAN artifacts, or “mistakes” that GANs make, and remove them. Although I thought adding more artifacts might be more fun than removing them! pic.twitter.com/UeQTgCcEh1
— hardmaru (@hardmaru) November 29, 2018
BigGanEx Notebook: An introduction to BigGans, latent space and the truncation trick. Moreover, some cool experiments are introduced. Thanks to @ajmooch for helping with some concepts. https://t.co/iA0yeRDwxD
— Zaid Alyafeai (زيد اليافعي) (@zaidalyafeai) November 27, 2018
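The truncation trick mentioned above can be sketched simply: sample the latent from a truncated normal (re-drawing components that fall outside a cutoff) and scale it, trading sample diversity for fidelity. This is a hedged sketch, not BigGAN's actual code; the function name and the 2-sigma cutoff are illustrative choices:

```python
import numpy as np

def truncated_z_sample(batch_size, dim_z, truncation=0.5, seed=0):
    """Sample latents from a truncated normal, then scale by
    `truncation`. Lower values give higher-fidelity but less
    diverse samples. Illustrative sketch only."""
    rng = np.random.RandomState(seed)
    values = rng.randn(batch_size, dim_z)
    # Re-draw any component whose magnitude exceeds 2 std devs.
    mask = np.abs(values) > 2
    while mask.any():
        values[mask] = rng.randn(mask.sum())
        mask = np.abs(values) > 2
    return truncation * values
```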
Hypothesis: whatever your aesthetic, it's somewhere in the multidimensional space of #BigGAN. Challenge: Use @_joelsimon's https://t.co/SdupSc3x77 to find yours. #GANpunk pic.twitter.com/aFJtw8P0dy
— Janelle Shane (@JanelleCShane) November 26, 2018