Imagic: Text-Based Real Image Editing with Diffusion Models
— AK (@_akhaliq) October 18, 2022
abs: https://t.co/xRW6F6w2ZG pic.twitter.com/wGifY74i4w
The @Gradio demo for Stable-Dreamfusion, a PyTorch implementation of text-to-3D DreamFusion powered by Stable Diffusion, is out
— AK (@_akhaliq) October 17, 2022
colab: https://t.co/2Xhl7nXdwr
github: https://t.co/uFDznjXMRn pic.twitter.com/AXKsm4khna
Recently, some have complained about prompting as an approach to NLP.
— Graham Neubig (@gneubig) October 17, 2022
"It's so brittle." "Prompt engineering is hacky." etc.
But there's another way to view it: prompt engineering is another way of tuning the model's parameters, and a human-interpretable one! See https://t.co/1Ya7BHoArR
1/2 pic.twitter.com/Yn3G28CLY9
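The thread's framing can be made concrete with a toy sketch (all names and the stand-in "model" below are hypothetical, not from the thread): prompt templates form a discrete, human-readable hyperparameter space, and "tuning" means searching that space for the template that scores best on a small dev set.

```python
# Toy sketch: prompt templates as a tunable, interpretable hyperparameter.

def accuracy(model, template, dev_set):
    """Score a prompt template by exact-match accuracy on (text, label) pairs."""
    hits = 0
    for text, label in dev_set:
        prediction = model(template.format(text=text))
        hits += prediction == label
    return hits / len(dev_set)

def tune_prompt(model, templates, dev_set):
    """'Train' by searching prompt space instead of parameter space."""
    return max(templates, key=lambda t: accuracy(model, t, dev_set))

# Stand-in "model": labels a review positive if the prompt contains "great".
def toy_model(prompt):
    return "positive" if "great" in prompt else "negative"

templates = [
    "Review: {text} Sentiment:",
    "Is the following review positive or negative? {text}",
]
dev_set = [("great movie", "positive"), ("terrible plot", "negative")]
best = tune_prompt(toy_model, templates, dev_set)
```

Unlike gradient updates to opaque weights, the chosen "parameter" here is a readable string, which is the interpretability point the thread makes.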
Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning
— AK (@_akhaliq) October 17, 2022
abs: https://t.co/tL0kvRqEpn pic.twitter.com/pMyAyPFUNk
This #StableDiffusion add-on for Blender looks amazing. @AI_Render renders an AI-generated image based on a text prompt and your scene in Blender. https://t.co/v3IvXBkcpi pic.twitter.com/YTVqCFvekf
— hardmaru (@hardmaru) October 16, 2022
Prompt-to-Prompt: Latent Diffusion and Stable Diffusion implementation with @huggingface diffusers is out
— AK (@_akhaliq) October 16, 2022
github: https://t.co/B4YcBt7vgo pic.twitter.com/QoIsax3xB1
LAION-5B has 5 billion natural captions, which provide a lot of information, but could synthetic captions complement them? @LAION_AI released LAION-COCO, a dataset of 600M high-quality generated captions for publicly available images, in MS COCO’s format. https://t.co/YxqFBol8Ug
— hardmaru (@hardmaru) October 15, 2022
Early in my career, I thought the path to success was to be super smart. Pretty soon I realized that the majority of my peers were smarter than me, so that plan went out the window. What saved me was a realization that I'll call the Liam Neeson principle. pic.twitter.com/bAShVreD6j
— Arvind Narayanan @randomwalker@mastodon.social (@random_walker) October 14, 2022
Mass Editing Memory in a Transformer
— AK (@_akhaliq) October 14, 2022
abs: https://t.co/AYwoYmmmaw
project page: https://t.co/Xsnrh3EtJ8
github: https://t.co/ufsbkROY2d pic.twitter.com/ZNJyW1xOn4
Scalable Neural Video Representations with Learnable Positional Features
— AK (@_akhaliq) October 14, 2022
abs: https://t.co/r8ofrXivQ0
project page: https://t.co/nHTBO27iOG pic.twitter.com/4I7YRnIzGT
Cloud TPUs are powerful hardware for building large models & have enabled many research successes. We enable scaling up PyTorch models to 10B+ parameters on Cloud TPU with a new Fully Sharded Data Parallel (FSDP) interface in PyTorch/XLA.
— PyTorch (@PyTorch) October 13, 2022
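The memory-saving idea behind FSDP can be illustrated with a framework-free toy sketch (pure Python, hypothetical names; the actual PyTorch/XLA interface is not shown here): each of N workers persistently stores only a 1/N shard of the parameters and reassembles the full tensor only for the brief moment it is needed, cutting peak per-worker parameter memory roughly N-fold.

```python
# Toy sketch of the fully-sharded idea: persistent shards, transient full copy.

def shard(params, num_workers):
    """Split a flat parameter list into num_workers contiguous shards."""
    size = -(-len(params) // num_workers)  # ceiling division
    return [params[i * size:(i + 1) * size] for i in range(num_workers)]

def all_gather(shards):
    """Reassemble the full parameter list from every worker's shard."""
    return [p for s in shards for p in s]

params = list(range(8))      # stand-in for model weights
shards = shard(params, 4)    # each worker keeps only 2 of the 8 values
full = all_gather(shards)    # materialized transiently for compute, then freed
```

In real FSDP the all-gather happens per layer during forward/backward and the full tensor is discarded immediately afterward, which is what makes 10B+ parameter models fit on each device's memory.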
It's a joke that all NLP talks must include this graph.
— Sasha Rush (@srush_nlp) October 13, 2022
But if you are a student it is a bit intimidating. How can you become an expert in where we are going if you can barely run BERT?
I asked Twitter for specific advice on what you might focus on: pic.twitter.com/JUpevq34pE