GENIE: Large Scale Pre-training for Text Generation with Diffusion Model
— AK (@_akhaliq) December 23, 2022
abs: https://t.co/7jEmtNaRzT pic.twitter.com/bhCrtjJYFH
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM https://t.co/hpKlpbDiLE #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #PyTorch
— PyTorch Best Practices (@PyTorchPractice) December 22, 2022
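The linked repository builds the full RLHF pipeline (a reward model plus PPO fine-tuning) on top of a PaLM implementation. As a rough illustration of one core piece, and not the repo's actual API, here is a minimal sketch of the reward-model stage with hypothetical names: a scalar head is trained so that human-preferred responses score higher than rejected ones.

```python
# Minimal reward-model sketch for RLHF (hypothetical names, not the
# linked repo's API). A scalar head scores responses; the pairwise
# Bradley-Terry loss pushes preferred responses above rejected ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_dim: int):
        super().__init__()
        self.backbone = backbone                    # pretrained LM trunk
        self.reward_head = nn.Linear(hidden_dim, 1)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Assumes the backbone returns hidden states of shape
        # (batch, seq_len, hidden_dim); the last token summarizes the sequence.
        hidden = self.backbone(input_ids)
        return self.reward_head(hidden[:, -1, :]).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Maximize the log-sigmoid of the reward margin between the response
    # a human preferred and the one they rejected.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The subsequent PPO stage then fine-tunes the language model to maximize this learned reward, typically with a KL penalty that keeps it close to the original model.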
X-Decoder: Generalized Decoding for Pixel, Image and Language
— AK (@_akhaliq) December 22, 2022
Hugging Face demo: https://t.co/TanVA1vBow
abs: https://t.co/3AtipzRWJT
project page: https://t.co/aqVdA3akGY
github: https://t.co/GNWGTBKd65 pic.twitter.com/0o03447dqy
Optimizing Prompts for Text-to-Image Generation
— AK (@_akhaliq) December 20, 2022
abs: https://t.co/RIagcSFRgx
github: https://t.co/aqvAKuyryJ
Hugging Face: https://t.co/L0tuCre89O pic.twitter.com/lNrKKhiUBN
Pretty eerie: AI models learn to reflect user views back at them (since, I figure, they get rewarded with low loss for modeling the _context_ of whatever emitted the input tokens). Pretty weird to see it in the wild. LLMs seek to reflect the views of people that talk to them. https://t.co/gD5qbAwRUb
— Jack Clark (@jackclarkSF) December 19, 2022
Teaching Small Language Models to Reason
— AK (@_akhaliq) December 19, 2022
abs: https://t.co/FVIko6wVez pic.twitter.com/bhU8maOlVk
ALERT: Adapting Language Models to Reasoning Tasks
— AK (@_akhaliq) December 19, 2022
abs: https://t.co/fC9hNJpKvc pic.twitter.com/TSbYLZYO2n
NeRF-Art: Text-Driven Neural Radiance Fields Stylization
— AK (@_akhaliq) December 16, 2022
abs: https://t.co/VVU1EpjrF5
project page: https://t.co/6N4wAxEP4F
github: https://t.co/YyCouPYBdy pic.twitter.com/ErGZgVnPqc
"We present gen. Parameter-Efficient Finetuning framework for tuning LLM with only 𝟬.𝟭%-𝟬.𝟮% of parameters using mix of adaptation modules -> achieve new 𝗦𝗢𝗧𝗔 > standard tuning on both NLU & NLG tasks.
— Leo Boytsov (@srchvrs) December 16, 2022
Paper: https://t.co/yE5YeSBu8m
Code&Models: https://t.co/rT6Q0vES45
Image-and-Language Understanding from Pixels Only
— AK (@_akhaliq) December 16, 2022
abs: https://t.co/E9fOot76FZ pic.twitter.com/H3x6GMMXqh
REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory
— AK (@_akhaliq) December 13, 2022
abs: https://t.co/8e5JXjSZlS pic.twitter.com/Wp0HKjNOCg
CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet
— AK (@_akhaliq) December 13, 2022
abs: https://t.co/T78PJg7rHK pic.twitter.com/SBL4DksY9z
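The recipe the title points at is simply to fine-tune CLIP's image encoder end-to-end on ImageNet. A minimal sketch of that setup, using Hugging Face's CLIPVisionModel with an assumed linear head and generic hyperparameters rather than the paper's actual configuration:

```python
# Generic sketch of fine-tuning CLIP's vision encoder on ImageNet with a
# linear classification head. Hyperparameters and head are assumptions,
# not the paper's training setup.
import torch
import torch.nn as nn
from transformers import CLIPVisionModel

class CLIPClassifier(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.encoder = CLIPVisionModel.from_pretrained(
            "openai/clip-vit-base-patch32")        # CLIP ViT-B backbone
        hidden = self.encoder.config.hidden_size   # 768 for ViT-B
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        # pooler_output is the encoder's pooled [CLS] representation.
        pooled = self.encoder(pixel_values=pixel_values).pooler_output
        return self.head(pooled)

# Standard end-to-end fine-tuning with cross-entropy.
model = CLIPClassifier()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.1)
loss_fn = nn.CrossEntropyLoss()
```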