Text-Free Learning of a Natural Language Interface for Pretrained Face Generators
— AK (@_akhaliq) September 9, 2022
abs: https://t.co/MdUAsWgJ8e pic.twitter.com/NnH4V5D9Qf
On the Effectiveness of Compact Biomedical Transformers
— AK (@_akhaliq) September 8, 2022
abs: https://t.co/iwnSK5HSL7
huggingface models: https://t.co/vjdmrnfhgl
github: https://t.co/yerzCEukV8 pic.twitter.com/9h6c6RbdBq
Stable Diffusion web UI with Outpainting, Inpainting, Prompt matrix, Upscale, Textual Inversion and many more features
— AK (@_akhaliq) September 5, 2022
github: https://t.co/F1T1qWMojs pic.twitter.com/hmjU4j1bic
Petals: Collaborative Inference and Fine-tuning of Large Models
— AK (@_akhaliq) September 5, 2022
abs: https://t.co/hE1Yx0P4iI
project page: https://t.co/Kgz6P4jZZx
github: https://t.co/dqcT4Ue3hh pic.twitter.com/5E1o2nTXsq
Faithful Reasoning Using Large Language Models
— AK (@_akhaliq) August 31, 2022
abs: https://t.co/PKQDgzCdF3 pic.twitter.com/Mia8kRS8iG
There are many additional methods for concept drift detection. To highlight a few next to the statistical tests mentioned yesterday:
— Sebastian Raschka (@rasbt) August 30, 2022
- @zacharylipton et al.'s black box shift estimation: https://t.co/tLCgIJW8mg
- @tdietterich's idea of training an old-vs-new classifier
[1/3] https://t.co/ZMT7bgm6Gr
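The old-vs-new classifier idea mentioned above can be sketched simply: label the reference window "old" and the recent window "new", train a classifier to tell them apart, and treat discrimination well above chance as evidence of drift. Below is a minimal numpy sketch using logistic regression trained by gradient descent, with synthetic example data (the data, function name, and hyperparameters are illustrative, not from the thread); a real deployment would score on a held-out split rather than the training data.

```python
import numpy as np

def domain_classifier_auc(X_old, X_new, iters=500, lr=0.1):
    """Train logistic regression to distinguish 'old' from 'new' samples.

    AUC near 0.5 -> the two windows look alike (no detectable drift);
    AUC well above 0.5 -> the classifier can tell them apart (drift).
    Minimal sketch: full-batch gradient descent, scored in-sample.
    """
    X = np.vstack([X_old, X_new])
    y = np.r_[np.zeros(len(X_old)), np.ones(len(X_new))]
    X = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        z = np.clip(X @ w, -30, 30)           # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient step
    scores = X @ w
    old_s, new_s = scores[y == 0], scores[y == 1]
    # AUC = P(random 'new' sample scores above random 'old' sample)
    return (new_s[:, None] > old_s[None, :]).mean()

rng = np.random.default_rng(0)
# Same distribution: AUC should hover near 0.5.
no_drift = domain_classifier_auc(rng.normal(0, 1, (200, 5)),
                                 rng.normal(0, 1, (200, 5)))
# Shifted mean: the classifier separates the windows easily.
drift = domain_classifier_auc(rng.normal(0, 1, (200, 5)),
                              rng.normal(1.5, 1, (200, 5)))
```

A convenient property of this approach: it needs no labels for the target task, only raw feature vectors from the two time windows.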
Open-Set Semi-Supervised Object Detection
— AK (@_akhaliq) August 30, 2022
abs: https://t.co/mW9vWZ6uEX
project page: https://t.co/rplP961v7U pic.twitter.com/g93CV7CW06
stable_diffusion.openvino: Implementation of Text-To-Image generation using Stable Diffusion on Intel CPU by @bes_dev
— AK (@_akhaliq) August 28, 2022
github: https://t.co/aDLQKtK4hW pic.twitter.com/4JxJvP6gEJ
Stable Diffusion Tutorial: GUI, Better Results, Easy Setup, text2image and image2image
— AK (@_akhaliq) August 27, 2022
video: https://t.co/AkBEJfvtrw
github: https://t.co/X5APJg4QAV pic.twitter.com/hGacGDufaY
PEER: A Collaborative Language Model
— AK (@_akhaliq) August 25, 2022
abs: https://t.co/lEzymkMSph pic.twitter.com/cq28sB1zrz
Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization
— AK (@_akhaliq) August 23, 2022
abs: https://t.co/Xaspq4bZRP
the model is parameter-efficient: it outperforms the 600x larger PaLM 540B on XSum and the fine-tuned 200x larger GPT-3 175B on SAMSum pic.twitter.com/h3ZyLAMRLQ
We release LLM.int8(), the first 8-bit inference method that halves memory use without degrading performance for 175B-parameter models, by exploiting emergent properties. Read more:
— Tim Dettmers (@Tim_Dettmers) August 17, 2022
Paper: https://t.co/eNpinXS0Z5
Software: https://t.co/hBuVyQhLqS
Emergence: https://t.co/oPGRhACNEe pic.twitter.com/vNWxrDHlOh
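The tweet names the result but not the mechanics. As a rough illustration, here is a minimal numpy sketch of row-wise absmax int8 quantization, the basic building block such 8-bit schemes start from; this is not the authors' implementation, and the actual LLM.int8() method additionally uses vector-wise scaling plus a mixed-precision decomposition that keeps emergent outlier feature dimensions in fp16.

```python
import numpy as np

def absmax_quantize(W):
    """Quantize a float32 matrix to int8, one scale per row (absmax).

    Each row is scaled so its largest absolute value maps to 127,
    then rounded to int8. Storage drops 4x vs float32 (2x vs float16).
    """
    scale = np.abs(W).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0                   # guard all-zero rows
    W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return W_int8, scale

def dequantize(W_int8, scale):
    """Recover an approximate float32 matrix from int8 values + scales."""
    return W_int8.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)  # toy weight matrix
W_q, s = absmax_quantize(W)
W_hat = dequantize(W_q, s)
err = np.abs(W - W_hat).max()                     # worst-case rounding error
```

In practice you would not hand-roll this: the released bitsandbytes library (the "Software" link above) provides the 8-bit linear layers, and the paper's point is that the outlier-aware decomposition is what keeps quality intact at the 175B scale, where naive quantization like this sketch degrades.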