stable_diffusion.openvino: Implementation of Text-To-Image generation using Stable Diffusion on Intel CPU by @bes_dev
— AK (@_akhaliq) August 28, 2022
github: https://t.co/aDLQKtK4hW pic.twitter.com/4JxJvP6gEJ
Stable Diffusion Tutorial: GUI, Better Results, Easy Setup, text2image and image2image
— AK (@_akhaliq) August 27, 2022
video: https://t.co/AkBEJfvtrw
github: https://t.co/X5APJg4QAV pic.twitter.com/hGacGDufaY
Stable Diffusion web UI
— AK (@_akhaliq) August 26, 2022
A browser interface based on @Gradio library for Stable Diffusion
github: https://t.co/y4KGrprEKd pic.twitter.com/HzRVx9chYj
Forecasting Future World Events with Neural Networks
— AK (@_akhaliq) July 1, 2022
abs: https://t.co/tD8F0ZC1rC
github: https://t.co/v8HZgye0ZH
A dataset for measuring the ability of neural networks to forecast future world events, containing thousands of forecasting questions and an accompanying news corpus pic.twitter.com/xsQnxfgdia
Wanted to highlight an optimizer, by Xi-Lin Li, which I believe is the most promising way to get second-order methods into ML, but I think didn't get much attention because it came from a lone signal-processing researcher rather than an ML lab. https://t.co/1ASVzhZ2DK
— Yaroslav Bulatov (@yaroslavvb) June 29, 2022
(1/5)
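The optimizer highlighted above (Xi-Lin Li's PSGD) learns a gradient preconditioner online; the sketch below is not PSGD itself, just a minimal pure-Python illustration (using plain Newton's method on a made-up 1-D quadratic) of why curvature information makes second-order methods converge so quickly.

```python
# Illustration only: the core idea behind second-order optimization,
# shown with a plain Newton step on a 1-D quadratic. This is NOT
# Xi-Lin Li's PSGD algorithm, which learns a preconditioner online
# rather than using the exact Hessian.

def newton_step(x, grad, hess):
    """One second-order update: scale the gradient by inverse curvature."""
    return x - grad(x) / hess(x)

# Minimize f(x) = (x - 3)^2 + 1 (minimum at x = 3).
grad = lambda x: 2.0 * (x - 3.0)
hess = lambda x: 2.0

x = 0.0
x = newton_step(x, grad, hess)
# For a quadratic, a single Newton step lands exactly on the minimum.
```

For a quadratic objective one step suffices; real second-order methods like PSGD spend their effort approximating that curvature scaling cheaply in high dimensions.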
Implementation of Parti, Google's pure attention-based text-to-image neural network, in Pytorch https://t.co/fAhIdq5p6v #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #pytorch
— PyTorch Best Practices (@PyTorchPractice) June 25, 2022
Huge contribution to the research community by releasing the 20B parameter checkpoint by @m__dehghani and @YiTayML from @GoogleAI ❤️
— Hugging Face (@huggingface) June 23, 2022
Also a big thank you to @DanielHesslow for contributing the model to Transformers🤗 https://t.co/KC7oi4eHSP
Global Context Vision Transformers
— AK (@_akhaliq) June 22, 2022
abs: https://t.co/d6go0yv7fu
github: https://t.co/rUYFs09ReC
On the ImageNet-1K classification dataset, the tiny, small and base variants of GC ViT with 28M, 51M and 90M parameters achieve 83.2%, 83.9% and 84.4% Top-1 accuracy, respectively pic.twitter.com/XKoJAvUcYm
AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
— AK (@ak92501) May 27, 2022
abs: https://t.co/VHJYoC4ewJ
project page: https://t.co/goVCa1VXbi
github: https://t.co/AXYYJr2vcd pic.twitter.com/d1gQwwgVMw
Our review of 50+ years of #forecast combinations is now out. @YanfeiKang @f3ngli @Xia0qianWang https://t.co/MTBhI2trNe
— Rob J Hyndman (@robjhyndman) May 10, 2022
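The simplest method in the forecast-combination literature that review surveys is the equal-weight (simple average) combination. A minimal sketch, with made-up forecast values for illustration:

```python
# Equal-weight forecast combination: average the individual forecasts
# point-by-point across models. Forecast values below are hypothetical.

def combine_equal_weight(forecasts):
    """Average a list of equal-length forecast sequences, step by step."""
    horizon = len(forecasts[0])
    n = len(forecasts)
    return [sum(f[h] for f in forecasts) / n for h in range(horizon)]

# Two hypothetical 3-step-ahead forecasts of the same series.
model_a = [10.0, 12.0, 14.0]
model_b = [12.0, 14.0, 18.0]

combined = combine_equal_weight([model_a, model_b])
# combined == [11.0, 13.0, 16.0]
```

Despite its simplicity, the equal-weight average is a famously strong baseline that more elaborate weighting schemes often fail to beat.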
.@Gradio Demo for CaptchaCracker, an open source Python library that provides functions to create and apply deep learning models for Captcha Image recognition on @huggingface Spaces
— AK (@ak92501) May 9, 2022
demo: https://t.co/jXbFUsSgqx
github: https://t.co/tEFCEB43uM pic.twitter.com/MsUisP4wBd
The Annotated Transformer [v2022]
— Sasha Rush (@srush_nlp) May 2, 2022
A community refresh of the original blog with modern PyTorch and data-science tools. https://t.co/6jydMTO8if
(Led by @austinvhuang, @subramen, @JonathanSumDL, @eKhalid_, @BlancheMinerva ) pic.twitter.com/mMhcdw5Mbu
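The equation at the heart of the model the Annotated Transformer walks through is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A pure-Python sketch using lists instead of tensors, for illustration only (the blog itself uses PyTorch):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of row vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query with every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Each output row is a weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Tiny example: one query attending over two key/value pairs.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
```

The query aligns more with the first key, so the output is a convex combination of the two value rows weighted toward [1.0, 2.0].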