DiffusER: Discrete Diffusion via Edit-based Reconstruction
abs: https://t.co/8bU1Ay4Rai
— AK (@_akhaliq) November 1, 2022
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
abs: https://t.co/rm5oDBixvP
— AK (@_akhaliq) November 1, 2022
A lot has been speculated about TikTok's recommendations. This is the first paper I've read by the team, and it has many interesting details: expirable embeddings, parameter server, online training... Good #recsys stuff https://t.co/r8UBCrnZi8
— Xavier Amatriain (@xamat) October 31, 2022
MagicMix: Semantic Mixing with Diffusion Models
abs: https://t.co/ad6ntMsw5Q
project page: https://t.co/HdP673D8qT
— AK (@_akhaliq) October 31, 2022
ERNIE-ViLG 2.0: Improving Text-to-Image Diffusion Model with Knowledge-Enhanced Mixture-of-Denoising-Experts
abs: https://t.co/TYNxPv2K5F
achieves a zero-shot FID-30k score of 6.75 on the MS-COCO dataset
— AK (@_akhaliq) October 28, 2022
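For context on the metric above: FID (Fréchet Inception Distance) fits Gaussians to Inception-network features of real and generated images and measures the distance between them; lower is better, and "FID-30k" means it is computed over 30k samples. The standard definition is:

$$ \mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right) $$

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the feature mean and covariance for real and generated images, respectively.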
DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models
abs: https://t.co/nuhVrnsDw9
github: https://t.co/RTp4nNDYVR
— AK (@_akhaliq) October 27, 2022
Lafite2: Few-shot Text-to-Image Generation
abs: https://t.co/WBV6jmciIl
— AK (@_akhaliq) October 26, 2022
Is it possible to transform sensitive user data in a way that guarantees fairness while still being useful for (unknown) downstream applications?
Introducing LASSI: a new provably fair representation learning method for high-dimensional data: https://t.co/ww1I6qd4jA
— DeepMind (@DeepMind) October 25, 2022
Towards Real-Time Text2Video via CLIP-Guided, Pixel-Level Optimization
abs: https://t.co/qeHhRHxf4k
project page: https://t.co/ucoHXyVPAt
— AK (@_akhaliq) October 25, 2022
Diffuser: Efficient Transformers with Multi-hop Attention Diffusion for Long Sequences
abs: https://t.co/e0ZSrPRoeH
— AK (@_akhaliq) October 24, 2022
It's always exciting to add a new method to the deep tabular list: https://t.co/pWOAwqiTS0.
Just read through the paper. It's an intriguing fresh take on deep learning for tabular data, combining approximate Bayesian inference and transformer tokenization. [1/6] https://t.co/iJeYGFC9dG
— Sebastian Raschka 📚 (@rasbt) October 23, 2022
Wow, below is the 2nd paper shared today on deep tabular methods!
The deep-learning-for-tabular-data research field is on fire today 🔥!
(PS: And don't forget about diffusion models for tabular data: https://t.co/CaEWyMyqnK 😁)
(PPS: My reviews will follow soon 😊) https://t.co/xupH4Znh3e
— Sebastian Raschka 📚 (@rasbt) October 21, 2022