XDoc: Unified Pre-training for Cross-Format Document Understanding
— AK (@_akhaliq) October 7, 2022
abs: https://t.co/bHZuRhbzDP pic.twitter.com/CQudsSPM4e
Decomposed Prompting: A Modular Approach for Solving Complex Tasks
— AK (@_akhaliq) October 6, 2022
abs: https://t.co/yhdw7lTUmr pic.twitter.com/icyxZt9k21
Evaluate & Evaluation on the Hub: Better Best Practices for Data and Model Measurement
— AK (@_akhaliq) October 6, 2022
abs: https://t.co/zPhPvQm045 pic.twitter.com/MJRKsXSFzV
Ask Me Anything: A simple strategy for prompting language models
— AK (@_akhaliq) October 6, 2022
abs: https://t.co/FxJBE7cLG1
github: https://t.co/GH31qYBcY9 pic.twitter.com/unUnTITbDE
New blog post: Collective Intelligence for Deep Learning
— hardmaru (@hardmaru) October 4, 2022
Recently, @yujin_tang and I published a paper about how ideas like swarm behavior, self-organization, and emergence are gaining traction in deep learning.
I wrote a blog post summarizing the key ideas: https://t.co/S644KjM20e pic.twitter.com/1TYmuMPbKA
UniCLIP: Unified Framework for Contrastive Language–Image Pre-training
— AK (@_akhaliq) September 28, 2022
abs: https://t.co/7s5k4jYDIL pic.twitter.com/Ky3g54UFhj
Promptagator: Few-shot Dense Retrieval From 8 Examples
— AK (@_akhaliq) September 26, 2022
abs: https://t.co/YSYlD5IQHU
LLM prompting with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO, like ColBERT v2, by more than 1.2 nDCG on average across 11 retrieval sets. pic.twitter.com/X4sqN9RHtY
After some more experiments with OpenAI's Whisper, I continue to be impressed! 👌
— Sebastian Raschka (@rasbt) September 25, 2022
So impressed that I let my GPUs do some crunching over the wknd.
Tada, all subtitles (closed-captions) of the 170 vids in my DL course are whisper-based now 🚀 https://t.co/8FhMfL6v3N
Script 👇
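The script itself is only linked in the tweet, but a minimal sketch of Whisper-based subtitle generation with the open-source openai-whisper package might look like the following; the "medium" model size and the lecture01.mp4 / lecture01.srt filenames are placeholder assumptions, not the author's actual setup.

```python
# Minimal sketch of whisper-based subtitle (.srt) generation.
# Assumes the open-source `openai-whisper` package: pip install openai-whisper
import whisper


def to_srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,345."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


# Model size is an assumption; any Whisper checkpoint works here.
model = whisper.load_model("medium")

# "lecture01.mp4" is a hypothetical input video.
result = model.transcribe("lecture01.mp4")

# Write the timestamped segments out as closed captions.
with open("lecture01.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{to_srt_timestamp(seg['start'])} --> {to_srt_timestamp(seg['end'])}\n")
        f.write(f"{seg['text'].strip()}\n\n")
```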
MEGA is a new method for modeling long sequences based on the surprisingly simple technique of taking the moving average of embeddings.
— Graham Neubig (@gneubig) September 23, 2022
Excellent results, outperforming strong competitors such as S4 on most tasks! Strongly recommend that you check it out: https://t.co/Y07MSd0hhc https://t.co/9xbmaF5XCr pic.twitter.com/pX01RctxDN
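As a rough illustration of that core ingredient only, here is a sketch of a causal exponential moving average over a sequence of token embeddings. This is not Mega's actual architecture (which uses a damped, multi-dimensional EMA with learned decay feeding a gated attention block); the alpha value and toy shapes below are arbitrary assumptions.

```python
# Sketch of the moving-average idea behind Mega: smooth a sequence of
# embeddings with a causal exponential moving average (EMA).
import numpy as np


def ema_embeddings(x: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Smooth a (seq_len, dim) embedding sequence with a causal EMA.

    y_t = alpha * x_t + (1 - alpha) * y_{t-1}, so each position carries an
    exponentially decaying summary of all earlier positions.
    """
    y = np.zeros_like(x)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = alpha * x[t] + (1.0 - alpha) * y[t - 1]
    return y


# Toy example: 6 token embeddings of dimension 4.
tokens = np.random.randn(6, 4)
smoothed = ema_embeddings(tokens, alpha=0.3)
print(smoothed.shape)  # (6, 4)
```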
Mega: Moving Average Equipped Gated Attention
— AK (@_akhaliq) September 23, 2022
abs: https://t.co/HxcNRQFg16 pic.twitter.com/gFuvKJOYvy
VToonify: Controllable High-Resolution Portrait Video Style Transfer
— AK (@_akhaliq) September 23, 2022
abs: https://t.co/ooiRHTRVf3
project page: https://t.co/juAGeGxoFK
github: https://t.co/JvFUrT3uDZ pic.twitter.com/IQx2b6DXfd
Reading through the OpenAI Whisper paper https://t.co/3PmWvQNCFs, some notes: pic.twitter.com/QVeqaGVvsV
— Andrej Karpathy (@karpathy) September 22, 2022