Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization
abs: https://t.co/Xaspq4bZRP
The model is parameter-efficient in that it outperforms the 600x larger PaLM 540B on XSum, and the fine-tuned 200x larger GPT-3 175B on SAMSum.
— AK (@_akhaliq) August 23, 2022