Tweeted by @srchvrs
"We present gen. Parameter-Efficient Finetuning framework for tuning LLM with only 𝟬.𝟭%-𝟬.𝟮% of parameters using mix of adaptation modules -> achieve new 𝗦𝗢𝗧𝗔 > standard tuning on both NLU & NLG tasks.
— Leo Boytsov (@srchvrs) December 16, 2022
Paper: https://t.co/yE5YeSBu8m
Code & Models: https://t.co/rT6Q0vES45
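To give a feel for how a 0.1%-0.2% trainable fraction can arise, here is a small back-of-the-envelope sketch: freeze the base model and count only the weights of bottleneck adapter modules added per layer. All sizes below (hidden size, layer count, base-model size, bottleneck width) are illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch: estimate the trainable-parameter fraction when
# a base model is frozen and small bottleneck adapters are trained.
# All dimensions are assumed for illustration, not taken from the paper.

def adapter_params(hidden: int, bottleneck: int) -> int:
    # One adapter = down-projection + up-projection, each with a bias.
    down = hidden * bottleneck + bottleneck
    up = bottleneck * hidden + hidden
    return down + up

hidden = 1024                # assumed transformer hidden size
layers = 24                  # assumed number of transformer layers
base_params = 350_000_000    # assumed frozen base-model size (~350M)

bottleneck = 8               # small adapter bottleneck dimension
trainable = layers * adapter_params(hidden, bottleneck)
fraction = trainable / base_params
print(f"trainable: {trainable:,} ({fraction:.2%} of base)")
```

With these assumed sizes, the trainable adapters amount to roughly a tenth of a percent of the base model, in line with the 0.1%-0.2% range the tweet mentions; the actual framework mixes several kinds of adaptation modules, which this sketch does not model.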