New work on Efficient Transformers in RL using Actor-Learner Distillation: compressing a large "Learner model" online into a tractable "Actor model" in a distributed RL setting with partially observable environments. https://t.co/jnExWiPabS

with E. Parisotto #ICLR2021 pic.twitter.com/kC5SSRsWrn

— Russ Salakhutdinov (@rsalakhu) April 10, 2021
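The core idea the tweet describes, distilling a large learner policy into a small, fast actor policy, can be sketched with a generic policy-distillation loss: a KL divergence that pulls the actor's action distribution toward the learner's. This is a minimal NumPy illustration of that kind of objective, not the paper's exact loss or training setup.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(learner_logits, actor_logits):
    """KL(learner || actor), averaged over a batch of states:
    pushes the small actor's policy toward the large learner's."""
    p = softmax(learner_logits)          # teacher (learner) policy
    log_q = np.log(softmax(actor_logits))  # student (actor) log-policy
    return float((p * (np.log(p) - log_q)).sum(axis=-1).mean())

# Identical policies give zero loss; divergent policies give a positive loss.
same = distill_loss(np.array([[1.0, 2.0]]), np.array([[1.0, 2.0]]))
diff = distill_loss(np.array([[1.0, 2.0]]), np.array([[2.0, 1.0]]))
```

In an actor-learner setup this loss would be minimized with respect to the actor's parameters only, so the cheap actor tracks the expensive learner as training proceeds.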