Tweeted by @hardmaru
I've received so much criticism for not incorporating reward info into world model / representations used by RL agents
— hardmaru (@hardmaru) September 18, 2020
But the way I see it, rewards are so overvalued…
See this new paper, “Decoupling Representation Learning from Reinforcement Learning” https://t.co/ADKKpOJLGI https://t.co/APTEJRMmOs