Tweeted By @evolvingstuff
Probing Representations Learned by Multimodal Recurrent and Transformer Models

"while Transformer models achieve superior machine translation quality, representations from the RNNs perform significantly better over tasks focused on semantic relevance" https://t.co/SxSJDDsP7t

— Thomas Lahore (@evolvingstuff) August 30, 2019