Of note: the French bidirectional LM trained with a #QRNN architecture and the #SentencePiece tokenizer achieved better performance than the one trained with the #AWDLSTM architecture and the #spaCy tokenizer. All notebooks, model parameters, and vocab are online at https://t.co/vQl8GO2Lbk
— Pierre Guillou (@pierre_guillou) September 20, 2019