Tweeted by @OpenAI
Releasing the Sparse Transformer, a network which sets records at predicting what comes next in a sequence — whether text, images, or sound. Improvements to neural 'attention' let it extract patterns from sequences 30x longer than possible previously: https://t.co/FZlDEPsi1A
— OpenAI (@OpenAI) April 23, 2019
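
The "improvements to neural 'attention'" the tweet refers to are sparse attention patterns: instead of every position attending to every earlier position (quadratic cost), each position attends only to a structured subset. A minimal NumPy sketch of one such pattern, a causal strided mask, is below; the function names and the single-head, unbatched setup are illustrative simplifications, not OpenAI's implementation.

```python
import numpy as np

def strided_sparse_mask(n, stride):
    """Causal mask in a strided sparse pattern: position i attends to
    its previous `stride` positions (local) and to every stride-th
    earlier position (strided). Illustrative reconstruction only."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):  # causal: only j <= i
            local = (i - j) < stride
            strided = (i - j) % stride == 0
            mask[i, j] = local or strided
    return mask

def sparse_attention(q, k, v, mask):
    """Scaled dot-product attention restricted to the allowed pairs."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)  # suppress disallowed pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

n, d, stride = 16, 8, 4
rng = np.random.default_rng(0)
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))
mask = strided_sparse_mask(n, stride)
out = sparse_attention(q, k, v, mask)
print("sparse pairs:", int(mask.sum()), "vs dense causal:", n * (n + 1) // 2)
```

Because each row of the mask holds roughly `2 * stride` entries rather than up to `n`, the attention cost drops from O(n²) toward O(n·√n) when `stride ≈ √n`, which is what makes much longer sequences tractable.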