At #acl2020nlp, @mhahn29 presents TACL paper Theoretical Limitations of Self-Attention in Neural Sequence Models: Transformer models seem all-powerful in #NLProc but they can’t even handle all regular languages—what does this say about human language? https://t.co/mbE4kmZqhX pic.twitter.com/Jr8TjBhspq
— Stanford NLP Group (@stanfordnlp) July 4, 2020
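To make the tweet's "can't even handle all regular languages" claim concrete, here is a minimal sketch (not code from the paper) of PARITY, a canonical regular language discussed in Hahn's analysis: deciding the parity of the number of 1s in a bit string. A two-state finite automaton decides it exactly, whereas the paper argues that fixed-depth self-attention cannot model it robustly as input length grows. The function and dataset names below are illustrative choices, not anything from the paper or from Stanford NLP.

```python
# Illustrative sketch only: PARITY as a simple regular language.
# A 2-state DFA decides it; Hahn's TACL paper shows self-attention
# cannot model such languages robustly at arbitrary input lengths.
import random


def parity(bits: str) -> bool:
    """Return True iff the bit string contains an even number of 1s.

    Equivalent to running a 2-state DFA that toggles on every '1'.
    """
    state = 0
    for b in bits:
        if b == "1":
            state ^= 1  # toggle state on each 1
    return state == 0


def sample_dataset(n: int, length: int, seed: int = 0):
    """Generate random bit strings with their PARITY labels (hypothetical helper)."""
    rng = random.Random(seed)
    return [
        (s, parity(s))
        for s in ("".join(rng.choice("01") for _ in range(length)) for _ in range(n))
    ]


if __name__ == "__main__":
    for s, label in sample_dataset(5, length=12):
        print(s, "even # of 1s" if label else "odd # of 1s")
```

The contrast is the point of the tweet: a trivial finite-state machine handles this language perfectly, yet the paper's theoretical results place it beyond what self-attention alone can reliably compute.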