Also, many people seem to hold both of the following beliefs at the same time:

- ha cool we can do language models with feed-forward nets instead of RNNs!
- if we do LM well we will model all of language and achieve AGI!

It doesn't work this way. These are conflicting.

— (((ل()(ل() 'yoav)))) (@yoavgo) August 8, 2018
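The first belief refers to fixed-window feed-forward language models (in the spirit of Bengio et al.'s neural LM): the network conditions only on the last few tokens, with no recurrent state. A minimal sketch, with all sizes, parameter names, and random weights made up purely for illustration:

```python
import math
import random

random.seed(0)

# Toy sizes -- assumptions for illustration, not from the tweet.
vocab_size, embed_dim, window, hidden = 8, 3, 2, 5

def mat(rows, cols):
    """Random weight matrix as a list of rows."""
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

E = mat(vocab_size, embed_dim)        # embedding table
W1 = mat(window * embed_dim, hidden)  # hidden layer
W2 = mat(hidden, vocab_size)          # output layer

def next_token_probs(context):
    """P(next token | last `window` tokens) -- note: no recurrence,
    anything outside the window simply cannot influence the prediction."""
    # Concatenate the embeddings of the fixed-size context window.
    x = [v for tok in context for v in E[tok]]
    h = [math.tanh(sum(x[i] * W1[i][j] for i in range(len(x))))
         for j in range(hidden)]
    logits = [sum(h[i] * W2[i][j] for i in range(hidden))
              for j in range(vocab_size)]
    # Softmax over the vocabulary.
    m = max(logits)
    exp = [math.exp(l - m) for l in logits]
    z = sum(exp)
    return [e / z for e in exp]

p = next_token_probs([1, 5])
```

The hard-coded `window` is exactly the point of the tension: a model whose conditioning context is a fixed finite window is, by construction, not modeling "all of language", however well it scores on perplexity.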