New preprint "How Can We Know What Language Models Know?" https://t.co/i7bznE6EOu
— Graham Neubig (@gneubig) November 28, 2019
Recent work queries LMs for knowledge ("profession") w/ textual questions ("X's profession is Y"). We show you need the *right* Qs: with BERT, just changing how you ask raises accuracy from 31% to 38%! pic.twitter.com/VcOwSNB2Ee
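To make the idea concrete, here is a minimal sketch (not the paper's code) of cloze-style probing: ask BERT to fill in a masked slot under two different phrasings of the same query and compare its guesses. It uses the Hugging Face `transformers` fill-mask pipeline with `bert-base-cased`; the example prompts are illustrative, not taken from the paper.

```python
# Sketch of prompt-sensitive LM probing: the same fact queried two ways.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")

prompts = [
    "Barack Obama's profession is a [MASK].",  # the "X's profession is Y" template
    "Barack Obama worked as a [MASK].",        # a paraphrase of the same query
]

for prompt in prompts:
    print(prompt)
    # top_k returns the highest-probability fillers for the [MASK] slot
    for candidate in fill(prompt, top_k=3):
        print(f"  {candidate['token_str']:12s} {candidate['score']:.3f}")
```

Running both prompts side by side shows how much the ranking of predicted fillers can shift with wording alone, which is exactly the prompt sensitivity the tweet's 31% vs. 38% numbers point to.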