On Extractive and Abstractive Neural Document Summarization with Transformer Language Models 🧐https://t.co/cGqTaMn8II pic.twitter.com/4Mi9Vq5zWo
— hardmaru (@hardmaru) September 11, 2019
Huawei Noah's Ark Lab launches "AI poet": writing high-quality classical Chinese poetry with a generative pre-trained transformer (GPT) -- "first to employ GPT in developing a poetry generation system" https://t.co/3pMsPnPESf
Stay tuned for more coverage in next week's #ChinAI
— Jeffrey Ding (@jjding99) September 10, 2019
Check out our new diagnostic NLP benchmark, CLUTRR, which tests the logical reasoning and generalization ability of NLP systems: https://t.co/uF1cxZPiGK https://t.co/CHhNuHtBw5 Paper to appear in EMNLP 2019; project led by @koustuvsinha
— Will Hamilton (@williamleif) September 10, 2019
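As a rough sketch of the compositional reasoning CLUTRR probes, here is a toy chain of kinship relations being folded into a single relation. The names and rule table are made up for illustration and are not the dataset's actual format:

```python
# Toy sketch of CLUTRR-style compositional kinship reasoning.
# The rule table is illustrative, not the dataset's inference engine.
COMPOSE = {
    ("daughter", "brother"): "son",  # my daughter's brother is my son
    ("son", "sister"): "daughter",   # my son's sister is my daughter
}

def infer(chain):
    """Fold a chain of relations (left to right) into one relation."""
    rel = chain[0]
    for nxt in chain[1:]:
        rel = COMPOSE[(rel, nxt)]
    return rel

# Story: "Kristin went shopping with her daughter Carol.
#         Carol later called her brother Justin."
# Query: how is Justin related to Kristin?
print(infer(["daughter", "brother"]))  # -> "son"
```

The benchmark's point is that systems must generalize to longer relation chains than seen in training, which a fixed lookup table like this obviously cannot do.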
Making synthesized speech sound as natural as possible is difficult, partly because evaluation is normally performed on a sentence-by-sentence basis. Check out new research that compares several ways of evaluating synthesized speech for multi-line texts. https://t.co/a0pcm3hj3V
— Google AI (@GoogleAI) September 9, 2019
Google open-sources datasets for AI assistants with human-level understanding - VentureBeat https://t.co/KqNWmiZa3P via @GoogleNews
— Bojan Tunguz (@tunguz) September 8, 2019
Excited to announce Quoref, a reading comprehension dataset with 24K span-selection questions requiring coreferential reasoning over English Wikipedia paragraphs. Work with @nelsonfliu, @anamarasovic, @nlpnoah and @nlpmattg. https://t.co/w2Igjq2IPB https://t.co/dNf4Fu2ENH #NLProc
— Pradeep Dasigi (@pdasigi) September 6, 2019
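For readers new to span-selection QA: the answer is a contiguous span of the passage, typically identified by a character offset. Below is a minimal sketch of such a record, using a SQuAD-style layout rather than Quoref's exact schema:

```python
# Illustrative span-selection QA record (SQuAD-style layout; Quoref's
# actual schema may differ). Answering requires resolving the pronoun
# "she" rather than matching surface strings.
example = {
    "context": (
        "Mary gave the book to her sister before she left for Toronto. "
        "The book had been a gift from their grandmother."
    ),
    "question": "Who left for Toronto?",
    "answers": [{"text": "Mary", "answer_start": 0}],
}

def extract_answer(record):
    """Recover the gold span from its character offset."""
    ans = record["answers"][0]
    start = ans["answer_start"]
    return record["context"][start : start + len(ans["text"])]

assert extract_answer(example) == "Mary"
```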
Google just released 2 new NLP dialog datasets collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an 'assistant', while the other plays the role of a 'user': https://t.co/cNLTaTQ4gQ
— Peter Skomoroch (@peteskomoroch) September 6, 2019
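For a sense of the data shape, here is a minimal sketch of iterating over a two-role dialog; the JSON field names are assumptions for illustration, not the released datasets' actual schema:

```python
# Hypothetical two-role Wizard-of-Oz dialog; field names ("speaker",
# "text") are illustrative assumptions, not the datasets' real schema.
import json

dialog_json = """
{
  "dialog_id": "example-001",
  "turns": [
    {"speaker": "user",      "text": "I need a table for two tonight."},
    {"speaker": "assistant", "text": "Sure, what time works for you?"},
    {"speaker": "user",      "text": "Around 7 pm."}
  ]
}
"""

dialog = json.loads(dialog_json)
for turn in dialog["turns"]:
    print(f'{turn["speaker"]:>9}: {turn["text"]}')
```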
Highlights from #WeCNLP so far:
1. Large pretrained language models work.
2. Knowledge graphs are back in vogue.
3. Dialogue systems are all the rage.
— Chip Huyen (@chipro) September 6, 2019
Interesting multitask text embeddings by @PinterestEng https://t.co/gABOGQVk0r
— Xavier 🎗️ (@xamat) September 6, 2019
Read about the science behind Aristo's milestone achievement in multiple-choice question answering in our paper "From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project" https://t.co/5b9Bc3fFub pic.twitter.com/wq6Bc0judH
— Allen Institute for Artificial Intelligence (AI2) (@allen_ai) September 5, 2019
E-rater gave students from China high scores for essay length & sophisticated word choice, and higher overall grades than human graders gave.
E-rater gave African Americans low marks for grammar, style, & organization, and lower overall grades than expert human graders gave them. pic.twitter.com/Vw12Rjt4QY
— Rachel Thomas (@math_rachel) September 4, 2019
Essay grading software (used in 21 states) focuses on metrics like sentence length, vocab, spelling, & subject-verb agreement, but ignores hard-to-measure aspects like creativity.
Meaningless gibberish essays created with sophisticated words score well. https://t.co/ESgE8WSRO2 pic.twitter.com/sxC37KiH3Q
— Rachel Thomas (@math_rachel) September 4, 2019
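E-rater's model is proprietary, so the following is only a toy illustration of the failure mode described in the thread: a scorer built on surface features (sentence length, rare vocabulary) rates a meaningless string of sophisticated words above a coherent essay:

```python
# Toy surface-feature essay scorer -- an illustration of why such
# metrics are gameable, NOT E-rater's actual model.
import re

SOPHISTICATED = {"ubiquitous", "paradigm", "salient", "heterogeneous", "epistemic"}

def surface_score(essay: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[a-z']+", essay.lower())
    avg_len = len(words) / max(len(sentences), 1)   # reward long sentences
    fancy = sum(w in SOPHISTICATED for w in words)  # reward rare vocabulary
    return avg_len + 5.0 * fancy

coherent = "The test was unfair. Students studied hard but were graded on length."
gibberish = ("Ubiquitous paradigm salient heterogeneous epistemic ubiquitous "
             "paradigm salient heterogeneous epistemic considerations abound.")

# The meaningless essay outscores the coherent one on surface features alone.
print(surface_score(coherent))   # 6.0
print(surface_score(gibberish))  # 62.0
```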