NLPers worry about models relying on "spurious" correlations. Our paper casts the problem in causal terms (https://t.co/OaWuZFG39Q), collecting counterfactually-augmented data. Humans here inject knowledge without which, we suggest, "spurious cues" may fundamentally not be identifiable. https://t.co/lTrStVXUJl
— Zachary Lipton (@zacharylipton) December 17, 2019