Word embeddings quantify 100 years of gender and ethnic stereotypes
— Rachel Thomas (@math_rachel) April 2, 2019
paper: https://t.co/1Wk7kSINpg
code: https://t.co/nBi4WkmG4p
YouTube CPO @nealmohan ignores YouTube's Autoplay function (even though it drives 70% of time users spend on site) in his answer suggesting users can click on anything @bafeldman https://t.co/F6xyNptmTH pic.twitter.com/gt2dZe26n1
— Rachel Thomas (@math_rachel) March 31, 2019
Current methods for debiasing word embeddings are insufficient. They hide bias, rather than removing it, and a lot of the bias information can be recovered. https://t.co/s7RUtYDSnX pic.twitter.com/Q9h71vsnof
— Rachel Thomas (@math_rachel) March 25, 2019
I've noticed weird tweets showing up from people I don't follow and wondered if I'd been hacked. In reality, it is a Twitter program, at least so this report suggests https://t.co/lYTWEySIWf
— Tim Wu (@superwuster) March 23, 2019
Nice paper by @hila_gonen and @yoavgo shows that popular embedding debiasing techniques focus myopically on a specific (linear) definition of bias but only "Put Lipstick on a Pig". Gender-association of charged word pairs remains / easy to detect w k-means https://t.co/aJIOlJQPbO
— Zachary Lipton (@zacharylipton) March 19, 2019
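The "easy to detect w k-means" claim above can be illustrated with a small sketch: even after a linear debiasing projection, male- and female-associated words may still form separable clusters. The embeddings below are synthetic toy vectors with a planted residual component (an assumption for self-containment; Gonen and Goldberg run this check on real debiased word2vec/GloVe vectors), and the `kmeans` helper is a plain NumPy implementation of Lloyd's algorithm, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "debiased" embeddings: the explicit gender direction has been removed,
# but a correlated residual component (dimension 5) still differs by group.
male = rng.normal(0.0, 1.0, size=(50, 10))
female = rng.normal(0.0, 1.0, size=(50, 10))
male[:, 5] += 2.0
female[:, 5] -= 2.0
X = np.vstack([male, female])
labels = np.array([0] * 50 + [1] * 50)  # ground-truth gender association

def kmeans(X, k=2, iters=50):
    """Plain Lloyd's k-means on the rows of X (deterministic init)."""
    # Initialize from the first and last rows for reproducibility.
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        centers = np.stack([X[assign == j].mean(axis=0) for j in range(k)])
    return assign

assign = kmeans(X)
# Cluster purity against the gender labels: ~0.5 means the clusters carry
# no gender signal; near 1.0 means the "debiased" vectors still encode it.
purity = max((assign == labels).mean(), (assign != labels).mean())
print(f"cluster/gender alignment: {purity:.2f}")
```

With the planted residual, the two k-means clusters recover the gendered grouping almost perfectly, which is exactly the failure mode the "lipstick on a pig" paper reports for real debiased embeddings.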
Flash forward three years, and now Facebook is eliminating all sensitive ad targeting categories for housing, employment and credit. This is a huge change, and it will hopefully open up more opportunities for people who would otherwise have been denied them. /9
— Julia Angwin (@JuliaAngwin) March 19, 2019
Pretty cool to be interviewed for CNBC about algorithmic bias. I think the piece is great and I want to offer one point of clarification. https://t.co/6F3bMm0tdG
— Kristian Lum (@KLdivergence) March 19, 2019
Facebook's News Feed algorithm seems to be surfacing more fake news than ever. https://t.co/7b1OMXFlBY pic.twitter.com/L1J6yYgJKF
— Nate Silver (@NateSilver538) March 16, 2019
If we don’t want the future to look like the past, we can’t just unthinkingly apply machine learning. – Nitin Kohli
— Rachel Thomas (@math_rachel) March 7, 2019
The more accurately a recommendation engine like @Netflix or @YouTube pegs your interests, the faster it traps you in an information bubble. by @_KarenHao https://t.co/aUHuOEQMxO
— MIT Technology Review (@techreview) March 7, 2019
Prompting OpenAI's new language model with:
'My wife just got an exciting new job, starting next week she'll be…'
vs.
'My husband just got an exciting new job, starting next week he'll be…'
Oi. pic.twitter.com/7WxZo10Lmk
— Tomer Ullman (@TomerUllman) March 1, 2019
I'm a little worried that the "futurists who believe in an omnipotent AI" crowd has taken over the AI accountability and ethics conversation.
And yet it's a very good sign that the Pentagon is starting to care about biased algorithms. https://t.co/1WCUruRAky
— Cathy O'Neil (@mathbabedotorg) February 13, 2019