Ban use of godfather.. #AI is no mafia @Forbes https://t.co/7XgZnvVtnw
— Prof. Anima Anandkumar (@AnimaAnandkumar) October 21, 2020
Don't cook up use cases for #AI when simple human input solves the problem. At best it is a poor substitute for a human. At worst you are propagating #AI #bias and amplifying and automating it https://t.co/uU6MQvbrRT
— Prof. Anima Anandkumar (@AnimaAnandkumar) October 1, 2020
In which I try to explain how recommendation algorithms work and can be manipulated: https://t.co/mJXnI63adl
— Cathy O'Neil (@mathbabedotorg) September 21, 2020
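O'Neil's linked piece makes the manipulation point in prose. As a toy illustration only (nothing below is from her article; `rank_items`, the event format, and the weights are invented for this sketch), here is how a purely engagement-weighted ranker can be gamed by a small coordinated campaign:

```python
# Hypothetical sketch: a recommender that ranks items purely by
# engagement signals, showing why coordinated activity can push
# content to the top of everyone's feed.
from collections import Counter

def rank_items(events, like_weight=1.0, share_weight=3.0):
    """Score items from raw engagement events and return them best-first."""
    scores = Counter()
    for item_id, action in events:
        scores[item_id] += share_weight if action == "share" else like_weight
    return [item for item, _ in scores.most_common()]

organic = [("a", "like")] * 50 + [("b", "like")] * 40
brigade = [("b", "share")] * 20          # a small coordinated campaign
print(rank_items(organic))               # ['a', 'b']
print(rank_items(organic + brigade))     # ['b', 'a']: the campaign wins
```

Because the ranker only sees engagement counts, it cannot distinguish organic interest from a brigade; that is the manipulation surface.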
This was found because users could publicly experiment. But I’d assume a very small percentage of machine learning models are similarly experimentable by users.
Put another way: how many models deciding things are similarly biased but there is no way for the public to find out? https://t.co/XufAbn2jFn
— Chris Albon (@chrisalbon) September 21, 2020
Recording where straight men's eyes veer when they view pictures of women is encoding objectification and sexualization of women in social media @Twitter. No one asks whose eyes are being tracked to record saliency. #ai #bias https://t.co/coXwngSjiW
— Prof. Anima Anandkumar (@AnimaAnandkumar) September 20, 2020
Twitter’s engineering blog post about how the smart cropping algorithm works (by @alykhantejani @fhuszar @iskorna in 2018)
“Speedy Neural Networks for Smart Auto-Cropping of Images” https://t.co/wuJMBPkeOn pic.twitter.com/q6zoOOx7WB
(*not to place any blame. I certainly would’ve made the same oversight)
— hardmaru (@hardmaru) September 20, 2020
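The blog post describes a network that predicts a saliency map over the image and then crops around its peak. A minimal sketch of that cropping step, with the saliency model stubbed out (`crop_around_saliency` and the random stand-in for the model's output are invented for illustration):

```python
# Sketch of saliency-based auto-cropping: center the crop window on the
# peak of a predicted saliency map, clamped to the image bounds.
import numpy as np

def crop_around_saliency(image, saliency, crop_h, crop_w):
    """Center a crop_h x crop_w window on the saliency peak."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(y - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]

image = np.random.rand(400, 600, 3)
saliency = np.random.rand(400, 600)   # stand-in for a saliency model's output
crop = crop_around_saliency(image, saliency, 200, 200)
print(crop.shape)  # (200, 200, 3)
```

Whatever the saliency model has learned to fixate on decides what survives the crop, which is exactly where biased eye-tracking training data surfaces in the product.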
Saying that bias in AI applications is "just because of the datasets" is like saying the 2008 crisis was "just because of subprime mortgages".
Technically, it's true. But it's singling out the last link in the causality chain while ignoring the entire system around it.
— François Chollet (@fchollet) September 20, 2020
Thread: Last week, a list of 100 important NLP papers (https://t.co/PUHTvKCuiI) went viral. The list is okay, but it has almost *no* papers with female first authors.
NLP is rich with amazing female researchers and mentors. Here is one paper I like for each area on the list:
— Sasha Rush (@srush_nlp) September 7, 2020
Some great articles on bias & fairness: https://t.co/o0hi2pGTsX @random_walker https://t.co/szXolaIsQB @timnitGebru https://t.co/fSH7e6NAH9 @harini824 https://t.co/fVCk5utBVp @samirpassi https://t.co/c4PGpAXEAT @umaivodj
— Rachel Thomas (@math_rachel) August 19, 2020
Cool project looking at false positives for wakewords. I especially appreciate the direct links to edit privacy settings. https://t.co/aVnW3LzFms
— Rachael Tatman (@rctatman) August 10, 2020
Today we are sharing the Model Card Toolkit, a collection of tools that support ML developers in compiling the information that goes into a Model Card, and that aid in the creation of interfaces that will be useful for different audiences. Learn more at ↓ https://t.co/CXmEP6P6Pw
— Google AI (@GoogleAI) July 29, 2020
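The linked announcement shows the basic flow: scaffold an empty card, populate its fields, and export an HTML document. A minimal sketch along those lines (the field values are placeholders; method names follow the launch-era API and may differ in later releases):

```python
# Model Card Toolkit flow: scaffold, populate, export.
import model_card_toolkit

# Directory where the toolkit writes its generated assets
mct = model_card_toolkit.ModelCardToolkit("model_card_output")

# Scaffold an empty ModelCard and fill in the fields you can document
model_card = mct.scaffold_assets()
model_card.model_details.name = "Example Toxicity Classifier"
model_card.model_details.overview = (
    "Flags potentially toxic comments; see limitations before deploying."
)

# Persist the card data, then render it as an HTML page for review
mct.update_model_card_json(model_card)
html = mct.export_format()
with open("model_card.html", "w") as f:
    f.write(html)
```

The exported page is the "interface for different audiences" the tweet mentions: one structured source of model facts rendered for reviewers, users, and regulators alike.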
Finally Bezos admits what @suryamattu and I proved years ago.
Bezos: “Indirectly the buy box does favor products that can be shipped with Prime.” https://t.co/83CM3Io7IP
— Julia Angwin (@JuliaAngwin) July 29, 2020