This goes in the category of, "if humans have no idea how to do something, let's close our eyes and assume the algorithm meant to do it is working perfectly". h/t @dbiello https://t.co/toT3vaWYfl
— Cathy O'Neil (@mathbabedotorg) November 2, 2018
“Meritocracy” https://t.co/riSz3QKIx4
— Angela Bassa (@AngeBassa) October 22, 2018
This guy was also concerned that being "over-sensitive" was causing people to see bias everywhere.
The other possibility is that the reason we see bias in so many places is because bias exists in so many places. (This has been thoroughly documented & researched.) https://t.co/tcU4H8hrLE
— Rachel Thomas (@math_rachel) October 20, 2018
With examples like this still occurring in commercial products in 2018, we have so much more work to do before we have to worry about "going too far" in addressing bias: https://t.co/rvajpcJOh9
— Rachel Thomas (@math_rachel) October 19, 2018
paper about five "traps" that fair-ML work can fall into
h/t @datasociety https://t.co/XJA5YDSkmD
— Rachel Thomas (@math_rachel) October 17, 2018
"Bias is a slippery statistical notion, which may disappear if you slice your data a different way. Discrimination, as a causal concept, reflects reality and must remain stable" — @yudapearl cuts to the heart of an issue that much of the recent FAT literature struggles with.
— Zachary Lipton (@zacharylipton) October 12, 2018
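Pearl's point about slicing is easiest to see with a small Simpson's-paradox-style example. The numbers below are synthetic and mine, not from the tweet: an apparent gap in admission rates between two groups disappears entirely once the data is sliced by department.

```python
# Synthetic illustration of "bias that disappears if you slice the data differently".
import pandas as pd

# admissions by group and department (made-up numbers)
rows = [
    # group, dept, applied, admitted
    ("A", "X", 80, 48),   # 60% admitted
    ("A", "Y", 20,  4),   # 20% admitted
    ("B", "X", 20, 12),   # 60% admitted
    ("B", "Y", 80, 16),   # 20% admitted
]
df = pd.DataFrame(rows, columns=["group", "dept", "applied", "admitted"])

# Aggregate view: group A looks strongly favoured (52% vs 28% admitted) ...
overall = df.groupby("group")[["applied", "admitted"]].sum()
print((overall["admitted"] / overall["applied"]).round(2))

# ... but within each department the admission rates are identical (60% in X, 20% in Y);
# the groups simply applied to departments with different acceptance rates.
by_dept = df.groupby(["group", "dept"])[["applied", "admitted"]].sum()
print((by_dept["admitted"] / by_dept["applied"]).round(2))
```

Which of the two views is the "real" one is a causal question about why the groups applied where they did, which is exactly the kind of question a purely statistical notion of bias cannot settle on its own.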
Can we please stop thinking that advanced tech can solve policy problems? https://t.co/gTBsGVi5xv
— Rachel Thomas (@math_rachel) October 11, 2018
Replies indicate that people are very confused about what "bias" means. It means doing pattern recognition based on spurious correlations, as opposed to causal reasoning. A ML model will use all correlations found in the training data, and typically many of them will be spurious.
— François Chollet (@fchollet) October 11, 2018
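Here is a minimal synthetic sketch of that failure mode (my own construction, not from the thread): the label is driven entirely by one feature, but a second feature happens to co-occur with it in the training sample, so the model spreads its weight across both and degrades as soon as the spurious correlation goes away.

```python
# Toy demonstration: a model leaning on a spurious correlation in its training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
cause = rng.integers(0, 2, n)      # the feature that actually determines the label
y = cause
spurious_train = cause.copy()      # in training, this feature always co-occurs with the cause

X_train = np.column_stack([cause, spurious_train])
model = LogisticRegression().fit(X_train, y)
print(model.coef_)                 # weight is split across the causal and spurious columns

# At test time the second feature is just noise, so accuracy drops even though
# the causal feature is exactly as informative as it was during training.
X_test = np.column_stack([cause, rng.integers(0, 2, n)])
print(model.score(X_train, y))     # ~1.0
print(model.score(X_test, y))      # noticeably lower
```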
"How to make a racist AI without really trying." Fantastic post, with good code and reasonable first steps. (Negative values indicate negative sentiment.)https://t.co/ybSxwHbcgl (h/t @davidjustodavid) pic.twitter.com/2P973zBpHM
— Brad Voytek (@bradleyvoytek) September 25, 2018
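For readers who don't click through: the post builds a sentiment model on top of pretrained word embeddings and shows that it scores sentences containing different people's names very differently. The sketch below is my reconstruction of that general pipeline, using a tiny stand-in lexicon and the gensim-hosted GloVe vectors rather than the post's actual data and code.

```python
# Hedged reconstruction of the pipeline the post critiques (not the author's code):
# fit a sentiment classifier on word vectors, then score sentences by averaging them.
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

vectors = api.load("glove-wiki-gigaword-50")   # downloads the vectors on first use

# a tiny stand-in lexicon; the post uses a full positive/negative word list
positive = ["good", "great", "excellent", "happy", "love", "wonderful"]
negative = ["bad", "terrible", "awful", "sad", "hate", "horrible"]
words = positive + negative
X = np.array([vectors[w] for w in words])
y = np.array([1] * len(positive) + [0] * len(negative))
clf = LogisticRegression().fit(X, y)

def sentence_score(text):
    """Average the word vectors, then return the model's score (negative = negative sentiment)."""
    vecs = [vectors[w] for w in text.lower().split() if w in vectors]
    return clf.decision_function([np.mean(vecs, axis=0)])[0]

# Because the embeddings encode web co-occurrence statistics, names inherit the
# sentiment of the contexts they appear in; the post shows that names more common
# among Black Americans systematically receive lower scores.
print(sentence_score("my name is emily"))
print(sentence_score("my name is shaniqua"))
```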
My @py_bay keynote on ethics & bias in AI includes both negative & positive case studies, common misconceptions about tech, and some healthier principles for thinking about ethics: https://t.co/2KiP4MNjKZ
— Rachel Thomas (@math_rachel) September 23, 2018
subtle biases pile up, but because few of them on their own rise to the level of “let’s take action,” they continue. Attempts to point them out label you a complainer or “difficult.” -- former CMU CS prof @BlumLenore https://t.co/JDKOm2Rxil pic.twitter.com/1M31VL8JzY
— Rachel Thomas (@math_rachel) September 7, 2018
The biased algorithms debate must not stop at questions like "Is the data processing fairer if its error rate is the same for all races and genders?"
We must consider broader questions, such as whether these tools should be developed and deployed at all. https://t.co/n6t2Ze0JWz
— Rachel Thomas (@math_rachel) August 23, 2018