A really convenient way of using Fairness Indicators in your machine learning pipeline: https://t.co/1L0o8KqNyN https://t.co/GZXWZBYXJf pic.twitter.com/HYgQiA1JAG
— hardmaru 😷 (@hardmaru) November 2, 2019
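The tool in question is TensorFlow's Fairness Indicators, which evaluates classification metrics sliced by sensitive groups. As a rough, library-free sketch of the core idea, here is a per-group false positive rate at a fixed threshold; the data, threshold, and group names are invented for illustration and this is not the library's actual API:

```python
import numpy as np

def false_positive_rate(labels, scores, threshold=0.5):
    """FPR at a decision threshold: FP / (FP + TN)."""
    preds = scores >= threshold
    negatives = labels == 0
    if negatives.sum() == 0:
        return float("nan")
    return float((preds & negatives).sum() / negatives.sum())

# Hypothetical evaluation data: true labels, model scores, and a
# sensitive attribute to slice on (all values are made up).
labels = np.array([0, 0, 1, 0, 1, 0, 0, 1])
scores = np.array([0.9, 0.2, 0.8, 0.7, 0.6, 0.1, 0.8, 0.9])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Slicing a metric by group is the core move behind fairness dashboards:
# an overall FPR can look fine while one group's FPR is much worse.
for g in np.unique(group):
    mask = group == g
    print(g, false_positive_rate(labels[mask], scores[mask]))
```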
Wow: A health algorithm used in hospitals dramatically underestimated the medical needs of black patients, reducing the care they could get. "Correcting the bias would more than double the number of black patients flagged as at risk" https://t.co/K2C4Rb8ONB by @Carolynyjohnson pic.twitter.com/0npmiUgeD8
— Drew Harwell (@drewharwell) October 24, 2019
So ImageNet Roulette happened. Made w @trevorpaglen & @wiretapped to look behind the curtain of training data, we’ve been amazed to see how much the public and AI community engaged. We’ll keep it online for a week, then it returns to a video installation. https://t.co/jt2eejvIKJ pic.twitter.com/bDAKx29adK
— Kate Crawford (@katecrawford) September 20, 2019
We've fine-tuned GPT-2 using human feedback for tasks such as summarizing articles, matching the preferences of human labelers (if not always our own). We're hoping this brings safety methods closer to machines learning values by talking with humans. https://t.co/ok9jeMP5zj
— OpenAI (@OpenAI) September 19, 2019
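The recipe behind this work trains a reward model on human pairwise preferences and then fine-tunes the language model with RL against that reward. Here is a minimal sketch of just the reward-modeling step, using a pairwise (Bradley-Terry style) loss; the shapes are made up and a linear layer stands in for a real text encoder, so this is an illustration of the idea rather than OpenAI's implementation:

```python
import torch
import torch.nn.functional as F

# Toy reward model: scores a fixed-size text embedding (a stand-in
# for a real encoder; all shapes and names here are illustrative).
reward_model = torch.nn.Linear(128, 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Hypothetical batch: embeddings of the summary a labeler preferred
# and the one they rejected.
preferred = torch.randn(32, 128)
rejected = torch.randn(32, 128)

# Pairwise loss: push the reward of the preferred sample above the
# rejected one. The fine-tuned policy (not shown) would then be
# optimized with RL to maximize this learned reward.
loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
```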
Disinformation is not just about fancy tech like deepfakes. It's about:
- a polarized society that can't agree on sources of truth
- misleading context
- human propensity to trust what confirms our pre-existing beliefs
- incendiary content more popular than fact-check version 1/n
— Rachel Thomas (@math_rachel) September 6, 2019
E-rater gave students from China high scores for essay length & sophisticated word choice, and higher overall grades than human graders gave.
E-rater gave African Americans low marks for grammar, style, & organization, and lower overall grades than expert human graders gave them pic.twitter.com/Vw12Rjt4QY
— Rachel Thomas (@math_rachel) September 4, 2019
Essay grading software (used in 21 states) focuses on metrics like sentence length, vocab, spelling, & subject-verb agreement, but ignores hard-to-measure aspects like creativity.
Meaningless gibberish essays created with sophisticated words score well. https://t.co/ESgE8WSRO2 pic.twitter.com/sxC37KiH3Q
— Rachel Thomas (@math_rachel) September 4, 2019
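To see why, consider a hypothetical scorer built only on surface features of the kind listed above. This is not any vendor's actual formula, just a toy illustration of the failure mode:

```python
import re

def surface_score(essay: str) -> float:
    """Toy essay score from surface features only (illustrative):
    word count, average word length, and vocabulary diversity."""
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    diversity = len(set(words)) / len(words)
    return len(words) * 0.1 + avg_word_len * 2 + diversity * 10

coherent = "The dog sat. The dog was happy. It was a good day."
gibberish = ("Multitudinous perspicacious equanimity obfuscates "
             "quintessential paradigms notwithstanding grandiloquent "
             "magnanimity vicissitudes.")

# The meaningless essay scores far higher: long rare words inflate
# every feature, while coherence is never measured at all.
print(surface_score(coherent), surface_score(gibberish))
```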
In the USA, housing discrimination has a long and vile history. Vast housing discrimination continues today, albeit often in subtler form. A new HUD rule would effectively encourage discrimination by algorithm. @aselbst https://t.co/1AilO1y5AH pic.twitter.com/kbYHGVTyz4
— Rachel Thomas (@math_rachel) August 19, 2019
Biased algorithms often bypass current anti-discrimination law with impunity. HUD is trying to make it official for housing algorithms. https://t.co/2ZZ9ER4VQA
— Cathy O'Neil (@mathbabedotorg) August 19, 2019
“algorithms—and content moderators who grade the test data that teaches these algorithms how to do their job—don’t usually know the context of the comments they’re reviewing”
Then what even is the point of the labeling exercise?! https://t.co/G7qtiMJJqV
— Angela Bassa (@AngeBassa) August 15, 2019
Amen. "The current approach—in which an algorithm tries to recommend the most engaging videos without worrying about whether they're any good—has got to go." https://t.co/PRxxYCnuHU
— David Smith (@revodavid) August 13, 2019
Recently, I appeared on a @IEEEtv panel on "Bias in the Age of the Algorithm" with @mathbabedotorg & @StenderWorld, moderated by @MarkVee10. #IEEETechEthics @ComputerSociety
The video 📺 is now online: https://t.co/SEXOWZEkR9
— Erin LeDell (@ledell) July 29, 2019