Well worth a read…
— Mara Averick (@dataandme) September 22, 2020
"AI ethics is not an optimization problem" by @zkajdan https://t.co/jaNUOk8Ips pic.twitter.com/DB6Yv22mDh
In which I try to explain how recommendation algorithms work and can be manipulated: https://t.co/mJXnI63adl
— Cathy O'Neil (@mathbabedotorg) September 21, 2020
Progression of 3 waves of AI ethics:
1st- abstract principles, dominated by philosophers
2nd- geared towards technical fixes, led by computer scientists
3rd- focused on practical mechanisms for rectifying power imbalances & achieving justice
https://t.co/urDnZRwLsi @carlykind_ pic.twitter.com/hmTjNVVJTU
— Rachel Thomas (@math_rachel) August 23, 2020
Some great articles re algorithmic colonialism: https://t.co/9YmrTNJkaD @Abebab https://t.co/yhNdYXGdxq @amymaxmen https://t.co/BklGSLZRDU @AdrienneLaF https://t.co/AE0OMseKlj @ruchowdh
— Rachel Thomas (@math_rachel) August 19, 2020
Some great articles on bias & fairness: https://t.co/o0hi2pGTsX @random_walker https://t.co/szXolaIsQB @timnitGebru https://t.co/fSH7e6NAH9 @harini824 https://t.co/fVCk5utBVp @samirpassi https://t.co/c4PGpAXEAT @umaivodj
— Rachel Thomas (@math_rachel) August 19, 2020
"Data are not bricks to be stacked, oil to be drilled, gold to be mined, opportunities to be harvested. Data are humans to be seen, maybe loved, hopefully taken care of." @rajiinio 13/ pic.twitter.com/fEVIABnPJA
— Rachel Thomas (@math_rachel) August 12, 2020
Videos from @StanfordAIMI Symposium are up! I spoke on why we need to expand the conversation on bias & fairness.
I will share some slides & related links in this THREAD, but please watch my 17-minute talk in full (the other talks are excellent too!) 1/ https://t.co/QtFt0OgURs pic.twitter.com/FXXyVNgAHr
— Rachel Thomas (@math_rachel) August 12, 2020
This shit doesn’t work. It. Doesn’t. Work. https://t.co/Bivzx6HCrq
— Ryan Calo (@rcalo) August 9, 2020
This thread is worth rereading: https://t.co/c5oArnzDkM
— Rachel Thomas (@math_rachel) August 5, 2020
Just came across this nice page on Responsible AI Practices. What I like about it: it includes a number of technical references and guides (i.e., it's actionable and high quality). I've read a few in the past and can def recommend them. https://t.co/kIc0FGHs4W
— Josh Gordon (@random_forests) August 4, 2020
ML systems can have the effect of centralizing power, b/c they can:
- be used at massive scale, cheaply
- replicate identical biases/errors at scale
- be used to evade responsibility
- often be implemented with no system for recourse & no way to identify mistakes
- create feedback loops
— Rachel Thomas (@math_rachel) July 31, 2020
I've gotten a lot of questions about this recently, so:
I would not recommend using neural language generation (BERT, GPT-3, etc.) to generate text you send to users.
Why?
It *will* produce plausible sounding but factually incorrect output. Not if but when.
— Rachael Tatman (@rctatman) July 31, 2020
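A common safer pattern for user-facing text, in the spirit of the warning above, is to restrict output to pre-approved templates and fill in only structured, verified values rather than sending free-form model output to users. A minimal sketch (template names and fields are hypothetical, not from the thread):

```python
# Hypothetical sketch: template-based responses instead of free-form NLG.
# Only vetted strings reach the user; dynamic parts come from structured data.

TEMPLATES = {
    "order_status": "Your order {order_id} is currently {status}.",
    "greeting": "Hi {name}, how can we help you today?",
}

def render(template_key: str, **values: str) -> str:
    """Render a pre-approved template; an unknown key raises instead of guessing."""
    return TEMPLATES[template_key].format(**values)

print(render("order_status", order_id="A123", status="shipped"))
# -> Your order A123 is currently shipped.
```

The trade-off is less flexibility, but the system can never assert a fact that wasn't explicitly supplied to it.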