5 Lenses for Ethics:
- utilitarianism
- rights
- common good
- fairness or justice
- virtue

@IEthics #TechPolicyCADE https://t.co/3HKgZuW6NN pic.twitter.com/GItdLbUIj1
— Rachel Thomas (@math_rachel) November 16, 2019
Law of amplification: Tech's primary effect is to amplify human forces. Like a lever, tech amplifies people's capacities in the direction of their intentions.@kentarotoyama quoted by @hutchamachutch in her talk on how tech facilitates mass atrocity #TechPolicyCADE pic.twitter.com/JVNmAYViWa
— Rachel Thomas (@math_rachel) November 16, 2019
Nice to see R in a NYT headline (well, subhed) again: https://t.co/GuM4fGMsh9 #rstats
— David Smith (@revodavid) November 15, 2019
We don't deserve machine learning.
“Provide a tool that can gauge a person's personality just from an image of their face. This can then be used by an HR office to help out with sorting job applicants.” 🤨🤔 https://t.co/RbjV7qLWVU
— hardmaru (@hardmaru) November 15, 2019
Tech is the most trusted industry by the general population. Perceptions of the contrary are created by its competitor — the media industry — having a very big megaphone https://t.co/G2jA49tnWw
— Florent Crivello 🌐 (@Altimor) November 15, 2019
Reporters writing about AI:
Do: emphasize the narrowness of today’s AI-powered programs
Do: avoid comparisons to pop culture depictions of AI
Don’t: cite AI opinions of famous smart people
Do: make clear what the task is
Do: call out limitations
Don’t: ignore the failures https://t.co/de2fV8o07L
— hardmaru (@hardmaru) November 15, 2019
Rereading this excellent piece by @LeeFlower on why "assume good intent" is counterproductive to diversity and inclusion, and does not belong in a code of conduct. https://t.co/wvyVYWr9Zk
— Kara Woo (@kara_woo) November 15, 2019
Machine learning is a bit like cocaine in the 1880s:
- been used in a weaker form for centuries
- some surprisingly successful early applications led to it now being over-prescribed
- beginning to understand that performance degrades after repeated use, negative feedback loops
— Reuben Binns (@RDBinns) November 14, 2019
So many AI hiring systems are appallingly bad.
Better to draw the resumes at random out of a hat; at least then you know it won't be copying bias. https://t.co/0FKY69l7Jx
— Janelle Shane (@JanelleCShane) November 14, 2019
Even beyond portfolios, I think there's a huge benefit in mixing different skills & areas of development 💯
It's also key to our company. I wrote a post on "how front-end can improve AI" back in 2016 (!) which outlined that philosophy & product vision: https://t.co/MWd9qcwVjC https://t.co/bbADpvsjNP
— Ines Montani 〰️ (@_inesmontani) November 14, 2019
“I’m going to work on artificial general intelligence.”
– John Carmack https://t.co/KEDZnvareU pic.twitter.com/6tuk78hWvd
— hardmaru (@hardmaru) November 14, 2019
1) I made a DEV account (it's rctatman, ofc)
2) I wrote a quick post with my thoughts on a question I've been seeing asked a lot: "Are BERT and other large language models conscious?" #DEVcommunity https://t.co/AD2iXljqIj
— Rachael Tatman (@rctatman) November 13, 2019