This thread is worth rereading: https://t.co/c5oArnzDkM
— Rachel Thomas (@math_rachel) August 5, 2020
I'm a fan of @skyetetra's article https://t.co/YM1AFCiKYX, but I'm interested in hearing other ideas and also how to put it into practice (e.g. after you decide which projects to prioritize, how do you talk to the teams and maintain a good relationship)?
— Emily Robinson (@robinson_es) August 4, 2020
The moment I knew I was going to love working at @wikimedia was when someone's kid appeared on the camera and started grabbing stuff for coloring and it was clear from everyone's reaction that not only was this perfectly okay, it wasn't even noteworthy. https://t.co/0PTu6QxzCe
— Chris Albon (@chrisalbon) August 4, 2020
Just came across this nice page on Responsible AI Practices - what I like about it, it includes a number of technical references and guides (e.g., it's actionable, and high quality). I read a few in the past and can def recommend them. https://t.co/kIc0FGHs4W
— Josh Gordon (@random_forests) August 4, 2020
I'm super grateful to Patrick @openminedorg for this README
— Andrew Trask (@iamtrask) August 3, 2020
Documentation isn't about code - it's about teaching a person a mental model - *then* about linking that model to code
Taking the time to *really* teach == ❤️
Not all heroes wear capes https://t.co/fpDmhMeosj pic.twitter.com/mwyRaCR2ji
Even at its best, data science/statistics/ML/AI can only answer questions. Asking good questions is entirely up to us.
— Brandon Rohrer (@_brohrer_) August 3, 2020
A very short history of some times we solved AI https://t.co/UZ7Opi8xIk
— Julian Togelius (@togelius) August 3, 2020
Here’s a great example of how syntax alone can produce excessive cognitive load on learners. In SQL @b0rk points out that “how it’s written” != “how you should think”. This is also why I almost never nest SQL. Too heavy of a cognitive load.
— JD Long (@CMastication) August 2, 2020
Source: https://t.co/EeiLXN9Ly3 pic.twitter.com/DBC2Gfxc40
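A minimal sketch of the point @b0rk and JD are making, using a hypothetical `orders` table with `customer_id` and `amount` columns: the same aggregation written as a nested subquery, which you have to read inside-out, versus a CTE, which reads roughly in the order you actually think about it.

```python
# Illustration only: the orders table and its columns are made up for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10.0), (1, 25.0), (2, 5.0), (3, 40.0);
""")

# Nested version: to follow it, you start from the innermost SELECT and work outward.
nested = """
    SELECT COUNT(*) FROM (
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
    ) WHERE total > 20;
"""

# CTE version: reads top to bottom -- build the totals first, then filter and count.
with_cte = """
    WITH customer_totals AS (
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
    )
    SELECT COUNT(*) FROM customer_totals WHERE total > 20;
"""

print(conn.execute(nested).fetchone())    # (2,)
print(conn.execute(with_cte).fetchone())  # (2,)
```

Both queries return the same answer; the CTE just lets you read the query in the order you reason about it.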
I really like the new Methods section in @paperswithcode to find applications and similar methods.
— Sebastian Ruder (@seb_ruder) August 2, 2020
For language models in NLP, you can see at a glance the most common LMs and explore the papers that employ them. https://t.co/O2MT0e3XQY pic.twitter.com/MalqHntT93
In conclusion, here's the perfect combo:
— Aurélien Geron (@aureliengeron) August 2, 2020
online conferences + local meet ups + actual vacations.
You get all the benefits of physical conferences, without any of the drawbacks. 🥳
What do you think?
The futurists predicted a singularity where AI recursively improves AI, but what we've got instead is AI feeding on text generated by AI. https://t.co/wpXCIZ1BA7
— Arvind Narayanan (@random_walker) August 1, 2020
ML systems can have the effect of centralizing power, b/c they can:
— Rachel Thomas (@math_rachel) July 31, 2020
- be used at massive scale, cheaply
- replicate identical biases/errors at scale
- be used to evade responsibility
- often be implemented with no system for recourse & no way to identify mistakes
- create feedback loops