Slides for our presentation "AI for healthcare: Scaling access and quality of care for everyone" with @anithakan today at #MLConfSF: https://t.co/cEHhaohifa.
— Xavier Amatriain (@xamat) November 9, 2019
Thread: DHH receives a credit limit 20x higher than his wife's for @AppleCard, even though it turns out his wife has a higher credit score than he does. When they try to find out why, they are told by Apple, "I swear we're not discriminating. It's just the algorithm." https://t.co/4RRKcJghKq
— Rachel Thomas (@math_rachel) November 9, 2019
Released a new analysis showing that compute for landmark AI models from before 2012 grew at exactly Moore's Law.
From 2012-2018, every 1.5 years compute grew the amount that used to take a decade.
Deep learning is 60, not 6, years of steady progress: https://t.co/fPj6AD2XND
— Greg Brockman (@gdb) November 7, 2019
We've analyzed compute used in major AI results for the past decades and identified two eras in AI:
1) Prior to 2012 - AI results closely tracked Moore's Law, w/ compute doubling every two years.
2) Post-2012 - compute has been doubling every 3.4 months https://t.co/DsFf0qpp0s pic.twitter.com/ILN5MRrWYH
— OpenAI (@OpenAI) November 7, 2019
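To put the two doubling rates side by side, here's a back-of-the-envelope sketch (my own illustrative numbers, not OpenAI's fitted trend lines):

```python
# Compare compute growth under the two eras OpenAI identifies:
# Moore's Law (doubling every ~24 months) vs. post-2012 (every 3.4 months).
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Total multiplicative growth over a window, given a doubling period."""
    return 2 ** (months / doubling_period_months)

window = 6 * 12  # a six-year window, roughly 2012-2018
print(f"Moore's Law era (24-month doubling): {growth_factor(window, 24):,.0f}x")
print(f"Post-2012 era (3.4-month doubling):  {growth_factor(window, 3.4):,.0f}x")

# Brockman's framing: 18 months of the new trend vs. a decade of the old one.
print(f"{growth_factor(18, 3.4):.0f}x in 18 months vs. "
      f"{growth_factor(120, 24):.0f}x in a Moore's Law decade")
```

At the post-2012 rate, 18 months buys roughly a 39x increase, about what a full decade bought under Moore's Law (32x), which is exactly Brockman's point.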
`numpy.split(ary, indices_or_sections, axis=0)` vs. `torch.split(tensor, split_size_or_sections, dim=0)`; indices or sizes for each chunk, fight! (more seriously though still finding it hard to keep track of which var is tensor/array and remembering the random api differences)
— Andrej Karpathy (@karpathy) November 6, 2019
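For anyone who hasn't been bitten by this: in NumPy the second argument is a section count or a list of cut indices, while in PyTorch it is a chunk size or a list of chunk sizes. A quick side-by-side:

```python
import numpy as np
import torch

a = np.arange(6)
t = torch.arange(6)

# numpy.split: an int is a number of equal sections...
np.split(a, 3)             # [array([0, 1]), array([2, 3]), array([4, 5])]
# ...and a list gives the indices at which to cut.
np.split(a, [2, 4])        # same three chunks, specified by cut points

# torch.split: an int is the size of each chunk...
torch.split(t, 2)          # (tensor([0, 1]), tensor([2, 3]), tensor([4, 5]))
# ...and a list gives the size of each chunk.
torch.split(t, [2, 2, 2])  # same three chunks, specified by sizes
```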
Chrome extension request: Highlighting a paragraph of text reports the GPT-2 log prob of that text. Maybe it's not worth reading? :) Or maybe it just highlights the areas in walls of text that have a low log prob to help manage your attention.
— Andrej Karpathy (@karpathy) November 6, 2019
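The scoring half of that extension is straightforward to sketch. Here's a minimal version using the Hugging Face transformers library (my tooling choice, not something the tweet specifies), returning the average per-token log probability GPT-2 assigns to a passage; lower means less predictable text:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_log_prob(text: str) -> float:
    """Average per-token log probability GPT-2 assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy,
        # i.e. the negative mean log prob of each token given its prefix.
        loss = model(ids, labels=ids).loss
    return -loss.item()

print(mean_log_prob("The cat sat on the mat."))
```

A real extension would then map these scores onto highlighted spans of the page, but the ranking signal is just this number.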
If you like to think deeply about building and measuring intelligence, I highly recommend this read. It's a cogent review of intelligence measurement, its challenges, past approaches, and their limitations. @fchollet also takes the brave step of proposing a path forward. https://t.co/dKOiiBYltN
— Brandon Rohrer (@_brohrer_) November 6, 2019
As a reviewer, it is important to write an encouraging note at the end of your review, even if you choose to reject a paper. Everyone has spent a few months of their lives producing the 8 pages of work you reviewed. https://t.co/fbgXW7qwWm
— hardmaru (@hardmaru) November 6, 2019
Misleading hype isn't always even about directly selling a product. I still think about this example of Google describing an open-source library for recognizing gene mutations in a way that was so misleading that a Johns Hopkins professor wrote an article rebutting it. pic.twitter.com/mhoYhKgGWW
— Rachel Thomas (@math_rachel) November 5, 2019
80% of building a modern machine learning team is sending each other links to various cloud ETL solutions on Slack and saying, "Curious about your thoughts on this."
— Vicki Boykis (@vboykis) November 5, 2019
More generally, algorithmic accountability cannot be automated. https://t.co/WD1QVqe1oe
— Cathy O'Neil (@mathbabedotorg) November 5, 2019
A p-value is NOT the probability that the null hypothesis is correct. Not even close. Persistent and pernicious misunderstanding.
— Data Science Fact (@DataSciFact) November 5, 2019
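A quick Bayes'-rule calculation makes the distinction concrete. With illustrative numbers of my own choosing (10% of tested hypotheses are real effects, 80% power, α = 0.05), over a third of "significant" results come from a true null, even though each one has p < 0.05:

```python
# Why a p-value is not P(H0 | data): a toy Bayes calculation.
# All numbers below are illustrative assumptions, not from the tweet.
prior_h1 = 0.10   # fraction of tested hypotheses that are real effects
power = 0.80      # P(significant | H1)
alpha = 0.05      # P(significant | H0)

p_sig = alpha * (1 - prior_h1) + power * prior_h1
p_h0_given_sig = alpha * (1 - prior_h1) / p_sig
print(f"P(H0 | p < 0.05) = {p_h0_given_sig:.2f}")  # ~0.36, not 0.05
```

The p-value conditions on the null being true; P(H0 | data) additionally depends on the prior and the power, which is why the two can diverge so badly.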