"Meta-Learning: Learning to Learn Fast" https://t.co/mPR6qbAjkd an incredible blog post by @lilianweng
— mat kelcey (@mat_kelcey) December 9, 2018
"Meta-Learning: Learning to Learn Fast" https://t.co/mPR6qbAjkd an incredible blog post by @lilianweng
— mat kelcey (@mat_kelcey) December 9, 2018
As much as machine learning researchers like to imagine the next great medical solution will come from a fantastical neural network architecture or an N-dimensional scan, much of the progress will likely come from understanding the prevalence of faulty and/or ignored history. https://t.co/Ean0K2gu0b
— Smerity (@Smerity) December 9, 2018
Handy summary of seven things @ylecun and I agree about [from my slides from NYU debate (https://t.co/4ZAN3aeZ6g), agreed on in advance over email]: pic.twitter.com/drjBOQoy1G
— Gary Marcus (@GaryMarcus) December 9, 2018
I am quite impressed with the way this piece talks about sample sizes, how/when results are generalizable, and just overall why some things are SO COMPLICATED. https://t.co/vbYAemOfqE
— Julia Silge (@juliasilge) December 8, 2018
“Fairness is difficult to define, but unfairness is much easier to spot,” says @timnitGebru. @mmitchell_ai adds that we’re overfocused on defining fairness for #ML models & need to include data, team, and colloquial definitions #NeurIPS #NeurIPS2018 pic.twitter.com/IpbUj3z1ZO
— Mariya Yao (@thinkmariya) December 8, 2018
This is one reason that I recommend Twitter as a way to learn data science. You can see what terminology people are using, and look up what you don't know. https://t.co/20rmG4aaRU https://t.co/YtLKz85ulf
— Data Science Renee (@BecomingDataSci) December 8, 2018
Engineers tend to respond to problems with "what can I make/build?" Sometimes the answer is to NOT make/build. #QueerInAI #NeurIPS2018
— Rachel Thomas (@math_rachel) December 8, 2018
"When the implication is not to design (technology)", link to PDF: https://t.co/SBeh3wWWST pic.twitter.com/yqyqbucdtc
I'm glad that reporters are asking Justin Trudeau about the #NeurIPS2018 visa issues & that this story is getting media coverage. h/t @timnitGebru https://t.co/KYVFno2nub
— Rachel Thomas (@math_rachel) December 8, 2018
Hacking machine learning.
— Hilary Mason (@hmason) December 7, 2018
Years ago we used to have a custom of taking anyone who was assigned the project of writing an anti-porn classifier out for a very good scotch. It's a very difficult thing to do. https://t.co/BEyJh8Uas7
You don’t have a brand until someone else tells you what it means. Until then you just have a logo, a mark, a word, a personal vision of what you want your business to be. A reflection, not introspection, is what gives a brand shape and meaning.
— Jason Fried (@jasonfried) December 7, 2018
Trying to find reasonable size NLP datasets such as NER and POS for my students, and everything I've found is behind slow and/or broken registration systems. Why is the NLP world so closed and exclusive?
— Jeremy Howard (@jeremyphoward) December 7, 2018
I'm really not sure what to do for our course at this stage :( pic.twitter.com/X3e2VOKX87
This paper helped the research community escape the SVM winter (during which math elegance won at the expense of practical utility).
— Jeremy Howard (@jeremyphoward) December 7, 2018
Interesting to hear how open source and an accessible tutorial contributed to its success.
Just writing the paper is rarely enough to make impact https://t.co/2ItBa9MELQ