If your product and engineering organizations struggle with red tape and legal roadblocks, machine learning success probably isn't in your future.
— Peter Skomoroch (@peteskomoroch) September 19, 2019
From the abstract, it looks like I'll love this paper, "Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science": https://t.co/HAL30MVXlQ (via post from @StatModeling)
— John Myles White (@johnmyleswhite) September 18, 2019
Very disappointing to see a paper describing Youtube's multi-objective #recsys, but failing to acknowledge there should be ranking objectives besides pure user engagement or more important biases than "positional" bias https://t.co/QVrLGZjSey #recsys2019
— Xavier Amatriain (@xamat) September 18, 2019
Supervised learning makes only 1 assumption (iid), & despite being fairly reasonable (among assumptions) it's always broken, often w. disastrous consequences. How can we develop causality-based intelligent systems when everything we want from CI requires *tons more assumptions*?
— Zachary Lipton (@zacharylipton) September 16, 2019
Schmidhuber's opinion piece about citations in 2011: "They are comparable to the less-than-worthless collateralized debt obligations that drove the recent financial bubble, and, unlike concrete goods and real exports, they are easy to print and inflate." https://t.co/LweILXvnNQ
— hardmaru (@hardmaru) September 12, 2019
If you are failing at "regular" ethics, you won't be able to embody AI ethics either. The two are not separate. https://t.co/FI085hle0P
— Rachel Thomas (@math_rachel) September 10, 2019
Impostor syndrome is so pervasive because every single one of us got lucky somewhere along the way. The myth of meritocracy is bullshit: talent is equally distributed but opportunity is not. So channel your impostor syndrome into opening bigger doors than you had opened for you.
— Angela Bassa (@AngeBassa) September 10, 2019
While I love quantitative research, qualitative research so often gets insights my statistical models never could. https://t.co/ywWL1w7Po1
— Chris Albon (@chrisalbon) September 9, 2019
strong disagree. *typing* is such a small fraction of what I do as a programmer. (except when I'm doing a livecoding stunt.) I could type half as fast and probably be just as productive.
— Joel Grus (@joelgrus) September 5, 2019
As a new PI, one of my biggest fears is that one day we will make some kind of embarrassing mistake in our research. It's inevitable. That is why we are committed to open data, code, preprints, and where possible, pre-registration. So the community can help us catch errors early.
— Micah Allen (@micahgallen) September 1, 2019
I see many people spend hours of compute training BERT to get worse results than they would get at a fraction of the cost using ULMFiT or even naive Bayes. Re IMDb: ULMFiT has 95% accuracy and 55M parameters, fewer when SentencePiece & QRNN are used.
— Piotr Czapla (@PiotrCzapla) September 1, 2019
This is 100% true. However, because interviewers (wrongly) value having algorithms / equations memorized, I created my machine learning flashcards. And literally in every interview I did, I looked better because I could write out an equation from memory. https://t.co/yTRFaHBEzA
— Chris Albon (@chrisalbon) August 31, 2019