A refreshingly contrarian perspective on #AI. @garymarcus argues that we need to start over with first principles like time, space, and causality if we want AI we can trust. https://t.co/k2hnkTnlGn
— Tim O'Reilly (@timoreilly) September 7, 2019
Highlights from #WeCNLP so far:
1. Large pretrained language models work.
2. Knowledge graphs are back in vogue.
3. Dialogue systems are all the rage.
— Chip Huyen (@chipro) September 6, 2019
Disinformation is not just about fancy tech like deepfakes. It's about:
- a polarized society that can't agree on sources of truth
- misleading context
- human propensity to trust what confirms our pre-existing beliefs
- incendiary content more popular than the fact-checked version 1/n
— Rachel Thomas (@math_rachel) September 6, 2019
"What I learned from being a startup's first Data Engineer" | by @SionekAndre | READ π https://t.co/P8OaJyCosd
β Kaggle (@kaggle) September 6, 2019
Facebook AI rolls out the DeepFake Detection Challenge in partnership with academic labs and other industry leaders. https://t.co/w1CTBRcVA0
— Yann LeCun (@ylecun) September 5, 2019
"A story of my first gold medal in one Kaggle competition: things done and lessons learned" by Andrew Lukyanenko via @TDataScience | READ π https://t.co/eLNEVuzjh0
β Kaggle (@kaggle) September 4, 2019
"Enthusiasm is common. Endurance is rare." https://t.co/hvjfvWxbaN
— hardmaru (@hardmaru) September 4, 2019
They trained an agent to play SimCity. The results look kind of interesting! (cc @togelius) https://t.co/KVU1JjtUFu
— hardmaru (@hardmaru) September 2, 2019
Christ, @ryxcommar took the incredible thread following my gripe about sklearn's surprising defaults for LogisticRegression (L2 regularization, C=1.0) and turned it into a marvelous blog post, fleshing out precisely why this trend (in many ML libraries) is dangerous. https://t.co/CNbxWfoe6N https://t.co/IAmuBZ9Uov
— Zachary Lipton (@zacharylipton) September 1, 2019
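The gripe above is that sklearn's LogisticRegression silently applies L2 regularization (C=1.0) by default, which changes the coefficients you get. A minimal pure-Python sketch of why that default matters — the `train_logreg` helper and the toy dataset below are illustrative, not sklearn's actual implementation:

```python
import math

def train_logreg(X, y, lam, lr=0.5, steps=2000):
    """Toy one-feature logistic regression via gradient descent.

    Minimizes mean log-loss + 0.5 * lam * w**2, so lam > 0 plays the
    role of sklearn's default L2 penalty (roughly lam ~ 1 / C).
    """
    w = 0.0
    for _ in range(steps):
        grad = 0.0
        for xi, yi in zip(X, y):  # labels yi in {-1, +1}
            # d/dw of log(1 + exp(-yi * w * xi))
            grad += -yi * xi / (1.0 + math.exp(yi * w * xi))
        grad = grad / len(X) + lam * w  # gradient of the L2 penalty term
        w -= lr * grad
    return w

# Linearly separable toy data: the unpenalized weight grows without bound,
# while the penalized one settles at a modest value.
X = [-2.0, -1.0, 1.0, 2.0]
y = [-1, -1, 1, 1]

w_l2 = train_logreg(X, y, lam=1.0)   # analogous to sklearn's C=1.0 default
w_raw = train_logreg(X, y, lam=0.0)  # no regularization
print(w_l2, w_raw)                   # penalized weight is far smaller
```

The point of the rant: if you expected "plain" logistic regression, the default penalty quietly shrinks your coefficients, which matters whenever you interpret them.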
If you enjoy my ranting, you may enjoy my ranted article on this topic. "Adding ever more engines may help get the plane off the ground... but that's not the design planes are destined for." https://t.co/aKBLuNpwN0
— Smerity (@Smerity) September 1, 2019
I don't want to wake up half a decade from now and realize that the many fine brains of our field have been dedicated only to finetuning models that are near impossible to reproduce, and that we've inevitably concentrated the core progress of AI in just a few silos / domains.
— Smerity (@Smerity) September 1, 2019
Whilst pretrained weights can be an advantage, they also tie you to someone else's whims. Did they train on a dataset that fits your task? Was your task ever intended? Did their setup have idiosyncrasies that might bite you? Will you hit a finetuning progress dead end?
— Smerity (@Smerity) September 1, 2019