The best ideas rarely get accepted right away. https://t.co/JWus4ooG8w
— hardmaru (@hardmaru) November 29, 2018
I think Tukey was also right: "Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise." There's considerable tension between these two points of view, which needs to be managed
— michael_nielsen (@michael_nielsen) November 29, 2018
The specifics are spot-on but I disagree a bit overall. A simple approach solves a problem complex RL algorithms can’t. That’s interesting. Finding pragmatic solutions where current algorithms don’t work leads us to new insights. Not everything has to be model-free end-to-end RL. https://t.co/XATsd6aEnF
— Denny Britz (@dennybritz) November 28, 2018
Insightful critique by @AlexIrpan of the recent Montezuma's Revenge results from Uber AI. https://t.co/MNE6Y6bDH9 He brings up a number of important points regarding the determinism and resettability assumptions the algorithm makes.
— Arthur Juliani (@awjuliani) November 27, 2018
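The determinism and resettability point is concrete enough to sketch in code. Here is a minimal toy (the environment and all names are my assumptions for illustration, not the Uber AI method itself) showing why exploration that restores saved simulator states only works when the environment is deterministic and exposes its full state:

```python
# A hypothetical sketch of the "return to a saved state, then explore"
# pattern. It assumes (1) determinism: the same action from the same
# state always yields the same result, and (2) resettability: the
# simulator's full state can be copied out and restored at will.
import copy

class ToyDeterministicEnv:
    def __init__(self):
        self.pos = 0
    def step(self, action):
        self.pos += action           # fully deterministic transition
        return self.pos, self.pos    # (observation, reward)
    def get_state(self):
        return copy.deepcopy(self.pos)   # resettability: state out...
    def set_state(self, state):
        self.pos = state                 # ...and state back in

env = ToyDeterministicEnv()
archive = {0: env.get_state()}           # visited "cell" -> saved state
for _ in range(100):
    cell = max(archive)                  # pick a promising frontier cell
    env.set_state(archive[cell])         # teleport back: needs resettability
    obs, _ = env.step(1)                 # explore one step from there
    archive.setdefault(obs, env.get_state())
print(max(archive))                      # frontier reached by exploiting determinism
```

In a stochastic environment, or one where you can only reset to the initial state, the `set_state` call in the loop has no analogue, which is exactly the gap the critique points at.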
👍
— helena sarin (@glagolista) November 27, 2018
"I still find myself defaulting into a “bottom-up” approach sometimes, because it’s such a habit...
Using something when we don't understand the underlying details can feel uncomfortable, and I think the key is to just accept that discomfort and do it anyway." https://t.co/PsUw6tcUTy
What generally happens (in my experience) is internal hype and self-delusion.
— Yann LeCun (@ylecun) November 26, 2018
It's easy to convince yourself and your friends that your ideas are good, hence public exposure seems critical for taking the right direction.
— Soumith Chintala (@soumithchintala) November 26, 2018
MIRI: future research "non-disclosed by default"
OpenAI: "reduce our traditional publishing in the future"
Curious to see what happens.
Is RL an exercise in causal inference? Of course! Albeit a restricted one. By deploying interventions in training, RL allows us to infer consequences of those interventions, but ONLY those interventions. A causal model is needed to go BEYOND, i.e., to actions not used in training
— Judea Pearl (@yudapearl) November 23, 2018
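Pearl's distinction can be made concrete with a toy structural causal model. In the sketch below (the structural equation and all names are assumptions for illustration), an "RL-style" estimate covers only the interventions deployed during training, while knowing the causal model lets you predict an intervention that was never tried:

```python
# Hypothetical toy contrasting what RL learns from its own interventions
# with what a causal (structural) model adds.
import random

random.seed(0)

def outcome(action):
    """Assumed structural equation: reward = 2 * action + noise."""
    return 2 * action + random.gauss(0, 0.1)

# "RL": estimate E[reward | do(action)] only for actions used in training.
trained_actions = [0, 1]
q = {a: sum(outcome(a) for _ in range(1000)) / 1000 for a in trained_actions}
print(q)  # accurate for do(0) and do(1); says nothing about do(2)

# Causal model: the structural equation itself supports predictions
# for interventions beyond those deployed in training.
def scm_predict(action):
    return 2 * action  # noise-free structural prediction

print(scm_predict(2))  # ~4.0, an action the "RL" estimates cannot reach
```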
Proofs of Concept (POCs) often fail to get acceptance and traction. Instead, focus on demonstrating value & ROI (Return On Innovation) from small implementations with POV (Proof of Value)—this will inspire the cultural change needed for the larger implementations that will come. pic.twitter.com/WAUVx6wOfW
— Kirk Borne (@KirkDBorne) November 20, 2018
Pick your abstractions wisely. Everything else flows from them. Pick simple, clear, consistent ones. Ideally, settle on just one.
— François Chollet (@fchollet) November 19, 2018
Severe specialization stifles creative problem-solving.
— hardmaru (@hardmaru) November 19, 2018
Hypothesis: Technology is now moving so quickly that how fast it spreads is determined by how quickly we are willing to adopt it.
— Christopher Mims 🎆 (@mims) November 16, 2018
In other words, culture and social norms are the real governor on the rate of innovation. https://t.co/0NO4xkLI8S