by zacharylipton on 2019-08-24 (UTC).

Arguably AI/ML research was held up pre-deep learning by the mistaken belief that empirical questions required foundational answers. Now, as AI researchers look beyond supervised learning (causality, robustness, distribution shift), the folly has turned converse. #feelthelearn

— Zachary Lipton (@zacharylipton) August 24, 2019
thought
by zacharylipton on 2019-08-24 (UTC).

Part 1 TLDR. For supervised learning, the "throw spaghetti at the wall" model works. And the preoccupation with using theoretically guaranteed methods was arguably misplaced. If anything, theory was more useful for insight, not for guarantees you'd ever actually use. (5/n)

— Zachary Lipton (@zacharylipton) August 24, 2019
thought
by zacharylipton on 2019-08-24 (UTC).

More ambitious problems (causality, robustness, extrapolation beyond support) don't provide the safety net (a validation set) that brute force requires (no after-the-fact guarantees). Suddenly, just as we've accepted empiricism, we find ourselves needing foundational theory. (6/n)

— Zachary Lipton (@zacharylipton) August 24, 2019
thought
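
To make the distinction in these tweets concrete, here is a minimal, hypothetical sketch (Python with scikit-learn; not from the thread): fit a handful of off-the-shelf models, keep whichever wins on a held-out validation set (the "safety net"), and then note that the winning score certifies nothing about data drawn from a shifted distribution.

# Minimal sketch: "spaghetti at the wall" model selection with a validation
# set as the safety net, and why that safety net disappears under shift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5),
    "forest": RandomForestClassifier(n_estimators=100),
}

# Throw everything at the wall; the validation score tells us what stuck.
scores = {name: model.fit(X_train, y_train).score(X_val, y_val)
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> picked", best)

# The catch: if deployment data comes from a shifted distribution, the
# validation score above is no after-the-fact guarantee about it.
X_shifted = X_val + np.random.normal(loc=2.0, scale=1.0, size=X_val.shape)
print("accuracy under a simulated covariate shift:",
      candidates[best].score(X_shifted, y_val))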
