by zacharylipton on 2019-08-30 (UTC).

By default, logistic regression in scikit-learn runs with L2 regularization on, defaulting to the magic number C=1.0. How many millions of ML/stats/data-mining papers have been written by authors who didn't report (& honestly didn't think they were) using regularization?

— Zachary Lipton (@zacharylipton) August 30, 2019
misc research
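
A minimal sketch of the behavior the tweet describes, assuming a recent scikit-learn and made-up synthetic data (none of this is from the thread): the defaults penalty='l2', C=1.0 shrink the fitted coefficients relative to a nearly unpenalized fit.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Illustrative synthetic data (an assumption, not from the thread).
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # The library defaults, spelled out for emphasis: L2 penalty, C=1.0.
    default_fit = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)

    # C is the *inverse* regularization strength, so a huge C makes the
    # penalty negligible, i.e. an approximately unregularized fit.
    loose_fit = LogisticRegression(C=1e10, max_iter=1000).fit(X, y)

    print("default (C=1.0) coef norm:", np.linalg.norm(default_fit.coef_))
    print("C=1e10          coef norm:", np.linalg.norm(loose_fit.coef_))
    # The default's coefficients come out shrunk toward zero: the penalty
    # was on even though the caller never asked for it.

The gap between the two coefficient norms is the silent regularization the tweet is complaining about.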
by zacharylipton on 2019-08-31 (UTC).

Thanks to ML Twitter & the @scikit_learn team for a fascinating debate over default values & the leaks between modeling and software. Takeaways: 1) Maybe change the name to RegularizedLogisticRegression? 2) These problems plague all ML/DL libraries, esp. in deep learning, & we must take them seriously. (1/2) https://t.co/IAmuBZ9Uov

— Zachary Lipton (@zacharylipton) August 31, 2019
misc
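
On takeaway 2, the practical consequence for scikit-learn users: getting a plain maximum-likelihood fit requires opting out of the penalty explicitly. A hedged sketch; the exact spelling depends on the release you have installed.

    from sklearn.linear_model import LogisticRegression

    # Opting out of the default L2 penalty must be explicit.
    # scikit-learn >= 1.2 accepts penalty=None; older releases
    # used the string "none" instead.
    unpenalized = LogisticRegression(penalty=None, max_iter=1000)

Calling unpenalized.fit(X, y) then behaves like ordinary unregularized logistic regression, which is presumably what authors who never mentioned regularization believed they were running.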
by zacharylipton on 2019-09-01 (UTC).

Christ, @ryxcommar took the incredible thread following my gripe about sklearn's surprising defaults for LogisticRegression (L2 reg, C=1.0) and turned it into a marvelous blog post, fleshing out precisely why this trend (in many ML libraries) is dangerous.
→https://t.co/CNbxWfoe6N https://t.co/IAmuBZ9Uov

— Zachary Lipton (@zacharylipton) September 1, 2019
survey misc
