by mertrory on 2018-06-14 (UTC).

The different goals people have in mind when they say "to interpret a model": 1) What can I change to alter the outcome? 2) What part of this data point influenced this decision (most)? 3) Which historical (training) data points influenced this decision? 4) Show me typical failure modes.

— Mert Sabuncu (@mertrory) June 14, 2018
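
To make goal 2 concrete, here is a minimal sketch of gradient-based feature attribution: the magnitude of the gradient of the chosen class logit with respect to each input feature serves as a crude importance score. The PyTorch classifier `model`, the input `x`, and `target_class` are hypothetical placeholders, not anything referenced in the thread.

    # Goal 2, sketched: which parts of this input influenced the decision most?
    # `model` is assumed to be a trained PyTorch classifier and `x` a single
    # input tensor; both are illustrative placeholders.
    import torch

    def input_gradient_saliency(model, x, target_class):
        """Return |d logit[target_class] / d x| as a per-feature importance score."""
        model.eval()
        x = x.clone().detach().requires_grad_(True)
        logits = model(x.unsqueeze(0))        # add a batch dimension
        logits[0, target_class].backward()    # gradient of the chosen class logit
        return x.grad.abs()                   # larger values = more influential inputs
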

5) What (about the present data point and/or historical data) caused this failure mode? 6) What else can I do to avoid (such) errors? 7) How would the decision change if the model had this additional piece of information? 8) How confident is the model in its decision?

— Mert Sabuncu (@mertrory) June 14, 2018
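
Goal 8 is the easiest to sketch: one rough proxy for a classifier's confidence is the softmax probability of its top prediction, or the entropy of the full predictive distribution. The `logits` below are an assumed model output, purely for illustration.

    # Goal 8, sketched: how confident is the model in its decision?
    import numpy as np

    def softmax(logits):
        z = logits - logits.max()             # subtract max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    def confidence_report(logits):
        p = softmax(np.asarray(logits, dtype=float))
        entropy = -(p * np.log(p + 1e-12)).sum()   # 0 = fully confident
        return {"top_prob": float(p.max()), "entropy": float(entropy)}

    # Example with made-up logits for a 3-class problem.
    print(confidence_report([2.0, 0.5, -1.0]))

Note that raw softmax confidence is often poorly calibrated, which is part of what goals 9 and 10 in the next tweet are getting at.
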

9) Where does the model's confidence come from (historical data or inductive bias)? 10) How can I make it less/more confident in these cases? 11) How would the model's decision change if we collected more training data? 12) What happens when the nature of the data changes (distribution shift)?

— Mert Sabuncu (@mertrory) June 14, 2018
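
Goal 12 can at least be monitored for: a simple, assumption-laden check is to compare each feature's distribution in incoming data against the training data with a two-sample test. `train_X` and `live_X` below are hypothetical (n_samples, n_features) arrays, not data from the thread.

    # Goal 12, sketched: has the nature of the data changed (distribution shift)?
    import numpy as np
    from scipy.stats import ks_2samp

    def shifted_features(train_X, live_X, alpha=0.01):
        """Flag features whose live distribution differs from the training one."""
        train_X, live_X = np.asarray(train_X), np.asarray(live_X)
        flagged = []
        for j in range(train_X.shape[1]):
            statistic, p_value = ks_2samp(train_X[:, j], live_X[:, j])
            if p_value < alpha:               # reject "same distribution" for feature j
                flagged.append(j)
        return flagged
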

I compiled this list in about 10 minutes, so it is probably redundant to some extent and most likely non-exhaustive. The bottom line is: let's avoid clichés such as "sparse/linear/etc. models are more interpretable" and instead make precise statements about our goals.

— Mert Sabuncu (@mertrory) June 14, 2018
