by joelgrus on 2018-12-12 (UTC).

So I totally agree that this is a real problem and worth solving. But my finely-honed instincts tell me that customizing your kernel from within the running kernel is *dangerous* and *error-prone*. (Yes, I know you can already do this with bangs.) 4/13

— Joel Grus (@joelgrus) December 12, 2018
thought
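For readers who haven't seen the bang form: !pip shells out to whatever pip is first on the PATH, which may belong to a different Python than the one the kernel is running, while the %pip magic being proposed here (it later shipped in IPython 7.3) installs into the running kernel's own environment. A minimal sketch of the two forms as notebook cells, with requests as a stand-in package name:

# Bang form: runs pip in a subshell. The pip found on PATH may
# target a different interpreter than the one the kernel runs.
!pip install requests

# Magic form: installs into the environment of the running kernel.
%pip install requests

# Either way, modules already imported stay at their old version
# in memory until the kernel is restarted.
import sys
print(sys.executable)  # the interpreter the kernel is actually running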
by joelgrus on 2018-12-12 (UTC).

I was told that "yes, this is an antipattern if you're running notebooks locally, but it's good if you're running them remotely". The challenge is that (as I understand it), *notebooks themselves don't make this distinction*. 6/13

— Joel Grus (@joelgrus) December 12, 2018
thought
by joelgrus on 2018-12-12 (UTC).

The other half of my concern is that modifying the kernel from within the kernel (you guessed it) makes your code harder to reason about. For example, you could imagine a situation like

[1] import torch
[2] x = torch.tensor(1)
[3] %pip install torch==1.0.0
[4] y = x.sum()

8/13

— Joel Grus (@joelgrus) December 12, 2018
thought
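To spell out the hazard in that sequence: the %pip install in cell [3] changes what is on disk, but the torch imported in cell [1] is already loaded, so cell [4] still runs against the old in-memory code, while a later restart-and-run-all would silently execute a different version. A minimal sketch of how the two can diverge (version numbers are hypothetical; importlib.metadata requires Python 3.8+):

import importlib.metadata
import torch

# The version loaded in the kernel's memory, fixed at import time:
print(torch.__version__)                    # e.g. 1.1.0

# The version now installed on disk, i.e. what a restarted kernel
# would import; after the %pip install above these can disagree:
print(importlib.metadata.version("torch"))  # e.g. 1.0.0

# Until the kernel restarts, x.sum() and every other call keeps
# running against the in-memory version.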
