The amount of talent and the success that a (boring) CRM company like Salesforce is able to amass is yet another good reminder of why *culture matters* (aka culture trumps strategy).
— Xavier (@xamat) December 12, 2018
The article also resonates with an old thread with @evolvingstuff about how ignorance is bliss at the beginning of a research endeavor. https://t.co/RFUR4mcfQp
— hardmaru (@hardmaru) December 12, 2018
The other half of my concern is that modifying the kernel from within the kernel (you guessed it) makes your code harder to reason about. For example, you could imagine a situation like
— Joel Grus (@joelgrus) December 12, 2018
In [1]: import torch
In [2]: x = torch.tensor(1)
In [3]: %pip install torch==1.0.0
In [4]: y = x.sum()
8/13
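A minimal, self-contained sketch of the hazard (using a throwaway mylib module rather than torch, since the failure mode is generic): once a module is imported, replacing its files on disk, which is all an in-kernel install ultimately does, leaves the stale version live in memory until the kernel restarts.

# Sketch: an imported module does not pick up changes made on disk,
# which is why an in-kernel "%pip install" can leave memory and disk
# disagreeing about which version is actually running.
import pathlib
import sys
import tempfile

pkg_dir = pathlib.Path(tempfile.mkdtemp())
(pkg_dir / "mylib.py").write_text("VERSION = '1.0'\n")

sys.path.insert(0, str(pkg_dir))
import mylib
print(mylib.VERSION)                     # '1.0'

# Simulate "%pip install mylib==2.0" by swapping the files on disk.
(pkg_dir / "mylib.py").write_text("VERSION = '2.0'\n")

print(mylib.VERSION)                     # still '1.0': the loaded module is stale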
I was told that "yes, this is an antipattern if you're running notebooks locally, but it's good if you're running them remotely". The challenge is that, as I understand it, *notebooks themselves don't make this distinction*. 6/13
— Joel Grus (@joelgrus) December 12, 2018
So I totally agree that this is a real problem and worth solving. But my finely-honed instincts tell me that customizing your kernel from within the running kernel is *dangerous* and *error-prone*. (Yes, I know you can already do this with bangs.) 4/13
— Joel Grus (@joelgrus) December 12, 2018
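The bang/magic distinction Grus alludes to is worth making concrete. As I understand it, "!pip install ..." shells out to whichever pip the shell finds first on PATH, which may belong to a different environment than the kernel, while the %pip magic targets the kernel's own interpreter, roughly as sketched below (the package name is a placeholder, not a real dependency):

# Rough equivalent of "%pip install somepackage": pin the install to the
# interpreter the kernel itself is running on, rather than whatever "pip"
# means to the surrounding shell.
import subprocess
import sys

cmd = [sys.executable, "-m", "pip", "install", "somepackage"]  # placeholder name
print("would run:", " ".join(cmd))
# subprocess.check_call(cmd)  # uncomment to actually install into the kernel's env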
To ask the right questions, it's useful to keep some distance from the field. If you only read the latest NeurIPS papers that everyone reads, you might get locked into a frame of mind and work on exactly the same things as everyone else. Excellent read: https://t.co/9mEbMeO05Q pic.twitter.com/nIbNXo8vq3
— Denny Britz (@dennybritz) December 12, 2018
What would a theory of data analysis look like? Some thoughts.... https://t.co/RZgfq2UFM4
— Roger D. Peng (@rdpeng) December 11, 2018
The tech industry's obsession with long hours isn't just harmful for people's health & relationships; it's contrary to the research on productivity. https://t.co/KRjIk5aKi9
— Rachel Thomas (@math_rachel) December 11, 2018
Yes, the comment was made in the context of the p-value vs confidence-interval discussion (as if they were different methods, rather than different inferential summaries within the same method). Ideally we'd throw multiple models at a problem and assess them from multiple perspectives.
— Richard D. Morey (@richarddmorey) December 11, 2018
I actually mostly agree with that claim because I think it puts the burden on acknowledging that most data analysis is just subjectively driven data compression -- all you generally get is the result of projecting your data onto some model space.
— John Myles White (@johnmyleswhite) December 11, 2018
If you go further with this logic, I think you end up concluding that you should always focus on the full, raw dataset -- since you can only compress it effectively if you either (a) have a correct model or (b) don't mind loss. https://t.co/XzqSBIgPc9
— John Myles White (@johnmyleswhite) December 11, 2018
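White's framing has a concrete linear-algebra reading: fitting by least squares is an orthogonal projection of the data vector onto the model's column space, and the residual is exactly what the compression discards. A toy sketch with synthetic data (nothing here is from the thread):

# Toy illustration of "projection onto a model space": least squares
# projects y onto the column space of the design matrix X, and the
# residual is the information the model throws away.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one feature
y = 2.0 + 3.0 * X[:, 1] + rng.normal(size=n)           # signal + noise

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # coefficients of the projection
y_hat = X @ beta                              # the compressed representation of y
residual = y - y_hat                          # what the compression discards

print(np.linalg.norm(residual) / np.linalg.norm(y))  # nonzero: the projection is lossy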
Glad to see @_MiguelHernan carrying the torch of the C-word to the lion's den of #NeurIPS2018. But if I were asked: Do all students of #causalinference agree with this hierarchy? I would say: NO! Lumping all of CI into one level creates confusion and worse. See https://t.co/rXgGXv58LL https://t.co/IN0HTEKCKS
— Judea Pearl (@yudapearl) December 9, 2018