Statistical thought of the day: I don't embrace the Bayesian paradigm because it is without problems. I embrace it because (1) it solves the most problems and (2) null hypothesis significance testing, p-values, and type I error are beyond repair. Mixing Bayes+frequentist=mess.— Frank Harrell (@f2harrell) January 8, 2019
This is a great summary.— Jeremy Howard (@jeremyphoward) December 14, 2018
Most people only mention points 3&6 (which are kinda the same thing?), which are the only ones I'd consider controversial (since any model can be trivially adjusted to output a distribution, so it may not be a big win) https://t.co/Vobu4WEyyO
Good Part 5: Model checking as a core activity— Sean J. Taylor (@seanjtaylor) December 14, 2018
Good Bayesian analyses consider a wide range of models that vary in assumptions and flexibility in order to see how they affect substantive results. There are principled, practical procedures for doing this.
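One simple instance of this kind of sensitivity checking is refitting the same data under several priors and comparing the substantive conclusion. A minimal sketch below, using a conjugate Beta-Binomial model (a hypothetical example, not one from the thread) so the posterior is available in closed form:

```python
# Crude sensitivity check: refit the same Binomial data under several
# Beta priors and compare the posterior means. If the answer moves a lot,
# the prior assumption matters and deserves scrutiny.

def beta_binomial_posterior_mean(successes, trials, prior_a, prior_b):
    """Posterior mean of theta under Beta(prior_a, prior_b) prior
    after observing `successes` out of `trials` Binomial draws."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

data = (7, 10)  # 7 successes in 10 trials

priors = {
    "flat Beta(1, 1)": (1, 1),
    "skeptical Beta(10, 10)": (10, 10),
    "optimistic Beta(5, 1)": (5, 1),
}

for name, (a, b) in priors.items():
    mean = beta_binomial_posterior_mean(*data, a, b)
    print(f"{name}: posterior mean = {mean:.3f}")
```

Here the flat prior gives a posterior mean of 0.667 while the skeptical prior pulls it down to 0.567, so a conclusion like "theta is clearly above 0.6" would be flagged as prior-sensitive.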
Good Part 2: No need to derive estimators— Sean J. Taylor (@seanjtaylor) December 14, 2018
There is an increasing number of full-featured, high-quality tools that allow you to fit almost any model you can write down. Being able to treat model fitting as an abstraction is great for analytical productivity.
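The "model fitting as an abstraction" idea can be sketched in a few lines: you write down a log-posterior, and a generic sampler does the fitting. This is a toy Metropolis sampler, not any specific library's API; tools like Stan and PyMC industrialize the same separation of concerns.

```python
import math
import random

def metropolis(log_post, init, n_samples=5000, step=0.5, seed=0):
    """Generic random-walk Metropolis: works for any 1-D log-posterior.
    The sampler knows nothing about the model -- that's the abstraction."""
    rng = random.Random(seed)
    x, lp = init, log_post(init)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Example model (hypothetical data): Normal mean mu with known sigma=1
# and a Normal(0, 10) prior, given three observations.
data = [2.1, 1.9, 2.3]

def log_post(mu):
    log_prior = -mu * mu / (2 * 10.0 ** 2)
    log_like = sum(-(y - mu) ** 2 / 2 for y in data)
    return log_prior + log_like

draws = metropolis(log_post, init=0.0)
posterior_mean = sum(draws[1000:]) / len(draws[1000:])
print(posterior_mean)  # should land near the sample mean of 2.1
```

Swapping in a different model means rewriting only `log_post`; the fitting machinery is untouched. No estimator had to be derived for either model.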
A couple days ago another team asked me to speak about Bayesian data analysis. I decided that instead of doing a nuts/bolts of how to fit/use Bayesian models, I would describe "Bayesian analysis: The Good Parts". <potentially controversial thread>— Sean J. Taylor (@seanjtaylor) December 14, 2018