A brief, lovely, vaguely horrifying overview of how endemic “bad statistics” is. This is mostly pitched to the statistics practitioner - and especially one coming from academia. In other words, this would've been catnip to me like ~5 years ago. But, for now, having already cleansed myself in the work of Data Colada, Gelman and Ioannidis, much of this was old hat.
Yes, people over-rely on and misinterpret p-values. Yes, people “double-dip” and torture/exhaust their data, hunt for statistically significant results (green jelly beans!) with multiple comparisons, put negative or non-results in the “filing cabinet” and suffer from the “winner's curse” (where randomly large results are more likely to hit the p-value bingo and thus get reported, leading to an upward bias). In fact, EVERYTHING leads to an upward bias in results - as Ioannidis said, most research findings are probably false. Or, at least, not as big and positive as we so believe.
I thought this would have a bit more practical stuff, a bit more Bayes (BAYES), and a bit of a wider scope. The last sections, on the perverse incentive structures of academia (pre-analysis plans that no one really signs up for; journals that reward “winner's cursey” BIG, POSITIVE results; p-hacking), were definitely interesting and got my fist shaking. But I'm not in that world anymore, and so I'm kinda like, “oh well, dudes”. I mean, there is a LOT wrong with academia's incentive structures, and, yes, they definitely corrupt the pure Science, but what about practitioners in industry? Oh well.