Ratings: 5
Average rating: 4.2
Scientific progress depends on good research, and good research needs good statistics. But statistical analysis is tricky to get right, even for the best and brightest of us. You'd be surprised how many scientists are doing it wrong.
Statistics Done Wrong is a pithy, essential guide to statistical blunders in modern science that will show you how to keep your research blunder-free. You'll examine embarrassing errors and omissions in recent research, learn about the misconceptions and scientific politics that allow these mistakes to happen, and begin your quest to reform the way you and your peers do statistics.
You'll find advice on:
- Asking the right question, designing the right experiment, choosing the right statistical analysis, and sticking to the plan
- How to think about p values, significance, insignificance, confidence intervals, and regression
- Choosing the right sample size and avoiding false positives (see the sketch after this list)
- Reporting your analysis and publishing your data and source code
- Procedures to follow, precautions to take, and analytical software that can help
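To make the sample-size item above concrete, here is a minimal Python sketch. It is not from the book, and the effect size, alpha, and power target are purely illustrative assumptions; it simply simulates two-sample t-tests to find roughly how many subjects per group are needed for 80% power.

```python
# Rough power/sample-size sketch (illustrative numbers, not from the book):
# simulate two-group experiments and find the smallest n per group that
# detects a 0.4-SD effect with ~80% power at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect_size = 0.4    # assumed true difference, in standard-deviation units
alpha = 0.05
target_power = 0.80
n_sims = 2000        # simulated experiments per candidate sample size

def estimated_power(n_per_group: int) -> float:
    """Fraction of simulated experiments that reach p < alpha."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        hits += p < alpha
    return hits / n_sims

# Walk the sample size upward until the simulated power clears the target.
for n in range(10, 301, 10):
    power = estimated_power(n)
    if power >= target_power:
        print(f"~{n} per group gives about {power:.0%} power")
        break
```

A simulation is used instead of a closed-form power formula so the same skeleton can be reused for messier designs; swap in your own effect size and threshold and the structure stays the same.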
Scientists: Read this concise, powerful guide to help you produce statistically sound research. Statisticians: Give this book to everyone you know.
The first step toward statistics done right is Statistics Done Wrong.
Reviews with the most likes.
A brief, lovely, vaguely horrifying overview of how endemic “bad statistics” is. This is mostly pitched to the statistics practitioner - and especially one coming from academia. In other words, this would've been catnip to me like ~5 years ago. But by now, having already cleansed myself in the work of Data Colada, Gelman, and Ioannidis, I found much of this to be old hat.
Yes, people over-rely on and misinterpret p-values. Yes, people “double-dip” and torture/exhaust their data, hunt for statistically significant results (green jelly beans!) with multiple comparisons, put negative or non-results in the “filing cabinet” and suffer from the “winner's curse” (where randomly large results are more likely to hit the p-value bingo and thus get reported, leading to an upward bias). In fact, EVERYTHING leads to an upward bias in results - as Ioannidis said, most research findings are probably false. Or, at least, not as big and positive as we so believe.
I thought this would have a bit more practical stuff, a bit more Bayes (BAYES), and a bit of a wider scope. The last sections, on the perverse incentive structures of academia (pre-analysis plans that no one really signs up for; journals that reward “winner's cursey” BIG, POSITIVE results; p-hacking), were definitely interesting and got my fist shaking. But I'm not in that world anymore, and so I'm kinda like, “oh well, dudes”. I mean, there is a LOT wrong with academia's incentive structures, and, yes, they definitely corrupt the pure Science, but what about practitioners in industry? Oh well.