Thoughtful piece. And increasingly important as we implement methods algorithmically, and sometimes non-transparently, deeper in the process. Will that make cognitive biases from humans less likely, or just embed them more deeply? A challenge for big data and AI.
Five Ways to Fix Statistics (Nature)
By Jeff Leek, Blakeley B. McShane, Andrew Gelman, David Colquhoun, Michèle B. Nuijten & Steven N. Goodman
As debate rumbles on about how and how much poor statistics is to blame for poor reproducibility, Nature asked influential statisticians to recommend one change to improve science. The common theme? The problem is not our maths, but ourselves.
To use statistics well, researchers must study how scientists analyse and interpret data and then apply that information to prevent cognitive mistakes.
In the past couple of decades, many fields have shifted from data sets with a dozen measurements to data sets with millions. Methods that were developed for a world with sparse and hard-to-collect information have been jury-rigged to handle bigger, more-diverse and more-complex data sets. No wonder the literature is now full of papers that use outdated statistics, misapply statistical tests and misinterpret results. The application of P values to determine whether an analysis is interesting is just one of the most visible of many shortcomings. ...
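That last point about P values and scale is easy to demonstrate. Below is a minimal sketch of my own, not anything from the article, using NumPy and SciPy: with a million observations per group, a two-sample t-test will stamp a practically negligible mean difference as "significant", so a small P value alone cannot tell you whether an analysis is interesting.

```python
# A minimal, hypothetical sketch (not from the article) of the shortcoming
# named above: at modern sample sizes, a tiny P value says almost nothing
# about whether an effect is large enough to matter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000  # one million observations per group

# Two groups whose true means differ by a negligible 0.005 standard deviations.
a = rng.normal(loc=0.000, scale=1.0, size=n)
b = rng.normal(loc=0.005, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(a, b)
print(f"P value: {p_value:.2g}")                      # typically far below 0.05
print(f"mean difference: {b.mean() - a.mean():.4f}")  # ~0.005, practically nil
```

The test is behaving exactly as designed; it is the habit of reading "P < 0.05" as "interesting" that breaks down once data sets grow this large.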
Saturday, December 02, 2017