I have been following this for some time. Reproducing results is often context dependent. It is a serious problem, especially in the social sciences, but elsewhere too. One more reason to consider context, and related biases, carefully. Of course, if what drives a successfully reproduced result is confirmation bias, that's another problem. I have seen both. And by the way, having worked with DARPA, I disagree with calling it a 'mad-science' wing; they are very serious and rational there.
DARPA Wants to Solve Science's Replication Crisis With Robots
By Wired, reprinted in CACM
At the same instant that a significant chunk of policymakers seem to disbelieve the science behind global warming, a bunch of scientists come along and point out that vast swaths of the social sciences don't stand up to scrutiny. They don't replicate—which is to say, if someone else does the same experiment, they get different (often contradictory) results.
Researchers are trying to fix the problem. They're encouraging more sharing of data sets and urging each other to preregister their hypotheses. The idea is to cut down on the statistical shenanigans and memory-holing of negative results that got the field into this mess.
And self-appointed teams are even going back through old work, manually, to see what holds up and what doesn't. That means doing the same experiment again, or trying to expand it to see if the effect generalizes. To the Defense Advanced Research Projects Agency, the Pentagon's mad-science wing, the problem demands an obvious solution: Robots. ..."
Sunday, February 17, 2019