Science requires enough information to reproduce a claimed result. But that does not mean a result cannot be consistently useful even if it has not been formally 'reproduced'. And what does 'reproducing' mean in the first place? A reproduction will never be exactly the same, so what has been changed or left out?
Notable is a reproducibility checklist from McGill University, which appears to be a useful list of things to consider, or at least a starting point for understanding the question. Below is a good article on the topic:
Artificial Intelligence Confronts a 'Reproducibility' Crisis from Wired
Machine-learning systems are black boxes even to the researchers that build them. That makes it hard for others to assess the results.
A few years ago, Joelle Pineau, a computer science professor at McGill, was helping her students design a new algorithm when they fell into a rut. Her lab studies reinforcement learning, a type of artificial intelligence that’s used, among other things, to help virtual characters (“half cheetah” and “ant” are popular) teach themselves how to move about in virtual worlds. It’s a prerequisite to building autonomous robots and cars. Pineau’s students hoped to improve on another lab’s system. But first they had to rebuild it, and their design, for reasons unknown, was falling short of its promised results. Until, that is, the students tried some “creative manipulations” that didn’t appear in the other lab’s paper. ...
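One concrete practice that reproducibility checklists commonly call for is reporting and controlling sources of randomness. A minimal sketch in plain Python (the `run_experiment` function is a hypothetical stand-in for a stochastic training run, not anything from the article or checklist) showing why a pinned, reported seed lets someone else re-run a result exactly:

```python
import random

def run_experiment(seed):
    """Hypothetical stand-in for a stochastic experiment.

    Using a local RNG seeded explicitly (rather than the global
    random module state) means the same seed always reproduces
    the same sequence of draws, hence the same result.
    """
    rng = random.Random(seed)
    # Stand-in for noisy training: average 1000 Gaussian samples.
    samples = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    return sum(samples) / len(samples)

# Same seed, same result -- the re-run matches exactly.
assert run_experiment(42) == run_experiment(42)
```

If the seed (or any of the "creative manipulations" above) goes unreported in the paper, a rebuilt system can fall short of the published numbers even when the described algorithm is implemented faithfully.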
Monday, September 16, 2019