
Tuesday, March 17, 2020

Reproducibility in Machine Learning

In The Gradient ...

Reproducibility and Machine Learning

Peer review has been an integral part of scientific research for more than 300 years. But even before peer review was introduced, reproducibility was a primary component of the scientific method. One of the first reproducible experiments was presented by Jabir ibn Hayyan in 800 CE. In the past few decades, many domains have encountered high-profile cases of non-reproducible results. The American Psychological Association has struggled with authors failing to make data available. A 2011 study found that only 6% of medical studies could be fully reproduced. In 2016, a survey of researchers from many disciplines found that most had failed to reproduce one of their own previous papers. Now, we hear warnings that Artificial Intelligence (AI) and Machine Learning (ML) face their own reproducibility crises.

This leads us to ask: is it true? It would seem hard to believe, as ML permeates every smart device and intervenes ever more in our daily lives. From helpful hints on how to act like a polite human over email, to Elon Musk’s promise of self-driving cars next year, it seems like machine learning is indeed reproducible.

How reproducible is the latest ML research, and can we begin to quantify what impacts its reproducibility? This question served as motivation for my NeurIPS 2019 paper. Based on a combination of masochism and stubbornness, over the past eight years I have attempted to implement various ML algorithms from scratch. This has resulted in an ML library called JSAT. My investigation into reproducible ML has also relied on personal notes and records hosted on Mendeley and GitHub. With these data, and clearly no instinct for preserving my own sanity, I set out to quantify and verify reproducibility! As I soon learned, I would be engaging in meta-science, the study of science itself. ... "
