
Saturday, April 03, 2021

Data Labeling Errors

Make sure the data and metadata are correct, in context.

MIT Study Finds 'Systematic' Labeling Errors in Popular AI Benchmark Datasets

in VentureBeat, Kyle Wiggers, March 28, 2021

An analysis by Massachusetts Institute of Technology (MIT) researchers demonstrated the susceptibility of popular open-source artificial intelligence benchmark datasets to labeling errors. The team investigated the test sets of 10 popular datasets, including the ImageNet database, and found an average labeling error rate of 3.4% across them. The MIT investigators calculated that the Google-maintained QuickDraw database of 50 million drawings had the most errors in its test set, at 10.12% of all labels. The researchers said these mislabelings make benchmark results from the test sets unstable. The authors wrote, "Traditionally, machine learning practitioners choose which model to deploy based on test accuracy—our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets."
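The core idea behind finding such errors is to use a model's out-of-sample predictions to flag examples whose given label looks implausible. Below is a minimal Python sketch of that idea, assuming scikit-learn, a toy digits dataset, and illustrative thresholds (0.5 and 0.9) that are not taken from the study:

```python
# Minimal sketch of label-error detection: score every example with a model
# that never saw it during training, then flag examples whose given label
# receives low probability while another class is predicted confidently.
# Dataset, model, and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)

# Out-of-sample class probabilities via 5-fold cross-validation.
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=2000), X, y,
    cv=5, method="predict_proba",
)

# Probability the model assigns to each example's *given* label.
self_confidence = pred_probs[np.arange(len(y)), y]

# Flag likely mislabeled examples: the given label is implausible
# while some other class is predicted with high confidence.
suspect = (self_confidence < 0.5) & (pred_probs.max(axis=1) > 0.9)
print(f"{suspect.sum()} of {len(y)} examples flagged for label review")
```

The researchers' own open-source cleanlab package implements the full confident-learning method behind the study; the sketch above is only a simplified stand-in for that approach.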

See also from MIT: https://labelerrors.com/
