Regulation of algorithms: an ill-behaving algorithm helped bring down the Dutch Tax Authority. An unusual event, raising questions of fairness and impact.
The Dutch Tax Authority Was Felled by AI—What Comes Next? European regulation hopes to rein in ill-behaving algorithms, by RAHUL RAO in IEEE Spectrum
Until recently, it wasn’t possible to say that AI had a hand in forcing a government to resign. But that’s precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.
When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority’s workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.
In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.
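To make the failure mode concrete, here is a minimal, entirely hypothetical sketch of a flag-then-review pipeline of the kind the article describes. None of the names, features, or thresholds come from the Dutch system; the point is that when reviewers approve whatever the model flags, the human review stage adds nothing and the model's false positives become final decisions.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: int
    risk_score: float  # produced upstream by the self-learning model

RISK_THRESHOLD = 0.8  # hypothetical cutoff for "high risk"

def model_flags(claim: Claim) -> bool:
    """Stage 1: the algorithm flags claims whose score exceeds the cutoff."""
    return claim.risk_score >= RISK_THRESHOLD

def rubber_stamp_review(flagged: bool) -> bool:
    """Stage 2: a harried reviewer who simply confirms every flag.

    This is the degenerate case described in the article: human
    review that never overturns the model."""
    return flagged

def decide(claim: Claim) -> str:
    flagged = model_flags(claim)
    confirmed = rubber_stamp_review(flagged)
    return "fraud: repay benefits" if confirmed else "approved"

# With a rubber-stamp reviewer, every model false positive becomes a
# baseless repayment order: the outcome is decided by the model alone.
for claim in [Claim(1, 0.95), Claim(2, 0.30), Claim(3, 0.85)]:
    print(claim.claim_id, decide(claim))
```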
“When there is disparate impact, there needs to be societal discussion around this, whether this is fair. We need to define what ‘fair’ is,” says Yong Suk Lee, a professor of technology, economy, and global affairs at the University of Notre Dame, in the United States. “But that process did not exist.”
Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.
“The performance of the model, of the algorithm, needs to be transparent or published by different groups,” says Lee. That includes things like the model’s accuracy rate, he adds. …
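One way to act on Lee's suggestion is to publish performance figures per group rather than a single headline accuracy. The sketch below is an assumption-laden illustration, not the tax authority's audit: it invents a tiny set of (group, prediction, actual) records and computes accuracy and false-positive rate per group, the kind of breakdown that could surface a disparate impact on non-citizens before a postmortem does.

```python
from collections import defaultdict

# Hypothetical audit records: (group, model_flagged_fraud, actually_fraud).
# Groups and values are invented for illustration only.
records = [
    ("citizen",     True,  False),
    ("citizen",     False, False),
    ("citizen",     True,  True),
    ("non-citizen", True,  False),
    ("non-citizen", True,  False),
    ("non-citizen", False, False),
]

def per_group_metrics(records):
    """Report accuracy and false-positive rate separately for each group.

    A large gap in false-positive rate between groups is the kind of
    disparate impact the postmortems found."""
    by_group = defaultdict(list)
    for group, predicted, actual in records:
        by_group[group].append((predicted, actual))
    metrics = {}
    for group, pairs in by_group.items():
        correct = sum(p == a for p, a in pairs)
        negatives = [(p, a) for p, a in pairs if not a]  # non-fraud claims
        false_pos = sum(p for p, a in negatives)         # flagged anyway
        metrics[group] = {
            "accuracy": correct / len(pairs),
            "false_positive_rate": false_pos / len(negatives) if negatives else 0.0,
        }
    return metrics

for group, m in per_group_metrics(records).items():
    print(f"{group}: accuracy={m['accuracy']:.2f}, "
          f"FPR={m['false_positive_rate']:.2f}")
```

On this toy data the false-positive rate for non-citizens is noticeably higher than for citizens, even though overall accuracy alone would hide it; that is why per-group reporting matters.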