Amnesty International says the Dutch government risks exacerbating racism through its continued use of unregulated algorithms in the public sector, in a damning new analysis of the country's childcare benefits scandal.
The report, 'Xenophobic Machines', reveals how racial profiling was incorporated into the design of the algorithmic system used to determine whether claims for childcare benefits were flagged as inaccurate and potentially fraudulent. As a result, tens of thousands of parents and caregivers, mostly from low-income households, were wrongly accused of fraud by the Dutch tax authorities, with ethnic minorities disproportionately affected. The scandal brought down the Dutch government in January 2021, but despite multiple investigations, not enough lessons have been learned.
Governments around the world are rushing to automate the delivery of public services, but society's most marginalized people are paying the highest price.
Merel Koning, Senior Policy Officer on Technology and Human Rights
“Thousands of lives have been ruined by a disgraceful process involving xenophobic algorithms based on racial profiling. The Dutch authorities risk repeating these catastrophic mistakes, as human rights protections in the country continue to be lacking.”
“Worryingly, the Dutch are not alone. Governments around the world are rushing to automate the delivery of public services, but it is the most marginalized people in society who are paying the highest price.”
Amnesty International is calling on all governments to immediately ban the use of data on nationality and ethnicity in risk scoring for law enforcement purposes, such as identifying potential crime or fraud suspects.
Thousands of lives have been ruined by a disgraceful process involving xenophobic algorithms based on racial profiling.
Merel Koning
Discriminatory loop
Racial and ethnic discrimination was embedded in the design of the algorithmic system from the outset. Introduced by the Dutch tax authorities in 2013 to detect inaccurate and potentially fraudulent childcare benefit claims, the system used whether or not an applicant held Dutch nationality as a risk factor, and non-Dutch nationals received higher risk scores.
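The report does not publish the system's internals, but the mechanism described here, nationality fed into a fraud risk score as a weighted input, can be illustrated with a minimal, purely hypothetical sketch. All feature names, weights, and values below are invented for illustration and are not drawn from the actual system:

```python
# Purely hypothetical sketch, not the tax authorities' actual code.
# It shows how including a nationality indicator in a risk score
# systematically raises the score for non-Dutch applicants.
from dataclasses import dataclass


@dataclass
class Claim:
    amount_requested: float   # invented feature
    household_income: float   # invented feature
    dutch_nationality: bool   # the discriminatory input


def risk_score(claim: Claim) -> float:
    """Toy weighted sum; all weights are invented for illustration."""
    score = 0.3 * (claim.amount_requested / 10_000)
    score += 0.2 * (1.0 if claim.household_income < 20_000 else 0.0)
    # Treating nationality as a risk factor means two otherwise
    # identical claims receive different scores.
    score += 0.5 * (0.0 if claim.dutch_nationality else 1.0)
    return score


dutch_applicant = Claim(5_000, 18_000, dutch_nationality=True)
non_dutch_applicant = Claim(5_000, 18_000, dutch_nationality=False)
print(f"Dutch applicant:     {risk_score(dutch_applicant):.2f}")
print(f"non-Dutch applicant: {risk_score(non_dutch_applicant):.2f}")
# Identical claims, but the non-Dutch applicant scores higher
# and is therefore more likely to be flagged for investigation.
```

Any review process downstream of a score built this way inherits the bias, whatever the intentions of the individual reviewer.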
Parents and caregivers selected by the system had their benefits terminated and were subjected to hostile investigations, characterized by harsh rules and policies, rigid interpretations of the law, and ruthless benefit-recovery practices. This created widespread and devastating financial problems for the families affected, from debt and unemployment to forced eviction when rent or mortgage payments could no longer be met. Some suffered mental health problems and strained relationships, leading to divorce and family breakdown.
The algorithm's design reinforced existing institutional biases linking race and ethnicity with crime, and generalized the behavior of individuals to entire racial and ethnic groups.
These discriminatory design flaws were reproduced by a self-learning mechanism, meaning the algorithm adapted over time based on experience, without meaningful human oversight. The result was a discriminatory loop in which non-Dutch nationals were flagged as potential fraudsters more frequently than Dutch nationals.
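The report does not detail the learning rule, so the toy simulation below is only a sketch of the general mechanism: a system that updates its own weights based on who it previously flagged can amplify an initial bias even when, by construction, the two groups behave identically. The group sizes, weights, threshold, and update rule are all invented for illustration:

```python
# Purely illustrative simulation of a discriminatory feedback loop.
# By construction the two groups behave identically, so any growing
# disparity comes from the model's own feedback, not from behavior.
import random

random.seed(0)
nationality_weight = 0.1   # small initial bias against non-Dutch applicants

for generation in range(5):
    flagged_total = 0
    flagged_non_dutch = 0
    for _ in range(100_000):
        dutch = random.random() < 0.8          # 80% of applicants are Dutch
        other_features = random.random()       # stand-in for everything else
        score = other_features + (0.0 if dutch else nationality_weight)
        if score > 0.9:                        # fixed flagging threshold
            flagged_total += 1
            flagged_non_dutch += 0 if dutch else 1

    non_dutch_share = flagged_non_dutch / flagged_total
    # "Self-learning" step: the weight on nationality grows because
    # non-Dutch applicants are over-represented among flagged cases,
    # which is itself a product of the weight. The loop closes.
    nationality_weight += 0.2 * (non_dutch_share - 0.2)
    print(f"generation {generation}: non-Dutch share of flags = "
          f"{non_dutch_share:.2f}, nationality weight = {nationality_weight:.2f}")
```

In each round, non-Dutch applicants are over-represented among flagged cases only because of the weight, yet the update treats that over-representation as evidence and increases the weight further.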
Lack of accountability
If an individual was flagged as a fraud risk, civil servants were required to conduct a manual review, but they were given no information about why the system had produced a higher risk score. In this opaque “black box” system, the inputs and the calculations performed on them were invisible, leading to a lack of accountability and oversight.
“The black box system created a black hole of accountability, with the Dutch tax authorities relying on algorithms to help make decisions without proper oversight,” said Merel Koning.
The tax authorities had a perverse incentive to recover as much money as possible, regardless of whether the fraud accusations were true, because they needed to prove the efficiency of their algorithmic decision-making system. Parents and caregivers identified as fraudsters by the tax authorities went years without an answer to the question of what they had actually done wrong.
The findings of 'Xenophobic Machines' will be presented on October 26 at a side event on algorithmic discrimination at the United Nations General Assembly. This year, Amnesty International is launching the Algorithmic Accountability Lab, a multidisciplinary team tasked with research and campaigning on the human rights risks of automated decision-making systems in the public sector. Amnesty International is calling on governments to:
- Prevent human rights violations linked to the use of algorithmic decision-making systems, including by conducting mandatory and binding human rights impact assessments before such systems are deployed.
- Establish effective monitoring and oversight mechanisms for algorithmic systems in the public sector.
- Hold those responsible for violations to account, and provide effective remedies to individuals and groups whose rights have been violated.
- Stop using black-box systems and self-learning algorithms where decisions could have a significant impact on the rights of individuals.