DWP ‘fairness analysis’ reveals bias in AI fraud detection system

Information on people’s age, disability, marital status and country of origin influences decisions to investigate fraud in benefit claims, but the Department for Work and Pensions says there are “no immediate concerns about unfair treatment”

By

  • Sebastian Klovig Skelton, Data & ethics editor

Published: 10 Dec 2024 12:30

According to the Department for Work and Pensions’ (DWP) own internal assessment, an artificial intelligence (AI) system used to detect welfare fraud shows “statistically significant disparities” related to people’s nationality, marital status, disability and age.

The 11-page “fairness” analysis, released under Freedom of Information (FoI) rules, found that a machine learning (ML) system used by the DWP to vet thousands of universal credit benefit payments is selecting people from some groups more than others when recommending whom to investigate for fraud.

The assessment, carried out in February 2024, showed a “statistically significant referral… and outcome disparity” for all the protected characteristics analysed, including people’s age, disability, marital status and nationality. However, it stated that a review of the disparities found “the identified differences do not translate into any immediate concerns about discrimination or unfair treatments of individuals or protected groups”, adding that there are safeguards in place to minimise any potentially harmful impact on legitimate claimants.

It said that “this includes no automated decision making”, noting that it is always a person [who] makes the decision after considering all of the available information.

The DWP added that, while other protected characteristics such as race, sex, sexual orientation and religious belief were not examined in the fairness analysis, it has “no immediate concern of unfair treatment” because the safeguards are applied to all customers. It plans to “iterate, improve and refine” the analysis method, with further assessments to be completed every quarter.

The report said that the assessment would include a decision and recommendation on whether or not it is reasonable and proportionate to keep operating the model.

Caroline Selman, a senior researcher at the Public Law Project, told the Guardian: “It is clear that the DWP failed to assess whether its automated processes were likely to unfairly target marginalised groups in a large majority of cases.” She added that the DWP should put an end to this “hurt-first, fix-later” approach and stop rolling out new tools when it is not able to understand the risks of harm they represent.

Due to redactions, it is currently not clear from the released analysis which age groups are most likely to be wrongly selected for fraud checks by the AI system or how nationalities are handled by the algorithm.

Similarly, it is unclear whether disabled people are more or less likely than non-disabled people to be wrongly targeted for investigation by the AI system. Officials said the redactions were made to prevent people from gaming the system.

However, the analysis itself stated that any referral disparity based on age (particularly for those over 25) or disability is expected, because people with these protected characteristics are already associated with higher rates of universal credit payments.

A DWP spokesperson responded to the Guardian report by saying: “Our AI tool doesn’t replace human judgment, and a caseworker will always consider all available information before making a decision.” The spokesperson added that the department’s fraud and error bill allows it to take bold and decisive action to combat benefit fraud, enabling more efficient and effective investigations to identify criminals exploiting the benefits system faster.

Computer Weekly contacted the DWP to ask whether it had assessed the dynamics of automation bias in the AI system and, if so, how this affects referral and outcome disparities, but received no response by the time of publication.

In recent months, the role of AI and automation within welfare systems has come under increasing scrutiny. Amnesty International, for instance, found in November 2024 that Denmark’s automated social welfare system creates barriers to accessing social benefits for marginalised groups, including people with disabilities, people on low incomes and migrants.

In the same month, a report by Lighthouse Reports and Svenska Dagbladet revealed that Sweden’s algorithmically powered welfare system disproportionately targets marginalised groups for benefit fraud investigations.
