An artificial intelligence system used by the UK government to detect welfare fraud has been found to show bias based on factors such as age, disability, marital status, and nationality, an internal assessment has revealed. The system, which helps vet universal credit claims, was found to incorrectly select people from some groups more often than others when recommending whom to investigate, with a “statistically significant outcome disparity” identified in a fairness analysis carried out by the Department for Work and Pensions (DWP) in February.
Despite earlier reassurances from the DWP that the system posed no immediate risk of discrimination, the new findings suggest the automated tool may be unfairly targeting vulnerable groups. The final decision on welfare payments remains in the hands of human caseworkers, but the use of the AI system, aimed at tackling an estimated £8bn lost to fraud and error, has sparked concerns that it may entrench discrimination against the very people it scrutinises.
The fairness analysis did not examine potential bias relating to race, gender, or sexual orientation, and critics have called for more transparency about the algorithm’s impact on marginalised groups. Campaigners argue that the government has adopted a “hurt first, fix later” approach, deploying automated systems without fully assessing the risks of harm.
The revelation comes amid growing scrutiny of AI systems in the public sector, with calls for greater oversight and accountability in their use, especially in sensitive areas like welfare.