It has been revealed that an automated system that screens welfare claims for evidence of fraud or error made decisions partly based on applicants’ ages, prompting calls for a reconsideration of the system’s legality.
Xantura, a UK technology company that offers “risk-based verification” to about 80 councils and has evaluated hundreds of thousands of claimants, previously stated that it does not feed any data protected by anti-discrimination legislation into its algorithm.
However, Wajid Shafiq, the company’s chief executive, has now confirmed that it considers a claimant’s age, which is a protected characteristic under the Equality Act 2010. Treating someone less favourably on the basis of such a characteristic constitutes direct discrimination and may be unlawful.
The Guardian reports that Big Brother Watch (BBW), a civil rights advocacy group, received a trove of documents under the Freedom of Information Act that shed light on how Xantura’s technology functioned.
The records detail how the automated screening system flags higher-risk housing and council tax benefit claimants for more scrutiny, which can result in delays in decision-making. Additionally, it expedites applications for ostensibly low-risk individuals.
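The triage flow described in the documents can be sketched in a few lines. This is a hypothetical illustration, not Xantura’s code: the threshold, score scale, and routing labels are all assumptions made for the example.

```python
# Hypothetical sketch of risk-based verification triage: claims scored as
# higher-risk are routed to manual review (which can delay a decision),
# while apparently low-risk claims are fast-tracked. The threshold value
# and category names are invented for illustration.

def route_claim(risk_score: float, review_threshold: float = 0.7) -> str:
    """Route a benefit claim based on a model's risk score (0.0 to 1.0)."""
    if risk_score >= review_threshold:
        return "manual_review"  # extra scrutiny; decision may be delayed
    return "fast_track"         # expedited processing for low-risk claims

print(route_claim(0.9))  # manual_review
print(route_claim(0.2))  # fast_track
```

The point of the sketch is that the consequences for the claimant (delay versus expedition) hinge entirely on an opaque score.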
Xantura maintains that its use of age reduces fraud and error, speeds up the majority of applications, and does not violate the Equality Act, citing a legal exemption that allows financial services companies to account for age. However, activists are pushing for increased oversight.
Additionally, the records given to BBW revealed that Xantura processed data on people’s residences, ethnic origins, gender, and family status.
Although gender and race are protected factors, Shafiq stated that “apart from age, it doesn’t use any other protected characteristic.”
He stated that information regarding neighbourhoods and sex was only used to determine whether the system was biased after decisions had been made.
He declined to specify what further personal information is fed into the algorithm, citing concerns that claimants might somehow use it to trick the system, but said that information supplied by claimants could be used to prevent fraud and error.
When asked whether the algorithm predicts that older or younger people are more likely to commit fraud or error, he responded: “[It is] not that simple. It is a multi-variate model, so various combinations of risk factors need to exist to generate fraud or claims that are in error.”
He had earlier stated that the RBV model does not make use of protected characteristics when screening welfare claims.
He added: “There is a duty to prevent fraud and error. If local authorities decide we shouldn’t be using age in the modelling process we can take it out.”
Xantura is just one of numerous firms that are assisting in the automation of the benefits system, but the inner workings of these “welfare robots” remain cloaked in mystery.
Claimants are not informed that their applications are being subjected to algorithmic decision-making.
According to papers obtained by BBW, Xantura stated in a private 2012 “model overview” that variables “deemed statistically significant” include the sort of region in which a person resides, as described in broad categories that represent ethnic composition.
At the time, these Office for National Statistics-defined categories included “ethnicity central,” which refers to areas with a higher proportion of non-white residents than the national average, “especially people of mixed ethnicity or who are black.”
Jake Hurfurt, head of research and investigations at BBW, told the Guardian: “Dozens of councils have waved through RBV policies without considering the real risk of discrimination the algorithms pose and nobody has a clue the damage these algorithms could do because bias and disproportionality are not monitored.”