Mind the Gap: Securing Algorithmic Explainability for Credit Decisions Beyond the UK GDPR
Credit scoring is increasingly automated, but UK consumers generally have no entitlement to reasons for a rejected application. Until recently, data protection law offered an important backstop for ‘solely automated’ decisions. The Data (Use and Access) Act 2025 (DUAA) rewires those safeguards.
For years, lawyers debated whether the GDPR gives individuals a meaningful right to understand automated decisions about them. In early 2025, that debate largely ended. In Case C-203/22 Dun & Bradstreet Austria, the Court of Justice of the European Union confirmed that individuals are entitled to a genuine explanation of the logic and results of automated decisions. Yet, just as this ‘right to explanation’ was strengthened, the UK moved in the opposite direction. The DUAA, which received Royal Assent on 19 June 2025, repeals Article 22 of the UK GDPR and replaces it with a more permissive framework (new Articles 22A-D).
My article, Mind the Gap: Securing Algorithmic Explainability for Credit Decisions Beyond the UK GDPR, uses consumer credit as a case study to show why this divergence matters and why reliance on data protection law alone can no longer secure algorithmic accountability in the UK.
The DUAA’s three structural gaps
The DUAA presents the amendments to Article 22 as a simplification designed to support innovation. In reality, the reforms weaken critical protections and create unacceptable risks for individuals subject to AI-informed decisions.
1. The loophole of ‘no meaningful human involvement’
The new UK definition of a ‘solely automated’ decision is one where there is ‘no meaningful human involvement in the taking of the decision’. The Explanatory Notes state this term may later be clarified by secondary legislation to provide legal clarity ‘in light of constantly emerging technologies’.
However, the absence of a definition is precisely the problem: it creates a clear incentive for firms to introduce tokenistic human oversight as a procedural shield to evade accountability. Research shows that even well-intentioned human oversight mechanisms are no guarantee of less biased or more responsible outcomes. Human-in-the-loop systems often reinforce, rather than reduce, automation bias.
2. Restricting safeguards to ‘special category data’ misunderstands discrimination
The DUAA fundamentally changes the application of Article 22 by repealing the general prohibition and limiting the strongest protections to decisions ‘based entirely or partly on the processing of special category data’. Automated processing of other personal data will now be permitted by default.
By restricting key protections to decisions involving ‘special category data’, the new legislation ignores discrimination arising from non-sensitive proxy data, a common source of algorithmic discrimination. The Government even conceded that the reforms are ‘likely to increase the level of Article 22 processing’, and that the increase ‘could potentially lead to discrimination’, particularly from private organisations. It deemed such risk ‘justifiable and proportionate, given the legitimate aim of ensuring the economic wellbeing of the country’.
3. A shift from substantive constraints to procedural safeguards
The Government contends that new Article 22C introduces ‘comprehensive safeguards’, resulting in increased transparency and accountability. Data controllers are required to implement safeguards that inform individuals about significant automated decisions, allow individuals to make representations, permit individuals to obtain human intervention and enable contestation of decisions. While the safeguards are welcome, they do not introduce new substantive constraints compared to the original framework. As was raised repeatedly in House of Lords debates, individuals may receive only general information when they need a much more personalised explanation to exercise their rights. The DUAA’s framework relies on process rather than outcome, substituting administrative formality for meaningful explainability.
A better route: a technology-neutral right to reasons
If the UK is stepping away from Article 22-style prohibitions, where should explainability live? The answer, I argue, lies not in data protection law but in consumer protection and domain-specific regulation. Specifically, the UK could adopt a technology-neutral ‘right to reasons’ for adverse credit decisions, regardless of whether the decision is made by a human, assisted by AI, or fully automated.
Such an approach already exists in the US adverse action notice model, which requires creditors to provide specific reasons for adverse decisions. The duty not only empowers consumers but also deters discrimination ex ante: creditors who know they must explain their decisions are discouraged from discriminatory practices.
While the EU is also considering similar simplifications of Article 22 GDPR in the Digital Omnibus Regulation Proposal, it is still strengthening sector-specific safeguards. The Consumer Credit Directive now imports rights from the GDPR into automated creditworthiness assessments: lenders must give a ‘clear and comprehensible explanation of the assessment of creditworthiness, including on the logic and risks involved in the automated processing of personal data as well as its significance and effects on the decision’.
Conclusion
Similar protections in the UK, coupled with appropriate sector-specific regulation and supervision, would shift the burden from the consumer to the firm and require systems that are explainable by design. As data and AI regulation evolve, the DUAA removes a critical protection just as algorithmic credit decision-making becomes more complex and consequential.
If the UK is serious about responsible AI in high-stakes sectors, it should stop treating explanations as narrow data privacy questions and recognise them as a core element of consumer protection and market fairness. A technology-neutral right to reasons for adverse credit decisions offers a clearer and more effective route to closing the post-DUAA accountability gap.
The full article can be accessed here.
Holli Sargeant is a Research Fellow in Law at St John’s College, Cambridge.