
When Artificial Intelligence Buys the Wrong Thing: Autonomy, Consent, and Liability Gaps in Payment Law

Author(s):

Frances Coyle
Director, Turtle Law

AI assistants capable of purchasing goods and services on behalf of consumers are rapidly entering everyday commerce. From subscription renewals to dynamic pricing decisions, these systems increasingly act with a degree of autonomy that challenges legal frameworks built around human decision-makers. When an AI system buys the wrong item, existing payment law must determine where responsibility lies.

This post examines how UK payment law allocates liability when AI systems initiate transactions, focusing on consent, strong customer authentication (SCA), authorised push-payment (APP) fraud, and the distinction between credit and non-credit payment rails. It then contrasts the UK position with emerging EU product-liability reforms and highlights a growing accountability gap for AI-mediated payments.

Consent, gross negligence, and SCA

Under the UK Payment Services Regulations 2017 (PSRs), a payment transaction is authorised only if the payer has consented in the form agreed with their payment service provider. In practice, this consent is usually given at the point of authentication (via a PIN, biometric verification, or equivalent), regardless of whether an AI system has selected the goods, populated the basket, or pre-filled payment details.

Once the consumer authenticates the transaction, they are ordinarily bound by it unless the payment is treated as unauthorised or the goods prove defective. The PSRs require consumers to take reasonable steps to keep personalised security credentials safe, while placing the burden on payment service providers (PSPs) to prove that authentication was properly applied, the transaction correctly executed, and any allegation of fraud or gross negligence substantiated.

Where a transaction is unauthorised, PSPs must refund the consumer unless they can show fraud or gross negligence. They must also make reasonable efforts to recover funds where a payment has been misdirected due to account identifier errors. Gross negligence, however, is highly context-specific. Financial Ombudsman Service decisions (for example, DRN‑4898289) show that providing unrestricted access to a card and PIN may render transactions authorised, yet complaints are frequently upheld where patterns suggest fraud rather than genuine consent.

SCA provides an additional safeguard. Where SCA was required but not applied, PSPs generally cannot demonstrate proper authentication and must refund the consumer. However, this framework assumes that a human exercises judgment at the moment of authorisation, an assumption that becomes strained when AI systems mediate or automate decision-making.

AI and authorised push-payment fraud

APP fraud illustrates the limits of a consent-based model. The UK's mandatory reimbursement regime, in force since October 2024, responds to cases in which consumers are deceived into authorising payments they would not otherwise have made. Although the transaction is formally authorised, policy intervenes to reallocate loss.

AI-mediated payments complicate this logic. If an AI agent is deceived into authorising a payment to a fraudster, the transaction may remain legally authorised, yet the consumer has exercised no meaningful judgment at the point of execution. Concepts such as deception, reasonable care, and gross negligence are calibrated to human behaviour and fit awkwardly where loss arises from AI conduct.

Absent reform, losses caused by AI-driven APP-style fraud risk falling into a liability gap between consumer protection, payment law, and product responsibility, leaving consumers or PSPs to absorb the costs generated by autonomous systems.

Payment rails and credit protections

UK payment law allocates risk differently depending on the payment rail used. AI systems merely transmit instructions into these frameworks, but the protection available to consumers varies significantly with the rail chosen.

For non-credit payments, such as bank transfers, direct debits, and most debit-card transactions, consumers fund the payment upfront and must pursue redress through the Direct Debit Guarantee, chargeback schemes, contract law, or complaints processes.

Credit-card transactions differ materially. Under section 75 of the Consumer Credit Act 1974, the card issuer is jointly and severally liable with the supplier for breach of contract or misrepresentation where the item's cash price exceeds £100 but not £30,000. In addition, when a transaction is disputed, issuers commonly suspend or reverse the amount and halt interest accrual while investigating. As a result, issuers rather than consumers bear the short-term financial risk of AI-mediated errors on credit rails, while consumers face greater exposure for equivalent mistakes made via non-credit methods.

Product liability: EU reform and UK gaps

At the product-liability level, divergence between the EU and UK is widening. The EU's revised Product Liability Directive (Directive (EU) 2024/2853) extends strict liability to software and AI systems and introduces rebuttable presumptions of defect and causation in certain circumstances. Although pure economic loss, such as funds lost solely through a payment error, remains outside its scope, the Directive significantly expands potential liability by treating AI systems as products and lowering evidentiary burdens for claimants. Member States must implement the Directive by 9 December 2026.

The UK, by contrast, has not adopted an AI-specific liability regime. Strict product liability, arising principally under the Consumer Protection Act 1987, remains largely confined to defective physical goods and does not generally extend to AI services. While government initiatives signal possible reform, the current landscape remains fragmented.

When AI acts beyond instructions

The most difficult cases arise where AI systems act proactively rather than merely executing instructions. If an AI agent autonomously selects a subscription upgrade that the consumer did not request, this will usually amount to misperformance rather than fraud. However, if the system presents such an upgrade as execution of the consumer's request, and the provider knowingly deploys a system that predictably favours higher-revenue outcomes, the representation may be false.

In such cases, liability for fraudulent misrepresentation cannot be ruled out, though these scenarios will be rare and evidence dependent. More commonly, AI-driven mis-selection will fall within negligent or innocent misrepresentation. Representations of neutrality or optimality may be misleading where system design or commercial incentives influence outcomes, even absent dishonesty. Contractual limitations of liability may apply, subject to unfair-terms controls.

In many consumer disputes, AI errors will continue to be treated as failures of performance under the UK Consumer Rights Act 2015, with remedies limited to repair, repeat performance, or price reduction. These remedies are often inadequate for high-value or irreversible transactions.

Conclusion

The UK legal framework allocates loss through consent, payment rails, and standards such as reasonable care and gross negligence. In AI-mediated commerce, this produces inconsistent outcomes, leaving consumers and PSPs exposed while AI developers often sit outside strict liability regimes.

Emerging EU reforms point towards a more integrated approach, but still stop short of compensating pure economic loss. Absent further reform, AI-enabled purchasing risks becoming an area in which responsibility systematically falls on users and payment providers, rather than on those who design and deploy the systems that make the decisions.

Frances Coyle is the Director of Turtle Law Ltd.