The Financial Conduct Authority has picked eight companies, among them Barclays, Experian, Lloyds Banking Group’s Scottish Widows and UBS, for a programme that will trial AI applications in live conditions. 

The regulator is running the AI Live Testing scheme with Advai, a London-based company focused on automated AI assurance.  


The programme is intended to help participating firms examine issues such as risk controls and live oversight as they prepare to use AI in services affecting consumers and financial markets. 

The applications covered a wide mix of AI approaches, ranging from agentic AI and small language models to newer systems such as neurosymbolic AI, reflecting how quickly the field is changing. 

In the second cohort, firms are assessing both consumer and business-to-business uses. These include AI-supported investment guidance, credit score information for consumers, agentic payments, anti-money laundering checks and Know Your Customer functions. 

Applications for the second round opened in January 2026 and testing started in April.  

The work is scheduled to finish by the end of the year, followed by an assessment report in the first quarter of 2027. 

Alongside Barclays, Experian, Lloyds and UBS, the other firms selected are Aereve, Coadjute, GoCardless and Palindrome. The FCA said the trials would take place in a controlled live-market setting with real customers involved. 

An earlier group, which included NatWest Group, Monzo Bank and Scottish Widows, tested AI applications for roughly six months. 

The FCA said applications to its Regulatory Sandbox and Innovation Pathways were up 49% on the previous year. The report also found that activity in the fintech market broadly aligned with demand for the regulator’s innovation services, especially in areas such as AI. 

The regulator said it would publish a report later in 2026 setting out examples of good and poor practice in the use of AI in financial services. 

Separately, Bloomberg reported, citing sources, that UK regulators, the Bank of England, the government and the National Cyber Security Centre are in close discussions over possible risks linked to Anthropic’s unreleased AI model Mythos. 

Earlier this month, US Treasury Secretary Scott Bessent called Wall Street executives to an urgent meeting to ensure they understood possible future risks from the technology. 

FCA chief data, information and intelligence officer Jessica Rusu commented: “We’re continuing to collaborate with firms to support the safe and responsible development of AI in UK financial markets. 

“With tailored support from the FCA and Advai, the initiative reflects our commitment to supporting the pace of change in AI, whilst demonstrating how regulators and industry can work together to harness innovation responsibly.” 

Separately, the UK government is looking to introduce a common testing regime for general-purpose AI systems used by UK lenders, after the Bank of England raised concerns last year about how such models are being assessed, the Financial Times reported. 

The idea was put to the Department for Science, Innovation and Technology last month by Starling Bank chief information officer Harriet Rees.