The UK government is looking to introduce a common testing regime for general-purpose AI systems used by UK lenders, after the Bank of England (BoE) raised concerns last year about how such models are being assessed, the Financial Times (FT) reported.

The idea was put to the Department for Science, Innovation and Technology last month by Starling Bank chief information officer Harriet Rees.


Rees is currently serving as the government’s financial services AI “champion” and is co-chair of the BoE’s AI task force.

She said: “Lots of firms are using [AI models] and we can assume that [they] have done the necessary due diligence and, therefore, hopefully we’re happy. But we’ve not done that independent assessment.”

The proposal is intended to avoid duplicated effort across firms, bring greater consistency to testing and ensure that models developed in the US meet the necessary benchmark.

The discussion follows two AI meetings held in October by the BoE’s Prudential Regulation Authority, the body that supervises lenders, where banks were told that AI model monitoring was “not frequent enough”, according to presentation slides from the sessions.

In a statement to the FT, Rees added: “Given our reliance on US models, it would give [the government] the comfort that they’ve at least looked at [the models] and they know that they all are at a certain standard.”

At present, there is no legal requirement for AI systems to undergo assessment before being deployed in regulated sectors, though banks do carry out their own reviews.

Groups such as OpenAI and Anthropic have voluntarily submitted models including ChatGPT and Claude to the AI Security Institute (AISI), the government unit focused on testing advanced AI systems and researching related risks.

Rees said responsibility for examining these general-purpose models should not sit with a single sector regulator, arguing that their use extends well beyond financial services.

She said AISI was the “most obvious body” to take on such a role.

She added that the proposal had been received positively during a meeting in early March by Ollie Ilott, the government’s director-general for AI, who founded AISI. “They agreed that there was nothing else out there like this today,” she said.

A government spokesperson nevertheless signalled that AISI was unlikely to be given the task.

“The AI Security Institute is focused on frontier AI security research, and we are not exploring expanding its remit into assurance or any testing of third-party AI models,” the spokesperson said.

Rees, who has previously raised concerns about corporate dependence on US technology, said oversight by an independent body would not replace lenders’ own checks.

Instead, she said, it would serve as a “fail-safe” and offer reassurance about what lies under the bonnet.

The BoE declined to comment.