Financial regulators are trailing banks and other financial firms in adopting artificial intelligence, even as concerns mount over how the technology can be monitored safely, according to a new report from the Cambridge Centre for Alternative Finance.

The study found that more than 80% of financial services firms are using AI to some extent. It also said 52% are already experimenting with agentic AI.


By contrast, regulators remain at an earlier stage.

Of the 130 regulatory authorities surveyed, 48% said they are still exploring AI adoption or are not engaged with AI at all.

The gap comes as oversight challenges become more pressing.

Recent reports that Anthropic’s models are often more capable than humans at hacking have added to questions over whether manual oversight of AI use in financial services is workable.

The report said software engineering is currently the most mature AI application in finance. At the same time, it described this area as a major channel for cyber risk transmission.

Among respondents, 48% identified adversarial AI as a leading concern.

The study also highlighted a divide in how risks are viewed.

AI vendors were less likely than financial firms and regulators to give priority to adversarial AI threats and cyber and operational resilience.

“Further complicating this problem space is a notable perception gap: AI vendors place less priority than industry and regulators on both adversarial AI threats (35% versus 50% industry, 57% regulators) and cyber/operational resilience (32% versus 46% industry, 59% regulators),” says the 2026 Global AI in Financial Services Report: Adoption, Impact and Risks.

Across all stakeholder groups, data privacy and protection ranked as the top perceived AI risk, cited by 73% of respondents.

“These intersecting vulnerabilities can also feed into the top perceived risk across all stakeholders – data privacy and protection (73% of respondents) as sensitive data is typically the primary target for the cyber exploits these vulnerabilities enable,” the report adds.

Other risks identified in the report include model hallucinations, unreliable outputs, opacity, limited explainability and market abuse.

Despite the rapid pace of adoption, the report said the effects so far have been concentrated on efficiency rather than business model transformation.

It also found that fintechs are ahead of incumbent firms in using AI for customer support.

Meanwhile, 76% of respondents at large financial institutions said they find it difficult to measure the value of AI deployments.

Most organisations surveyed said they are building on external AI models rather than training their own from scratch.

OpenAI was the most widely used foundation model provider across all respondent groups, used by 76% of industry participants and 48% of regulators. Google and Anthropic followed.

Kieran Garvey, AI lead at the Cambridge Centre for Alternative Finance, said: “What this study shows is a sector in genuine transition. AI is already delivering real efficiency gains – in operations, in software development, in customer-facing services – and more mature adopters are beginning to use it to create entirely new financial products.

“However, the same capabilities driving those gains are also creating or exacerbating risks from model hallucinations and biases, data protection and privacy, lack of explainability, herding, third-party dependency and adversarial threats. How we collectively manage and mitigate these risks will shape the future trajectory of digital financial services.”