The UK’s Financial Conduct Authority (FCA) has cautioned that AI is accelerating the scale and speed of financial crime, posing threats to national security and economic stability.

In his speech at the FCA’s financial crime conference in London on 14 May, FCA CEO Nikhil Rathi warned that “the threat isn’t coming – it’s here”, adding that the separation of financial services from national security is “outdated and dangerous”.


He pointed out that financial crime had become “more organised than ever before”.

“Against this kind of networked threat, we will always be outgunned if we act alone,” he noted.

Rathi argued that traditional approaches to financial crime are no longer sufficient as threats evolve.

To tackle this, the FCA is boosting investment in data, surveillance, and technology.

Enhanced public-private partnerships and shared intelligence platforms are critical pillars to address the threat, stressed Rathi.

He highlighted testing of new payments analytics, which he said identify money laundering risks faster than earlier rule-based systems.

From June, the FCA will expand intelligence sharing with law enforcement, including over 5,000 records via the Police National Database.

According to Rathi, collaboration is vital.

“Done well, private-to-private sharing is one of our most powerful tools,” he said.

He also sent a message to big tech platforms, saying “you cannot sit on the sidelines as online investment fraud continues to rise.”

The FCA’s intelligence infrastructure has processed more than 52 million records linked to financial crime.

Rathi noted the need to prioritise amid growing volume and complexity.

“At these levels, it is simply not possible to chase every single lead,” he said.

Rathi said financial crime groups were increasingly combining fraud, money laundering, and sanctions evasion with cyber-enabled tactics to exploit weaknesses between firms, regulators, and systems.

“Criminals don’t see our org charts. They see seams,” he said.

Banks go all-in on AI

The warning comes as banks rapidly adopt AI. Revolut and Starling are launching customer assistants, Santander is testing live end-to-end agentic payments, and Bank of America is integrating AI into adviser workflows.

Ethical concerns for advanced AI tools

However, advanced tools such as Anthropic’s Mythos and ChatGPT 5.5 Instant have come under scrutiny.

The head of the Bank of England’s Prudential Regulation Authority, Sam Woods, has warned of “quite significant disruption” to financial services from these new AI tools. At UK Finance’s Growth Delivery Summit, he urged firms to strengthen cyber hygiene and respond faster.

In Washington, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met senior Wall Street executives in April to highlight risks from Anthropic’s Mythos model and similar systems, pressing banks to protect their networks.

Notably, the UK government is considering a shared testing framework for general-purpose AI systems used by lenders, the Financial Times reported last month, after concerns raised by the Bank of England over model evaluation.