Recent input from the Financial Conduct Authority (FCA) has reinforced that firms using AI must continue to demonstrate strong outcomes within financial crime compliance programmes. The regulator has made clear that innovation is encouraged, but not at the expense of effective anti-money laundering controls.
As AI becomes more widely adopted in anti-money laundering (AML) processes, firms are under increasing pressure to show that their outcomes are accurate, risk-based, and explainable under FCA supervision.
The FCA focuses on regulating the outcome rather than the technology used to achieve it. Financial firms have already begun integrating AI into their financial crime compliance operations, understanding that the regulatory expectation remains the same: every organisation must be able to evidence the reasoning that led to its final decision.
This becomes increasingly important when firms begin implementing agentic AI and automated decisions. Auditability and transparency in risk-based decision-making are the foundation of the outcomes-based approach.
Testing for improved fincrime outcomes
The specifics of testing AI for AML differ by model type, but the core principles remain the same.
There are different error types to address. Type 1 (false positives) and Type 2 (false negatives) errors are relatively well understood by most teams; where more focus is needed is Type 3, where the underlying reasoning is flawed even if the result is correct. A true positive, for example, may rest on mistaking a correlation between irrelevant data points for causation.
There are similar issues with outputs from language models, although these are usually not as cleanly identified. This matters because a system that appears sound in the isolation of a testing environment may, once live, create alerts based on the wrong signals. If these errors are not corrected in the testing phase, they reach production and compound as models continue to learn from the incorrect underlying assumptions.
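To make the three error types concrete, here is a minimal Python sketch, assuming hypothetical alert records that carry the model's prediction, an analyst-confirmed label, and the features that most influenced the score (for example, from a SHAP-style attribution). The feature names and the relevance list are illustrative, not a prescribed implementation.

```python
from dataclasses import dataclass, field

# Features that compliance experts consider genuinely risk-relevant (illustrative).
RELEVANT_FEATURES = {"txn_velocity", "high_risk_jurisdiction", "structuring_pattern"}

@dataclass
class Alert:
    predicted_suspicious: bool   # the model's output
    actually_suspicious: bool    # analyst-confirmed ground truth
    top_features: list = field(default_factory=list)  # main drivers of the score

def classify_error(alert: Alert) -> str:
    """Classify an alert as a Type 1, 2, or 3 error, or as sound."""
    if alert.predicted_suspicious and not alert.actually_suspicious:
        return "type_1_false_positive"
    if not alert.predicted_suspicious and alert.actually_suspicious:
        return "type_2_false_negative"
    # The result is correct -- but was the reasoning sound? A true positive
    # driven mostly by features outside the risk-relevant set is a Type 3 error.
    relevant = [f for f in alert.top_features if f in RELEVANT_FEATURES]
    if alert.predicted_suspicious and len(relevant) < len(alert.top_features) / 2:
        return "type_3_flawed_reasoning"
    return "sound"
```

An alert flagged correctly but driven mainly by, say, a customer's postcode rather than any listed risk factor would surface here as Type 3 rather than being silently counted as a success.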
Validating AI outputs for fincrime compliance
Whether a firm relies on an in-house data scientist or third-party experts at a partner, it must be able to trust and understand the source of the data, the validation and testing processes, and the underlying test data itself.
Model results should be tested against institutional knowledge regularly: if an experienced analyst identifies risks that the system does not highlight, the AI may need to be retuned. These checks should focus on finding both false positives and false negatives in the AI's results, as in the reconciliation sketched below.
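One way to operationalise these checks, sketched here under assumed inputs, is to reconcile the case IDs the model alerted on against those an experienced analyst flagged over the same review period; disagreements in either direction are the candidates to investigate.

```python
def reconcile(model_alerts: set, analyst_flags: set) -> dict:
    """Compare model alerts with analyst-flagged case IDs over one review period."""
    return {
        # Model alerted, analyst did not: candidate false positives.
        "candidate_false_positives": model_alerts - analyst_flags,
        # Analyst flagged, model missed: candidate false negatives,
        # a signal that the model may need retuning.
        "candidate_false_negatives": analyst_flags - model_alerts,
        "agreed": model_alerts & analyst_flags,
    }

# Illustrative usage with hypothetical case IDs:
review = reconcile({"C-101", "C-204", "C-309"}, {"C-204", "C-417"})
assert review["candidate_false_negatives"] == {"C-417"}
```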
Model validation cannot depend solely on data scientists. Organisations should expect healthy tension between AI experts and financial crime compliance professionals as they work out whether the AI's results align with institutional expertise.
Consider it a good litmus test for outcomes-based approaches. Compliance users need explanations clear enough to understand the reasoning behind each result and to confirm that the AI system supports the intended outcome.
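As an illustration of what such an explanation can look like, this sketch maps a model's top-weighted features to plain-language reason codes attached to each alert; the feature names, weights, and wording are hypothetical.

```python
# Hypothetical mapping from model features to analyst-readable reason codes.
REASON_CODES = {
    "txn_velocity": "Unusually high transaction velocity for this customer profile",
    "high_risk_jurisdiction": "Funds routed through a high-risk jurisdiction",
    "structuring_pattern": "Amounts and timing consistent with structuring",
}

def explain_alert(contributions: dict, top_n: int = 3) -> list:
    """Turn feature contributions (e.g. SHAP values) into readable reasons."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [
        f"{REASON_CODES.get(name, name)} (contribution: {weight:+.2f})"
        for name, weight in ranked[:top_n]
    ]

print(explain_alert({"txn_velocity": 0.42, "high_risk_jurisdiction": 0.31}))
```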
The future of AI for AML
While the FCA actively encourages the use of AI for AML, it does so with clear guardrails. For AI to be effective in financial crime compliance, it must be implemented with a compliance-first mindset. This means managing errors and embedding explainable, auditable AI into compliance workflows from sandbox through to production.
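A minimal sketch of what auditable can mean in practice, assuming a hypothetical decision pipeline: every automated decision is logged with the model version, a hash of its inputs, the score, and the explanation, so the evidence behind any outcome can be reproduced on request.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, inputs: dict, score: float, reasons: list) -> dict:
    """Build one audit entry for an automated decision."""
    payload = json.dumps(inputs, sort_keys=True)  # canonical form for hashing
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model made the call
        "input_hash": hashlib.sha256(payload.encode()).hexdigest(),  # reproducibility
        "score": score,                  # the decision itself
        "reasons": reasons,              # the explanation shown to compliance users
    }
```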
We have seen the first instances of regulatory rebuke emerge from failures in AI validation and explainability: the Federal Court of Australia recently handed down an AML compliance judgement in proceedings brought by the Australian Securities and Investments Commission, in which it clearly cautioned against using LLMs to summarise large documents without human judgement to validate the outcomes.
The judgement focused specifically on the use of AI to navigate material provided to the legally responsible humans-in-the-loop. It serves as an excellent caution, providing clear recommendations on how to avoid similar failures.
How to collaborate with the FCA and other regulators
Regulators are increasingly offering access to testing environments and compute power that support model validation and design techniques, as well as new agentic approaches, with a view to accelerating the implementation of AI for AML.
The FCA Supercharged Sandbox is a great example. Initiatives like these are often open to all regulated entities, and organisations need only apply for access. Preparing before the access window opens is key to making the most of the opportunity; joining existing working groups and attending current sprints or presentation days is a good first step.
Proactive engagement with the regulator is a core pillar of a compliance-first approach to AML and should be prioritised by fincrime teams looking to improve their AML outcomes.
Collaboration with regulators plays a key role in ensuring that AI-driven AML outcomes remain aligned with FCA expectations around transparency, validation, and accountability.
Dr Janet Bastiman, Chief Data Scientist at Napier AI
