Like virtually every other tech-related industry right now, digital banking is awash in AI enthusiasm. As this enthusiasm hits fever pitch, many fintechs feel pressured to show some innovation that indicates they’re striding confidently into the AI future—whether or not they actually are. That has brought us to an odd industry-wide spot, one in which many companies are making big, hype-worthy announcements about AI, but not actually delivering usable AI-powered tools.
What explains the gap between the level of enthusiasm and the extent of implementation? AI promises to make fintechs smarter, faster, more efficient, more profitable, and more compliant, yet those promises aren’t translating into action. The main reason is a distinct lack of clear regulation for AI in the financial sector, and more specifically as it relates to anti-money laundering (AML) compliance. That’s a problem for fintechs, which are bound to AML obligations by their very nature.
Rules-based regulation keeps innovation on hold
Rules-based thinking dominates the current landscape of AI regulation in finance, particularly as it relates to AML measures, and it is an active hindrance to widespread AI adoption. Examples are close at hand. Take Revolut, one of the leading European neobanks and one actively pushing the boundaries of innovation. Despite its forward-looking approach, regulatory scrutiny and fines have made vertical AI implementation especially complex and resource-intensive. This highlights the broader challenge: even companies with significant resources and strong intentions face barriers when aligning AI efforts with evolving compliance standards.
This case underscores the importance of adapting regulatory frameworks to match the pace of innovation. It shows that even the most capable players are slowed down by outdated or overly prescriptive rules.
Seeing cases like this, many other fintechs don’t want to risk running afoul of AML rules inadvertently, simply by implementing AI tools. Understandably, some opt for caution, choosing to delay adoption until clearer regulatory signals emerge.
Overcoming this regulatory hurdle will require rethinking the very basis of regulation. The ideal approach is not rules-based but principles-based and outcome-oriented. This would allow institutions to adopt AI responsibly while still meeting compliance goals.

New ways of thinking for new use cases
We already have significant evidence that AI systems can be fair, effective, and explainable, and that they are capable of transparent reporting and auditability. When such systems are implemented alongside robust governance frameworks, there is a strong basis for responsible adoption.
AI’s potential beneficial use cases are equally evident. Its AML-related applicability extends from generating client dossiers with cohesive risk analysis to internally filing suspicious activity and transaction reports (SARs/STRs) and monitoring payments both in real time and retroactively.
Agentic AI is useful for threat detection, minimizing false positives, and creating richer profiles using advanced document intelligence. It’s increasingly risky not to leverage these innovations – especially when financial criminals are already doing so.
From firsthand experience, the practical application of AI in AML operations has already yielded highly positive and tangible results. Even in simple tasks like data enrichment and document classification, AI has improved speed and consistency. But more importantly, we’re now moving deeper into sophisticated use cases that require large-scale data analysis and advanced reasoning – precisely the kind of work where AI proves invaluable.
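For illustration only, here is a minimal sketch of the kind of document-classification step mentioned above. It uses simple keyword scoring rather than any particular model, and the document types, keywords, and function names are hypothetical:

# Illustrative classifier for onboarding/AML paperwork. Keyword scoring
# stands in for whatever model a real pipeline would use; the categories
# and keywords are hypothetical examples, not a production taxonomy.
DOC_KEYWORDS = {
    "proof_of_identity": ["passport", "identity card", "date of birth"],
    "proof_of_address": ["utility bill", "residential address", "tenancy"],
    "source_of_funds": ["salary", "payslip", "dividend", "sale of property"],
}

def classify(text: str) -> str:
    lowered = text.lower()
    scores = {label: sum(kw in lowered for kw in kws) for label, kws in DOC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Anything the system cannot place confidently falls back to a person.
    return best if scores[best] > 0 else "needs_human_review"

print(classify("Payslip for March showing monthly salary and employer details"))
# -> source_of_funds

The point is not the technique but the shape of the workflow: routine documents are handled quickly and consistently, and anything uncertain is escalated to human review.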
While large players face structural complexity, smaller fintechs with fewer legacy systems may benefit from agility. Their leaner, faster decision-making structures allow AI to be integrated directly into specific processes – such as AML – without requiring full-system overhauls. This agility lets such companies adopt new technology faster while maintaining strong governance. Integrating incrementally and maintaining a clear audit trail in line with regulatory measures is already possible for some fintechs. But until all the regulatory hurdles are cleared, unnecessary difficulties in integration will continue.
Progress via tiered, risk-based regulation
Tiered regulation designed with a risk-based approach in mind is the way forward. Under a risk-based approach, low-risk, high-volume alerts (such as known false positives) could be fully automated, while high-risk and ambiguous cases would involve human review. Without replacing human analysts in judgement-based areas, AI would make fintech processes more efficient and less error-prone, as promised. Human involvement would remain central, scaling with the impact and complexity of any given decision.
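As a rough, purely illustrative sketch, a tiered triage policy of this kind can be expressed in a few lines of code; the thresholds, routing tiers, and alert fields below are hypothetical and stand in for whatever a real compliance programme would define:

from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTO_CLOSE = "auto_close"      # low-risk, known false-positive patterns
    AI_ASSISTED = "ai_assisted"    # AI drafts the analysis, an analyst signs off
    HUMAN_REVIEW = "human_review"  # judgement-based, analyst-led

@dataclass
class Alert:
    risk_score: float              # 0.0 (low) to 1.0 (high), hypothetical model output
    known_false_positive: bool     # matches a documented false-positive pattern
    ambiguous: bool                # conflicting or missing information

def triage(alert: Alert) -> Route:
    # Automation scales down as risk and ambiguity scale up,
    # keeping humans central to high-impact decisions.
    if alert.known_false_positive and alert.risk_score < 0.2:
        return Route.AUTO_CLOSE
    if alert.ambiguous or alert.risk_score >= 0.7:
        return Route.HUMAN_REVIEW
    return Route.AI_ASSISTED

Every routing decision, including the fully automated ones, would still need to be logged against a clear audit trail, in line with the governance expectations discussed above.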
Some jurisdictions are closer than others to modernised AI regulation for the financial sector. One notable example is the EU’s AI Act, which places a number of AML-related use cases in its high-risk categories and clearly outlines accountability expectations. The regulation is cautious, but it nevertheless provides a workable, relevant framework for compliant AI use today.
Making the case for modernised regulation
Everything AI can do for fintech is necessarily downstream from compliance. Urging regulatory bodies to replace outdated rules with the kind of intelligent, tiered regulation the new financial landscape requires will be a group effort, and not one exclusive to upstart fintechs. It will involve the entire financial industry, particularly establishment banks.
Active collaboration is essential to encouraging progressive regulation. Industry players need to be in conversation with regulators through sandboxes, working groups, and the transparent sharing of real-world AI use cases that demonstrate improved AML outcomes.
Regulators can’t modernise without an up-to-date understanding of how AI is being used, and they can’t identify roadblocks without the experiential knowledge that fintechs have. Clear standards for explainability, governance, and bias mitigation will build trust between the two camps. The financial sector needs to take an active role in educating policymakers on AI’s capabilities and limits as they relate to informed decision-making, and in emphasising the importance of continued human involvement.
The financial industry is global, and AI regulation needs to be globally minded. Cross-border alignment and public-private partnerships will be essential to the creation of consistent frameworks that are simultaneously innovation- and compliance-friendly. For this to happen, the global banks that move international markets must be part of advocating for this change.
Rules-based AI regulation – no longer viable for an increasingly AI-powered financial sector
It’s not only hindering innovation but also preventing companies from taking full advantage of the capabilities AI already offers. There’s nothing mysterious about how to modernise regulation, or about the benefits of doing so. Once the financial sector finds the will to collaborate with regulators in meaningful ways, regulation will change, and the promised benefits will be felt everywhere.
Ignas Dovidonis is the AML/CTF (Anti-Money Laundering and Counter-Terrorism Financing) Compliance Officer at digital bank myTU