AI has become a key tool for financial organisations seeking greater productivity and efficiency. While there is a general understanding of what the technology is and how it works, the difficulty of accessing AI and integrating it into existing legacy systems poses a big challenge to adoption. The perception is that only hyperscaler technology giants have the resources to build and host advanced AI systems. Now, that assumption is being challenged by the proliferation of Large Language Models that are easy to host, access, and customise. This marks a fundamental shift in the competitive dynamics of AI, creating a level playing field for organisations regardless of their size or category.

From a regulatory standpoint, there are important considerations to take into account with any approach to AI integration. The Financial Conduct Authority has openly warned of the risks associated with the use of AI in the financial services sector, particularly when the providers of such technologies are limited to a handful of large players. Given the highly regulated nature of the sector, from data privacy through to appropriate compliance measures, financial organisations need to consider multiple factors. Additionally, the Bank of England's SS1/23, which extends model risk regulation to all material decision models, creates significant requirements for users of LLMs.

Banks are now at a critical inflection point. One path is to continue relying on the hyperscalers and accept the inherent risks of vendor lock-in. The other is to take advantage of LLMs to develop sovereign, specialised, and tightly managed AI capabilities that could reshape their competitive positioning for the next decade.

The limitations of hyperscaler-only approaches

Banks have typically accessed AI through hyperscaler APIs. This gives them a ready-made product without a large upfront investment. However, the approach comes with strategic trade-offs.

One is the limited number of hyperscaler APIs able to deliver solutions for the financial services sector. With only a few players offering advanced AI capabilities, banks have limited leverage when costs increase or terms become unfavourable. Such concentration means that innovation cycles are determined by the priorities of hyperscalers, not by the needs and demands of the banks. In this sense, the tool is dictating the solution, whereas the solution should dictate the tool.

External AI providers tend to optimise their models for broad market appeal rather than for the unique demands of financial services. This leaves banks competing with identical AI capabilities, while missing opportunities to build the fraud detection algorithms, risk assessment models and regulatory compliance tools that can provide genuine competitive differentiation. That said, engaging with hyperscalers still has its advantages, with speed and cost the standouts.


The regulatory implications add another layer of complexity. Routing sensitive customer data through external APIs creates ongoing compliance headaches, particularly as data protection regulations continue to tighten globally. Even with contractual safeguards, banks still face growing regulatory scrutiny over sensitive financial data leaving the institution’s secure environment.

A shift to in-house capabilities

The technical barriers that once made self-hosted AI prohibitively expensive have diminished significantly. Breakthroughs in model architecture, compression, and distillation mean that models which previously required massive GPU clusters can now run on smaller, more affordable hardware. Recent open-source developments demonstrate this shift: advanced models deliver performance comparable to that of leading proprietary systems while running on standard servers.

New training methods have made it easier for banks to adapt existing AI models without the heavy cost of rebuilding them from scratch. Instead of retraining an entire system, banks can update only the parts that need it using a technique known as Low-Rank Adaptation (LoRA), refined further through Weight-Decomposed Low-Rank Adaptation (DoRA). These methods mean banks can fine-tune AI to their needs faster and at a fraction of the previous cost.
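
To make this concrete, here is a minimal sketch of how an engineering team might set up LoRA-style fine-tuning with the open-source Hugging Face transformers and peft libraries. The model name, target modules and hyperparameters are illustrative placeholders rather than a recommended configuration, and the use_dora flag enables the DoRA variant in recent versions of peft.

```python
# Minimal sketch: parameter-efficient fine-tuning with LoRA/DoRA via peft.
# Model name and hyperparameters are placeholders, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "example-org/open-llm-7b"  # hypothetical open-weight model
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# LoRA freezes the original weights and trains small low-rank update matrices;
# use_dora=True switches on the Weight-Decomposed (DoRA) variant where supported.
lora_config = LoraConfig(
    r=16,                                   # rank of the low-rank updates
    lora_alpha=32,                          # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections; model-dependent
    task_type="CAUSAL_LM",
    use_dora=True,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically a small fraction of total weights
```

The point of the approach is that only the small adapter matrices are trained while the base model stays frozen, which is what keeps the hardware requirements and cost modest enough for a bank to run the process in its own environment.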

There are also significant advantages for financial institutions once AI is brought in-house. Privacy and compliance immediately become strengths, with sensitive customer data remaining securely within the bank's own environment. This reduces regulatory exposure while reinforcing customer trust in banks as custodians of their data.

Beyond compliance, hosting AI in-house allows banks to specialise in ways hyperscalers cannot. By fine-tuning models on proprietary datasets such as transaction histories, risk profiles or fraud patterns, banks are able to develop capabilities that are unique to their organisation and impossible for competitors to replicate. This level of domain specialisation transforms AI from a generic utility into a differentiator in a competitive landscape.

Trust and differentiation also open the door to stronger customer relationships. Clients naturally favour services that combine cutting-edge intelligence with the reassurance that their data never leaves the institution. Additionally, with self-hosted models, banks gain the agility to experiment, innovate and deploy solutions at their own pace, rather than waiting for third-party providers to roll out new features.

Building tailored AI capabilities is therefore technically feasible and economically viable. This does not mean that banks should not engage with third-party vendors offering AI solutions. However, it does give them the room to choose technology partners who will help them develop and maintain their own AI capabilities, rather than waiting for services designed for the broader market.

Seizing an opportunity

As AI continues to evolve, we can see that what once required tens of millions in investment and massive data centres can now be achieved with orders of magnitude less capital. This makes it realistic for banks to explore building their own AI capabilities instead of relying solely on hyperscaler APIs.

It means financial institutions have more ways to approach AI integration in a manner that reflects their needs. Some might recognise the ongoing benefits of hyperscalers, while others will develop more confidence to build their own in-house capabilities. However, the choice is not binary – financial organisations can also consider strategic partnerships with firms that can advise on the in-house capabilities required and on creating and deploying a roadmap, while empowering the firms themselves to shape the strategy and process.

Ultimately, strategic partnerships that combine external expertise with internal control will allow financial institutions to innovate efficiently, safeguard sensitive data and tailor solutions to their unique needs. It’s an avenue we’re likely to see more financial firms adopting in the future.

Simon Thompson is Head of AI, ML and Data Science at GFT