For the past decade, a cloud-first philosophy dominated technology strategy, promising scalability, flexibility and cost efficiency. Today, a fundamental shift is underway across government agencies and regulated industries worldwide. That mantra is being quietly rewritten as "sovereign first".
Data sovereignty, the ability to maintain meaningful, verifiable control over data, technology, operations, and legal exposure, has become a business imperative. It is not about isolating everything in one environment, but about ensuring organisations dictate how sensitive information is stored, processed, governed, and protected across all environments. In the digital economy, this distinction matters. Data sovereignty is now a geopolitical, economic, and security concern shaping national policy, enterprise risk posture, and competitive advantage.

The rising tide of digital borders

Regulatory frameworks such as GDPR in Europe and PIPEDA in Canada were important catalysts, but they represent only the beginning. In India, the Digital Personal Data Protection (DPDP) Act, RBI data localisation mandates and SEBI cloud guidelines reflect a broader shift toward digital autonomy and national security. Governments worldwide are actively integrating data sovereignty into their national security strategies.

Geopolitical shifts are driving this transformation. More organisations are paying attention to the extraterritorial reach of foreign legal systems, particularly laws that can compel access to data stored by global service providers outside the country where the data originated.

Even if the data is physically in-country, foreign ownership and foreign legal obligations can still create exposure; this is precisely the loophole that legal sovereignty is meant to close. These concerns are compounded by cyber threats, where jurisdiction and control over data infrastructure directly influence an organisation's ability to protect sensitive information.

Economic considerations complete the picture. Governments are reframing data as a sovereign asset, the 'fuel' of the modern economy, and are legislating to ensure that the economic value created from their citizens' data remains within national borders. This form of digital protectionism mirrors historical resource nationalism. Place critical data in the wrong environment or with the wrong partner, and organisations risk losing control. The basic requirement is clear: you must know where your data lives, who manages the infrastructure on which it sits, and whose laws govern it.

The AI paradox: Why intelligence complicates sovereignty

The rise of AI introduces a new paradox at the intersection of business value and customer trust. To get the most value from data and gain competitive insights, organisations must often expose that data to AI models trained and hosted elsewhere. But this creates risks.

Large Language Model (LLM) exposure risk represents the first critical vulnerability. When organisations feed sensitive data into public LLMs for tuning or retrieval-augmented generation (RAG), that data may travel through systems that reside outside their sovereign jurisdiction.

Data lineage, access rights, retention practices, and model behaviour often remain opaque, creating governance gaps that undermine compliance and trust. In sovereignty terms, the issue is rarely just where the data is stored, but where it is processed, which control plane governs it, and whether external operators or foreign legal regimes can reach it.

The inference compute bottleneck compounds this risk. Even when data is stored locally, AI inference, the stage at which models process inputs and generate responses, may be routed to offshore environments due to GPU scarcity or platform architecture.

Even transient overseas processing of regulated data can violate localisation requirements and expose organisations to regulatory risk. If data remains in-country but inference occurs elsewhere, has sovereignty been preserved? In most legal frameworks, the answer is no.

The business impact is significant. According to a 2025 study, 80% of customers report discontinuing relationships with brands over data privacy concerns, turning sovereignty lapses into reputational damage, customer churn and long-term revenue risk. This is the AI paradox in its purest form: the more valuable your data becomes as an input to AI systems, the more vulnerable it becomes to sovereignty violation.

How architecture is emerging as the solution

The good news is that organisations are moving away from the binary public-versus-private cloud debate toward a spectrum of sovereign options.

Sovereign public cloud regions represent the first major advancement. Hyperscalers are now creating sovereign repositories within specific regions. These data centres are physically located in-country and often operated by a trusted local partner, such as a national telecommunications or IT provider, to create a legal 'air gap'. This arrangement prevents unauthorised access to data by foreign entities.

Private AI clouds address the most sensitive data needs. Organisations are deploying open-source or proprietary models in tightly controlled environments where neither training data nor inference prompts leave organisational custody. This enables AI adoption without sacrificing sovereignty across the stack: data, legal exposure, technology control and operational access.

Hybrid sovereign architecture offers the most pragmatic path. Less sensitive data and processes use the cost efficiency of global public cloud regions, while critical data and processes remain within controlled sovereign environments. What matters is not where infrastructure resides, but whether organisations can prove governance, auditability, and legal accountability across every layer, including control-plane ownership, encryption key control and who operates the environment.
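To make the hybrid idea concrete, here is a minimal sketch of a workload-placement policy in Python. All of the names (the sensitivity tiers, the deployment targets, the `involves_inference` flag) are hypothetical illustrations of the principle described above, not a real API or a specific vendor's mechanism:

```python
# Hypothetical sketch: routing workloads under a hybrid sovereign policy.
# Tier names and deployment targets are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = 1      # e.g. marketing content, open data
    INTERNAL = 2    # business records without regulated personal data
    REGULATED = 3   # personal or sector-regulated data (e.g. DPDP, RBI scope)


@dataclass
class Workload:
    name: str
    sensitivity: Sensitivity
    involves_inference: bool  # does an AI model process this data?


def place(workload: Workload) -> str:
    """Return a deployment target; regulated data, and its inference, stays sovereign."""
    if workload.sensitivity is Sensitivity.REGULATED:
        # Both storage and inference compute must remain in-country.
        return "sovereign-private-cloud"
    if workload.involves_inference and workload.sensitivity is Sensitivity.INTERNAL:
        # Keep prompts and retrieved context inside a locally governed region.
        return "sovereign-public-region"
    return "global-public-cloud"
```

The design point is that placement is decided per workload from its classification, and that inference compute is treated as a sovereignty-relevant attribute in its own right, not an afterthought of where the data happens to be stored.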

Strategic advice: Data sovereignty as strategy

Adopting a data sovereignty strategy is fundamentally a classification and architecture challenge, not a procurement exercise. Organisations moving in this direction should focus on three key priorities:

First, audit and classify data assets by sovereignty requirements to determine geographical location and identify datasets that must comply with local jurisdictions. Second, establish infrastructure that allows AI models to operate directly on data rather than sending data to external systems, ensuring inference compute and data repositories remain within sovereign boundaries. Third, scrutinise cloud contracts to ensure that metadata clauses prevent service providers from claiming ownership of generated data, which can be reverse engineered to reveal competitive intelligence.
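The first priority, auditing the estate, can be sketched as a simple inventory check. This is a hypothetical illustration only: the field names, region codes and inventory shape are assumptions, and a real audit would draw on asset-management and cloud-provider tooling rather than a hand-built dictionary:

```python
# Hypothetical sketch: flagging sovereignty gaps in a data-asset inventory.
# Field names and region codes are illustrative assumptions.

def sovereignty_gaps(inventory):
    """Return assets whose storage or processing region leaves the required jurisdiction."""
    gaps = []
    for asset in inventory:
        required = asset["required_jurisdiction"]
        # Inference counts as processing, so both stages are checked.
        for stage in ("stored_in", "processed_in"):
            if asset[stage] != required:
                gaps.append((asset["name"], stage, asset[stage]))
    return gaps


inventory = [
    {"name": "customer-pii", "required_jurisdiction": "IN",
     "stored_in": "IN", "processed_in": "US"},  # local storage, offshore inference
    {"name": "public-docs", "required_jurisdiction": "IN",
     "stored_in": "IN", "processed_in": "IN"},
]

# "customer-pii" is flagged: stored in-country but processed offshore,
# which is exactly the inference-compute gap discussed earlier.
```

Even this toy version captures the key discipline: treating "where it is processed" as a first-class audit field alongside "where it is stored".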

Data sovereignty is a strategic imperative

Data sovereignty is often framed as a cost or a constraint, but this view underestimates its strategic significance. Data sovereignty is a strategic imperative: it is the difference between owning your competitive advantages and licensing them from foreign corporations, potentially subject to foreign governments. The organisations that succeed in the next decade will not be those most aggressively cloud-enabled. They will be those that strategically deploy their data and AI capabilities within frameworks that protect their sovereignty while still capturing innovation at scale. The question is no longer whether to pursue data sovereignty; it is how to build it strategically into everyday architecture.

Shannon Bell, EVP, CIO & CDO, OpenText