Banks are increasingly comfortable discussing artificial intelligence, automation, and algorithmic decision-making. What they are far less comfortable discussing is the quiet infrastructure that now sits beneath those decisions: synthetic data, proxy models, inferred behaviours, and simulated realities that increasingly stand in for customers, transactions, and risk.
This shift has not happened through a single strategic decision. It has happened incrementally, often invisibly, through operational necessity. Data gaps needed filling. Testing environments required realism. Privacy constraints limited the use of live data. Time pressure rewarded approximation over completeness.

The result is that many banking decisions today are no longer made on “real” customer behaviour alone. They are made on synthetic representations of reality. And yet, governance structures remain firmly anchored in an era where data was assumed to be observable, traceable, and historically grounded.

This is not a technology problem. It is a governance problem.

From automation to abstraction

Early banking automation was transactional. Systems executed rules designed by humans against clearly defined inputs. Accountability was linear: policy, process, execution, and outcome.

Algorithmic banking changed that structure. Decision logic became probabilistic. Outcomes were shaped by patterns rather than rules. Learning systems adjusted behaviour over time.

Synthetic banking takes this one step further. Decisions are now influenced not only by observed reality, but by constructed reality.

Synthetic datasets simulate customer profiles. Synthetic transactions model flows that have not yet occurred. Synthetic stress scenarios project behaviour under conditions that may never exist. These tools are valuable. In many cases, they are essential.
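To make the idea concrete, here is a minimal sketch of what "statistical resemblance" means in practice: synthetic transaction amounts drawn from a distribution fitted to an observed sample, so no real record is replayed but the aggregate shape is preserved. This is illustrative only; the function name and the log-normal assumption are the author's stand-ins, and production synthetic-data pipelines use far richer generative models and formal privacy guarantees.

```python
import math
import random
import statistics

def synthesize_transactions(observed_amounts, n, seed=0):
    """Generate n synthetic transaction amounts that statistically
    resemble the observed sample (simple log-normal fit).

    Hypothetical sketch: real systems model many more dimensions
    (counterparty, timing, channel) and add privacy protections.
    """
    rng = random.Random(seed)
    # Fit a log-normal by taking mean and stdev of log-amounts.
    logs = [math.log(a) for a in observed_amounts]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)
    # Sample new amounts from the fitted distribution.
    return [math.exp(rng.gauss(mu, sigma)) for _ in range(n)]
```

The point governance should notice: every number this produces is plausible, and none of them was ever observed.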

But they introduce a subtle shift: decisions are increasingly based on what could plausibly be true, rather than what has been demonstrably observed.

That shift fundamentally alters the risk landscape.

Where governance quietly falls behind

Most bank governance frameworks still assume three things:

1. Data represents reality

2. Models operate on observable inputs

3. Accountability can be traced through committees and approvals

Synthetic systems challenge all three.

Synthetic data is, by definition, an abstraction. It is designed to resemble reality without being reality. That resemblance is statistical, not factual. Yet governance structures often treat synthetic outputs with the same confidence as historical evidence.

Model risk frameworks typically focus on model performance, bias, and validation. They are less equipped to assess the ontological status of the data itself. Was the input observed, inferred, simulated, or generated? In many institutions, that distinction is not visible at committee level.

Accountability becomes blurred. When a decision is questioned, responsibility often flows backwards through risk, compliance, and technology committees. But synthetic systems diffuse responsibility. No single function “owns” the constructed reality on which the decision was based.

The risk does not sit within a model. It sits between governance layers.

Synthetic data and false comfort

Synthetic systems are often introduced to reduce risk: privacy risk, concentration risk, regulatory exposure. Ironically, they can create a different form of risk – false comfort.

Dashboards remain green. Back-testing passes. Stress scenarios show resilience. But these assurances are only as strong as the assumptions embedded in the synthetic layer.

When real-world conditions diverge from simulated ones, institutions may not notice immediately. The signals are delayed. The confidence persists. By the time outcomes diverge meaningfully, the organisation is already committed to the decision path.

This is not theoretical. It mirrors earlier failures where internal metrics masked operational fragility. The difference now is that the masking happens before reality fully unfolds.

Synthetic banking accelerates decision-making while weakening the feedback loop.

Committees were not designed for constructed reality

Bank committees evolved to manage human decisions, supported by data. They were not designed to interrogate simulated worlds.

Risk committees ask whether thresholds are breached. Model committees ask whether validation standards are met. Audit committees ask whether controls exist.

Few committees ask more fundamental questions:

  • What assumptions were embedded in the synthetic layer?
  • What behaviours were inferred rather than observed?
  • What uncertainties were smoothed out to make the model usable?
  • Where does simulated confidence replace empirical evidence?

These questions often sit outside formal mandates. As a result, they sit nowhere.

This creates a governance gap that is subtle, systemic, and difficult to detect – precisely the kind of risk that institutions historically struggle with most.

The human accountability problem

One of the most under-discussed aspects of synthetic banking is how it changes human accountability.

When decisions are grounded in historical data, accountability can be challenged: the data can be re-examined, the assumptions debated. When decisions are grounded in synthetic constructs, challenging outcomes becomes harder. The organisation ends up debating models rather than decisions.

Front-line teams may not fully understand the synthetic layers influencing outcomes. Senior executives may trust dashboards without visibility into how those dashboards were constructed. Boards may receive assurance without insight.

Accountability becomes procedural rather than substantive.

This is not a failure of intent. It is a failure of institutional design.

Regulation will follow – but slowly

Regulators are beginning to engage with synthetic data, particularly around privacy and testing. However, regulatory frameworks tend to lag operational reality.

Banks should not wait for prescriptive rules. By the time formal regulation arrives, institutions will already have embedded synthetic systems deep into credit, fraud, pricing, and customer management.

The more resilient approach is to treat synthetic banking as a governance design challenge, not a compliance exercise.

That means:

  • Making the use of synthetic data visible at senior levels
  • Distinguishing between observed and constructed inputs in decision artefacts
  • Re-thinking committee mandates to include epistemic risk, not just model risk
  • Accepting that not all confidence is equal – some confidence is simulated
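The second point above, distinguishing observed from constructed inputs, can be made operational with something as simple as a provenance tag on every input that feeds a decision artefact. The sketch below is a hypothetical illustration (the names `Provenance`, `DecisionInput`, and `constructed_share` are the author's, not any standard): each input carries a label, and a committee pack can then surface what fraction of a decision rested on constructed rather than observed data.

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    """The four statuses a decision input can have."""
    OBSERVED = "observed"    # recorded from a real event
    INFERRED = "inferred"    # derived from other observed data
    SIMULATED = "simulated"  # produced by a scenario engine
    GENERATED = "generated"  # produced by a synthetic-data model

@dataclass(frozen=True)
class DecisionInput:
    name: str
    value: float
    provenance: Provenance

def constructed_share(inputs):
    """Fraction of inputs that were not directly observed —
    one simple metric a governance pack could report."""
    constructed = [i for i in inputs if i.provenance is not Provenance.OBSERVED]
    return len(constructed) / len(inputs)
```

Nothing here is sophisticated, which is the point: the governance gap is not that provenance is hard to record, but that few institutions require it to be recorded at all.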

A quiet design choice facing banks

Synthetic banking is not inherently dangerous. In many cases, it enables safer experimentation and better resilience. But it changes the nature of institutional decision-making.

Banks now face a quiet design choice.

They can continue to layer synthetic systems onto governance structures designed for a different era, accepting growing opacity as the price of speed.

Or they can redesign governance to recognise that decisions are increasingly made in constructed environments, and that accountability must evolve accordingly.

This choice will not appear in strategy decks. It will surface later, in outcomes.

The institutions that navigate this well will not be those with the most advanced algorithms. They will be those that understand what their systems are actually deciding on – and who remains accountable when reality diverges from the simulation.

Dr. Gulzar Singh, Chartered Fellow – Banking and Technology; Director, Phoenix Empire Ltd