A number of key takeaways jump out from the 2026 Anti-Fraud Technology Benchmarking Report.

For example, 77% of fraud fighters say deepfake attacks are on the rise. 55% expect such ploys to increase significantly over the next 24 months, but only 7% feel fully prepared to stop them.

Respondents represent more than a dozen industries, most prominently government/public sector and banking/financial services, alongside meaningful participation from professional services, manufacturing, insurance, technology, education, energy and health care.

In the ranking of rising threats, deepfake attacks are closely followed by consumer fraud/scams (75%), generative AI document fraud/forgery (75%) and deepfake digital injection (72%).

Governance lags dangerously behind AI adoption

There are also some notable contradictions. None more so than this one: 75% of respondents consider AI models’ bias or lack of fairness an important factor for adopting the technology, but only 18% of respondents’ organisations test their AI models for bias or fairness.

Similarly, 82% say explainability is important, but just 6% feel completely confident explaining how their AI/ML models make anti-fraud decisions. The report rightly flags this as a particular issue for banks, insurers and other regulated entities: deploying AI in this manner risks regulatory consequences and legal liability on top of reputational damage.

AI and machine learning (ML) adoption is accelerating but, the report concludes, remains far from ideal. One-quarter (25%) of organisations now use AI/ML in their anti-fraud programmes, according to respondents, up from 18% in 2024. Another 28% expect to adopt it by 2028. For organisations still on the sidelines, the window to build AI competency before competitors and criminals widen the gap is narrowing fast.

Budgets are growing – but so are constraints

More than half of respondents (55%) expect their organisations to increase their anti-fraud technology budgets over the next two years. Even so, budgetary and financial restrictions remain the leading barrier to implementation, cited as a major or moderate challenge by 84% of respondents.

GenAI is moving from aspiration to application

Although only 16% of respondents indicate their organisations currently use generative AI as an anti-fraud tool, another 58% plan to in the future. Among those already using GenAI, top applications are phishing and scam detection (49%), risk identification/assessment (46%) and report writing (45%).

AI agents are hotter still

Nearly one in 10 respondents (8%) say their organisations use agentic AI for fraud fighting, and nearly one-third (31%) more expect to deploy it by 2028 – the highest near-term adoption expectation of any emerging technology category examined.

Physical biometrics leads emerging tech adoption

Physical biometrics is now the most widely adopted emerging technology gauged in the study, used in anti-fraud programmes by nearly half (45%) of organisations surveyed – up from roughly one-third (34%) in 2022. In contrast, cloud-native fraud detection platforms and automation remain significantly underutilised, used by only 10% and 29% of organisations, respectively.

Quantum computing’s potential impact

Most respondents (62%) expect quantum computing and quantum AI to materially impact fraud detection and prevention by 2030 – and a surprising 11% say it is already doing so.

Further information and access to the full report are available via this link