The world witnessed in 2004 how moments of crisis spark an outpouring of generosity. After the Indian Ocean tsunami, more than $6bn was donated, often in small, heartfelt amounts. Today, extremist groups mimic that same urgency-driven model, targeting people’s goodwill to fund violence. Terror financing can happen through everyday digital donations: small, routine, and almost invisible. A $25 contribution to a seemingly legitimate charity site may, in reality, support violent attacks against women and children.
In early 2025, the US Treasury’s FinCEN issued an advisory warning: terrorist organisations like ISIS and its affiliates are exploiting digital donations, shell charities, and virtual assets to finance operations and circumvent global sanctions. This advisory wasn’t an isolated event; it was a response to a disturbing pattern that is rapidly accelerating in scale and sophistication.
Digital giving platforms, once seen purely as forces for good, have become attractive targets for bad actors. Designed for convenience and speed, these platforms allow anyone to contribute $10 or $20 to a cause within seconds. But this very frictionlessness — the seamless flow of funds, the minimal verification, the emotional appeal — has become a blind spot in the fight against terrorist financing.
The new face of fundraising abuse
Nefarious donation schemes may be difficult to uncover, but they tend to share specific red flags. Cryptocurrencies like Bitcoin, Monero, and Ethereum are ideal tools for illicit fundraising: borderless, fast, and pseudonymous. Terrorist organisations exploit this infrastructure with remarkable agility. They launch short-lived donation campaigns, often lasting just long enough to collect funds before they disappear, rebrand, and repeat. If a charity was registered only recently and lacks any track record, that’s a major red flag.
In many instances, sham charities cover up their limited track record by using forged images and documents taken from other charities. These entities operate without meaningful oversight, with visuals and narratives copied from legitimate humanitarian efforts. Groups with ties to extremism frequently use emotionally charged but vague language to appeal to donor compassion: “Help the children,” “Emergency response needed,” “Act now.” What’s missing is transparency, and that in itself should be a red flag. Without detailed information, there’s no way to know whether your money is buying food or weapons.
In 2020, US authorities seized millions in cryptocurrency and dismantled over 300 wallets and websites as part of a crackdown on three major terrorist fundraising campaigns. One site, FaceMaskCenter.com, posed as a Covid-19 PPE supplier, but was actually distributing counterfeit gear and routing payments directly to ISIS. Telegram campaigns by groups linked to al-Qaeda have followed similar patterns, often under the guise of humanitarian relief.

The technology gap
Despite years of advancement in financial crime detection, many platforms still operate in isolation. They do not share intelligence, analyse network activity, or screen for high-risk behaviours across the digital ecosystem. This leaves gaping holes in the system.
Bad actors know this. A terrorist-linked campaign might solicit funds on one platform, transfer crypto through a second, and cash out through a third, each step too subtle to flag individually. But when viewed as a network, the connections become clear: clustering of small donations from high-risk geographies, recurring campaign templates, or bursts of activity tied to new wallet addresses. These indicators often fly under the radar when AI and machine learning tools aren’t in place.
AI-powered analytics can process massive volumes of transactions, flagging anomalies that human analysts might miss. Network analysis tools can map links between donors, wallets, and platforms, revealing the web beneath the surface, something nearly impossible to do manually at scale.
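To make that concrete, the sketch below shows in rough outline how open-source graph tooling such as networkx could link donors, wallets, and platforms from donation records and surface clusters that span more than one platform. The records, node names, and thresholds are illustrative assumptions, not any vendor’s production logic.

```python
# Minimal sketch of graph-based link analysis across donation platforms.
# The donation records and naming scheme below are hypothetical; a real
# system would ingest data from many platforms and blockchains.
import networkx as nx

donations = [
    # (donor_id, wallet_address, platform, amount_usd)
    ("donor_a", "wallet_1", "platform_x", 20),
    ("donor_b", "wallet_1", "platform_y", 15),
    ("donor_c", "wallet_2", "platform_x", 25),
    ("donor_b", "wallet_2", "platform_z", 10),
]

g = nx.Graph()
for donor, wallet, platform, amount in donations:
    # Link donors, wallets, and platforms so activity that looks isolated
    # on any single platform becomes part of one connected cluster.
    g.add_edge(donor, wallet, amount=amount)
    g.add_edge(wallet, platform)

# Connected components expose donor/wallet clusters that span several
# platforms, a pattern that per-platform monitoring tends to miss.
for cluster in nx.connected_components(g):
    platforms = {n for n in cluster if n.startswith("platform")}
    if len(platforms) > 1:
        print(f"Cluster spanning {len(platforms)} platforms: {sorted(cluster)}")
```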
Yet despite the growing threat, the financial institutions facilitating these transactions (banks, fintechs, and payment service providers) often rely on outdated, rules-based monitoring systems that are blind to the unique patterns of terrorist financing through digital donations. While charities themselves are not subject to AML regulations, the institutions processing their payments are. That makes it essential for these regulated entities to adopt AI tools that can detect hidden risks before they escalate.
Without intelligent systems that can spot subtle patterns such as bursts of microdonations, geographic clustering, or transactional links to known threat actors, illicit campaigns will continue to slip through the cracks. AI lets the resource-stretched compliance teams of banks, other financial institutions, and fintechs flag far more of the relevant cases, keeping pace with, and effectively countering, a rapidly growing terrorist financing threat.
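As a rough illustration of one such pattern, the snippet below flags a burst of microdonations to a single wallet within a short window, using pandas. The column names, dollar limit, window, and burst count are assumed values chosen for readability, not calibrated detection thresholds.

```python
# Illustrative sketch: flag bursts of microdonations to one wallet inside
# a short window. Thresholds and column names are assumptions, not tuned
# values from any production monitoring system.
import pandas as pd

txns = pd.DataFrame({
    "wallet":    ["w1", "w1", "w1", "w1", "w2"],
    "amount":    [15.0, 20.0, 10.0, 25.0, 500.0],
    "timestamp": pd.to_datetime([
        "2025-01-01 10:00", "2025-01-01 10:05",
        "2025-01-01 10:12", "2025-01-01 10:20",
        "2025-01-03 09:00",
    ]),
})

MICRO_LIMIT = 50.0     # donations under $50 count as "micro"
WINDOW = "1h"          # burst window
MIN_BURST_COUNT = 4    # how many microdonations trigger a flag

micro = txns[txns["amount"] < MICRO_LIMIT].sort_values("timestamp")

# Rolling count of microdonations per wallet within the time window.
burst_counts = (
    micro.set_index("timestamp")
         .groupby("wallet")["amount"]
         .rolling(WINDOW)
         .count()
)

flagged = burst_counts[burst_counts >= MIN_BURST_COUNT]
print(flagged)  # wallets and times where a microdonation burst occurred
```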
Digital fundraising must adopt risk-scoring tools like traditional transfers
As digital fundraising becomes more deeply embedded in the financial ecosystem, it must be treated with the same scrutiny as traditional transfers. Platforms must adopt risk-scoring tools that go beyond static KYC checks and leverage AI for transaction screening. Even a handful of misdirected donations can fund violence. Responsible design, oversight, and modern threat detection are no longer optional; they are essential for public trust, national security, and the future of humanitarian aid.
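For illustration only, the sketch below shows what a simple campaign-level risk score might combine once signals like those above are available. The features, weights, and alert threshold are hypothetical; in practice such scores are learned from data and tuned continuously rather than hand-coded.

```python
# Hypothetical campaign risk score combining the signals discussed above.
# Feature names, weights, and the alert threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class CampaignFeatures:
    account_age_days: int        # newly registered charities score higher
    microdonation_burst: bool    # burst pattern flagged by monitoring
    high_risk_geo_share: float   # share of donations from high-risk regions
    linked_to_known_wallet: bool # graph link to a sanctioned or flagged wallet

def risk_score(f: CampaignFeatures) -> float:
    score = 0.0
    if f.account_age_days < 30:
        score += 0.3
    if f.microdonation_burst:
        score += 0.25
    score += 0.2 * min(f.high_risk_geo_share, 1.0)
    if f.linked_to_known_wallet:
        score += 0.4
    return min(score, 1.0)

campaign = CampaignFeatures(
    account_age_days=12,
    microdonation_burst=True,
    high_risk_geo_share=0.6,
    linked_to_known_wallet=False,
)
if risk_score(campaign) >= 0.5:  # illustrative alert threshold
    print("Escalate campaign for enhanced due diligence")
```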
Peter Reynolds is CEO of ThetaRay