On 11 November, at London’s JournalismAI Festival 2025, Liz Lohn, Financial Times director of product and AI, told an audience that AI disclosure presented a challenge for the company’s trusted, premium brand.
“People pay a lot, and whenever they see an AI disclaimer, that erodes trust and creates the feeling that AI in journalism equals cheap,” says Lohn. In fact, Lohn admits that there have been instances of readers cancelling their subscriptions, citing AI disclosure as the reason.
Walking this delicate path of AI disclosure is not unique to the FT. But the company serves as a bellwether for difficult decisions around AI transparency that all companies are starting to face. And the FT’s success at weathering successive technology transformations makes its approach particularly noteworthy.
The FT is being very careful, explains Lohn, about the manner of disclosure. “If the content has been through human review, do we really need to put in the disclaimer that AI at some point has been used in the process, or is the responsibility on the journalist, on the FT?” says Lohn, adding: “because the disclaimers do really erode trust.”
Even when there is no human in the loop, the content needs to be viewed as trustworthy. “Again, we’re trying to be very careful with it,” admits Lohn. In both cases the output needs to maintain a very high quality, so that the uplift in engagement can offset the negative impact of disclosure and the loss of users who reject the brand as a result.
Transparency a cornerstone of AI ethics
The FT’s real-world experience runs counter to the received wisdom on AI transparency. Widespread AI adoption over the last three years has given rise to an accompanying and burgeoning area of AI ethics. Within this discipline, AI transparency has been a cornerstone of building user trust. The evolution of this approach includes the idea of ‘radical transparency’, a phrase popularised in 2017, well before the AI revolution, by hedge fund Bridgewater Associates founder Ray Dalio. The approach advocates open communication within businesses, which in practice means sharing company information, data and processes with employees and, to some extent, the general public. This, says Dalio, is the most direct route to building trust.
As businesses grapple with the moral, philosophical and regulatory quandaries of implementing AI in their everyday work processes and products, the ethical AI movement has reinforced this idea of transparency as the route to greater trust in the technology and the brands using it. But this may all change.
AI disclosure erodes trust
Indeed, a study earlier this year reinforced this counter-narrative when it concluded that AI disclosure eroded user trust in almost all scenarios. The Transparency Dilemma: How AI Disclosure Erodes Trust, by Schilke and Reimann, ran 13 experiments with over 5,000 participants and uncovered a hidden cost to AI transparency: disclosure reduced perceived legitimacy, driven by negative assumptions about what it means to use AI.
The study’s co-author Oliver Schilke explains: “If you’re transparent about something that reflects negatively on you, the trust benefit you get might be overshadowed by the penalty for what you revealed. There’s a trade-off.”
The study found that AI disclosure reduced trust across a range of roles and tasks, with a consistent drop when AI was disclosed for drafting content, proofreading or providing structural suggestions. This held even when disclosure was voluntary, and mandatory disclosure did not help: people still rated disclosed work as less legitimate. The only moderating effect was among people with a more positive attitude toward technology, or who perceive AI as accurate, though even then the drop in trust persisted. And, predictably, the study found that the erosion of trust is stronger when AI usage is exposed by others rather than self-disclosed.
AI transparency is a business risk
Nevertheless, most businesses are taking the full-disclosure approach to AI. AI transparency is an issue for every business as they all automate customer service in some way or another, says CRM platform Zendesk’s European CTO Mattias Goehler. According to Zendesk’s own research, around 25% of all customer interactions are high-value interactions, which may include complicated customer service tasks, upselling and complex questions. The other 75% are ripe for some form of AI automation, and for these, the received wisdom is to let the customer know they are interacting with an AI agent.
Goehler says that Zendesk’s best-practice recommendation to its customers is transparency around each customer interaction. And even though the decision on how to field AI agents ultimately lies with Zendesk’s customers, “I have rarely seen it any other way,” he says.
“Of course, it’s all in the script and how the customer wants to announce and approach this,” he says. But transparency is the underlying premise of how the overwhelming majority of businesses are implementing AI in their customer service processes.
Transparency will become even more critical when Zendesk launches its AI voice agents at the beginning of 2026, a move that demonstrates the extent of AI’s penetration into the company’s customer service offering, which has become truly omnichannel.
Goehler’s view on the best way to build customer trust is simply to make the technology work well and resolve the problems it was designed to solve. “Coming up with the right answer, and if not, handing over to a human that can solve the problem,” says Goehler, is the right route to customer trust over time.
Zendesk’s own research, which examined over 15 million customer service interactions (including human, AI-augmented and fully automated), found that 47% of them could be classed as failed interactions. This leaves significant room for improvement and, in the process, the opportunity to build trust in the methods used to improve this rather dim success rate.
Another company taking the full-disclosure route is the UK’s BBC. Nathalie Malinarich, executive news editor for digital development, told the audience at the JournalismAI Festival 2025 that the corporation is very careful about the wording of AI disclosure and is very “descriptive about what we do, so that it’s very clear which bits have been AI assisted. We do disclose everything. So, whether it’s assistance with translation, assistance with summarisation, it is all disclosed.”
But Malinarich thinks that users’ relationship with AI disclosure will certainly change over time, as people become used to the idea of AI-assisted or AI-generated content. “We know that there are big differences generationally between acceptance of AI, and I think over time you’d expect people to translate with AI, for example.”
Will AI adoption ease the AI trust penalty?
It’s still unclear whether the AI transparency penalty will lessen over time. But, if AI becomes more reliable, disclosing its use may well have less of a negative effect on trust.
The British Standards Institution’s (BSI) common standard, BS ISO/IEC 42001:2023, provides a framework for organisations to establish, implement, maintain and continually improve an AI management system (AIMS), ensuring AI applications are developed and operated ethically, transparently and in alignment with regulatory requirements. It helps manage AI-specific risks such as bias and lack of transparency.
Mark Thirlwell, the BSI’s global digital director, says that mechanisms for mutually agreed standards are critical for building trust in AI. The BSI’s own research found that a marker of trust, such as a BSI common standard, helps build assurance in an AI model’s safety. On the transparency-to-trust equation, Thirlwell focuses on transparency of the underlying training data rather than on whether an output is disclosed as AI-generated. In his view, trustworthiness is built at AI conception, as well as at the generation phase.
“You wouldn’t buy a toaster if someone hadn’t checked it to make sure it wasn’t going to set the kitchen on fire. It’s the same for AI, with a few more dimensions than simply does it work well?” he explains.
Thirlwell posits that common standards can, and must, interrogate the trustworthiness of AI. Does it do what it says it’s going to do? Does it do that every time? Does it avoid doing anything else, as hallucination and misinformation become increasingly problematic? Does it keep your data secure? Does it have integrity? And, unique to AI, is it ethical?
“If it’s detecting cancers or sifting through CVs, is there going to be a bias based on the data it holds?” This is where transparency of the underlying data becomes key. “If I get declined for a mortgage application online because an AI algorithm decides I’m not worthy, can I understand why that is? Can I contest it?” says Thirlwell.
Thirlwell’s view of how the transparency-to-trust roadmap will develop over the next decade centres on specialist use cases. “Medical devices, AI for biometric identification, and other really niche use cases will make a big difference. And trust will grow as these evolve into place. But there’s a lot of work needed on governance and regulation, to clarify and make it easier for organisations to adhere to regulation. Once that’s addressed, that will help to grow trust.”
The caveat to tangible and beneficial use cases building trust is that it will only happen as long as guardrails are put in place. Otherwise, it only takes “one big front-page issue that’s going to rock trust,” warns Thirlwell.
