Deep fake technology, which uses artificial intelligence to create realistic but fabricated audio and video of a person, is becoming increasingly sophisticated, and criminals are using it to target retail banks. So far, many banks are not fully aware of the threat it poses. Jane Cooper reports

It looked like Obama, it sounded like Obama, but it wasn’t actually Obama. It was, in fact, a deep fake: a video created by Buzzfeed in April 2018 that had Barack Obama speaking words he had never said.

The words belonged to the actor Jordan Peele, whose mouth had been merged with real footage of Obama, like a video-and-audio Photoshop job powered by artificial intelligence. In the world of politics, fake news and misinformation, such technology is becoming commonplace. For retail banks, the threat is approaching rapidly.

Avivah Litan, vice president and distinguished analyst at Gartner, predicts that in three years’ time deep fake technology will be used in one out of five account takeover attacks. “Right now, it might seem like a far-away problem, but in less than three years it will be right in front of us,” she says.

The technology is already sophisticated, and free versions — such as the ReFace app — are easily available. Users can upload their selfie and have their image superimposed into classic movie scenes or music videos.

Other positive uses of the technology have also been developed: footballer David Beckham appeared to speak nine languages in an April 2019 ‘Malaria Must Die’ campaign video, and other celebrities have been edited into old movies for fun. There are sinister uses, however, such as superimposing famous people into pornographic films.

Account authentication, brand protection

At the moment, deep fakes are a big issue in political disinformation and pornography, says Litan. The largest use of the technology is in ‘revenge porn’, where the face of an ex-partner is merged with the body of an adult film performer and the result circulated as revenge.

For retail banks, the largest threats are in account authentication, whether verifying an individual at the onboarding stage or granting access to an existing account, and in brand protection.

A bank CEO, for example, could feature in a video saying words they have never said; if those words were racist, sexist or otherwise offensive, the effect on the bank’s reputation and share price could be catastrophic.

Deep fakes are also being used in more sophisticated versions of the classic phishing scam. In one publicised case, an executive at a UK energy company wired over $200,000 to a bank account after being told to do so by the group CEO.

But it wasn’t the CEO they were speaking to, and it wasn’t a company account: the criminals quickly dispersed the funds across a network of accounts, and the money disappeared without a trace. The deep fake in this case used artificial intelligence to learn the sound of the CEO’s voice, including accent, tone and cadence.

With natural language processing and natural language generation, the bot was able to hold a real conversation with the executive. At scale, such bots could call contact centres and access accounts while posing as retail bank customers, changing their details or wiring money to other accounts.

Potential threat to digital onboarding

Another major risk for retail banking is money laundering, where deep fakes could be used to open accounts in the names of unwitting customers. Andrew Bud, founder and CEO of iProov, comments that this is a problem wherever remote onboarding of customers is possible.

In Germany, for example, it is possible to open an account purely online, with just a video call to authenticate the customer; bank staff never need to see the applicant in the flesh to check that they are a genuine person. The risk with deep fakes, says Bud, is that they could make video call onboarding “dangerously obsolete”.

Fake identities could be used to open bank accounts for fraud or terror financing, says Philipp Pointner, chief product officer at Jumio. In the past, he explains, a government-issued identity document, such as a driver’s licence or passport, would have been enough to open an account online.

The ID could be stolen, or an image of a driver’s licence found on the internet, and that would be enough. If the document was checked against a national database, it would be verified because it was a genuine document; there was no check that the person applying for the account was the same as the person on the ID.

Pointner explains that it then became necessary to check that the face of the applicant matches the face on the photo ID. “Then we found that is not enough — we need to make sure that it is a live person,” he says. This is done by asking the applicant, over video, to turn their face to the right and to the left, for example, to show that they are a live, moving person.
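As an illustration, here is a minimal sketch of that kind of active head-turn check. The yaw values and threshold are simulated assumptions for illustration; in practice the per-frame head pose would come from a face-landmark estimator such as MediaPipe or dlib, and this is not a description of Jumio’s actual pipeline.

```python
# Active liveness sketch: ask the user to turn their head left, then
# right, and confirm the head actually moved. Yaw values are simulated;
# a real system would estimate them from face landmarks per video frame.
def passes_turn_test(yaw_per_frame: list[float], threshold_deg: float = 15.0) -> bool:
    # Negative yaw = head turned left, positive yaw = head turned right.
    turned_left = min(yaw_per_frame) < -threshold_deg
    turned_right = max(yaw_per_frame) > threshold_deg
    return turned_left and turned_right

print(passes_turn_test([0.0, -20.0, -5.0, 18.0, 2.0]))  # True: both turns seen
print(passes_turn_test([0.0, 1.0, -2.0, 0.5]))          # False: head barely moved
```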

However, deep fakes have overcome this security hurdle: the image of the person on the ID could be superimposed onto the body of the criminal, who is following the instructions to prove they are a live person.

One way to counter such attacks, Pointner explains, is to ensure that video which was not captured on the device being used for verification cannot be inserted into the stream. Jumio’s authentication solution, for example, guards against this, so another video cannot be injected into the stream while a new customer is being onboarded.
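The article does not detail Jumio’s mechanism, but a hypothetical sketch of one way to bind video to the live capture session uses a per-session nonce and a keyed hash, so a pre-made deep fake injected into the stream carries no valid tag. The scheme and all names below are illustrative assumptions, not Jumio’s implementation.

```python
# Hypothetical session-binding sketch: the server issues a one-time nonce,
# trusted capture code on the device tags each frame with it, and the
# server rejects frames whose tags don't bind to this session.
import hashlib
import hmac
import secrets

def issue_nonce() -> bytes:
    # Server: a one-time nonce tied to this onboarding session.
    return secrets.token_bytes(16)

def tag_frame(device_key: bytes, nonce: bytes, frame: bytes) -> bytes:
    # Device: trusted capture code tags each frame as it is recorded.
    return hmac.new(device_key, nonce + frame, hashlib.sha256).digest()

def accept_frame(device_key: bytes, nonce: bytes, frame: bytes, tag: bytes) -> bool:
    # Server: a clip injected from outside the capture path carries no
    # valid tag for this session's nonce, so it is rejected.
    return hmac.compare_digest(tag_frame(device_key, nonce, frame), tag)

key = secrets.token_bytes(32)
nonce = issue_nonce()
frame = b"...jpeg bytes..."
print(accept_frame(key, nonce, frame, tag_frame(key, nonce, frame)))  # True
print(accept_frame(key, nonce, frame, b"\x00" * 32))                  # False: injected
```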

iProov’s proprietary solution

Bud’s company iProov has a proprietary solution to counter such deep fake attacks. It verifies genuine presence, which Bud explains is different from liveness. “It is not just is this the right person, but whether they are a real person and they are here right now,” he says.

The solution uses a proprietary ‘flashmark’ that is sent from iProov’s servers to the user’s device, whose screen illuminates their face with a random, one-time sequence of colours. A video of the user’s face is then analysed to confirm that the sequence being reflected matches the one generated by iProov. The solution is also convenient: the user does not have to do anything aside from letting the colours play across their face.
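iProov’s system is proprietary, but the challenge-response idea can be sketched. In the toy code below, the ‘frames’ are simply colour names and the hard computer-vision step of reading reflections off a face is reduced to an identity function; everything here is an illustrative assumption, not iProov’s implementation.

```python
# Flashmark-style challenge-response sketch: the server issues a random
# one-time colour sequence, the user's screen flashes it at their face,
# and the server checks the reflections in the returned video match.
import secrets

PALETTE = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def issue_challenge(length: int = 8) -> list[str]:
    # Server side: an unpredictable, one-time colour sequence.
    return [secrets.choice(PALETTE) for _ in range(length)]

def reflected_colours(video_frames: list[str]) -> list[str]:
    # Placeholder for the computer-vision step: estimating, frame by
    # frame, which challenge colour is illuminating the face. Here the
    # "frames" are already colour names, so this is the identity.
    return list(video_frames)

def verify(challenge: list[str], video_frames: list[str]) -> bool:
    # A replayed or pre-made video cannot contain the right reflections,
    # because the sequence did not exist before this session.
    return reflected_colours(video_frames) == challenge

challenge = issue_challenge()
print(verify(challenge, challenge))          # genuine live capture -> True
print(verify(challenge, issue_challenge()))  # replayed video -> almost surely False
```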

Such solutions are necessary now that deep fakes are becoming increasingly sophisticated. Jumio’s Pointner says: “If you look at the quality of the deep fake, it looks real — to the human eye it is very difficult to see that something has been manipulated.”

For the banking industry, Litan at Gartner comments: “This is definitely going to be a problem because deep fakes are getting easier to make.” It is already possible to commission a fairly professional fake, such as a bespoke video of a friend doing something in a particular situation, for a few hundred dollars or less.

Bud has been on the receiving end of such technology as a prank. At his company’s weekly meeting, which is held over video conference, “four people turned up as me. It was very spooky. The face was me, but the expressions were them,” he says. He adds that this was created with free open-source deep fake software available online. “They are primitive but very good,” he says. “Deep fakes are getting better and better very quickly.”

Deep fakes, a term that merges ‘deep learning’ with ‘fake’, use artificial intelligence to improve their quality continuously. They are built with a generative adversarial network (GAN), in which a generator and a discriminator constantly feed off each other to learn and improve.

The generator creates the deep fake; the discriminator assesses whether it looks genuine. If the discriminator identifies a fake, that information feeds back to the generator, which gets better at creating fakes. And vice versa: as the generator gets better at making fakes, the discriminator gets better at spotting them.
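In code, that feedback loop is compact. Below is a minimal, illustrative GAN training loop in PyTorch; the tiny networks and the random stand-in ‘data’ are assumptions for brevity, not a real deep fake pipeline, which trains far larger convolutional models on images.

```python
# Minimal GAN training loop: the discriminator learns to separate real
# from generated samples, and the generator learns to fool it.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator maps random noise to a synthetic sample; the discriminator
# outputs a logit scoring how "real" a sample looks.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                              nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()
real_label, fake_label = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(1000):
    real = torch.randn(batch, data_dim)   # stand-in for real training samples
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: learn to score real as real and fake as fake.
    d_loss = (loss_fn(discriminator(real), real_label) +
              loss_fn(discriminator(fake.detach()), fake_label))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to make fakes the discriminator scores as real.
    g_loss = loss_fn(discriminator(fake), real_label)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```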

This rapidly advancing technology does not bode well for the retail banking industry. Pointner at Jumio comments that the challenger banks are aware of the threats: they were born digital, so they have had to face these sorts of issues from the outset. Of the big institutions that, by comparison, have been forced to move online quickly, “This stuff is not on their radar,” says Pointner.

Regional variations

“A growing number of banks understands the risk with account opening,” says Bud. He adds that there are regional differences in banks’ responses to deep fakes. He notes that the Netherlands is the most advanced (the Financial Times reported in September 2020 that Dutch institutions including Rabobank, ING and Knab are using iProov’s solution), and that banks in South Africa and Singapore are also aware of the threat of deep fake technology.

Retail banks are reluctant to talk about their security arrangements, and those contacted by Retail Banker International declined to comment. A spokesperson for Rabobank, for example, responded that it cannot comment on “security and attack issues,” but did say: “Rabobank continuously monitors and anticipates various forms of threats; we keep a close eye on developments. And of course, we continuously monitor the security of our systems.”

Of the response by retail banks to this threat, Litan says: “I do not think it is a big issue for them yet. It is coming.” “They are aware they need to know about it — in the UK more so than the US,” she adds, putting this down to the greater use of facial recognition technology in the UK. For the moment, however, Litan says that “the bad guys do not need to do this yet”, as they can be effective with less sophisticated methods. “They can get away with cheap fakes,” she jokes.