Fighting Deepfake Fraud Takes a Layered Approach

Deepfake scams are driving a new wave of financial fraud, usually in the form of new financial account openings, account takeovers, phishing, impersonation and the creation of fake identities. 
Gayle Weiswasser
March 25, 2024

The fight against fraud has met a formidable opponent in AI, which is responsible for the increasing sophistication of deepfake technology and its many implications for wrongdoing and abuse. With deepfake technology evolving so quickly, how can identity verification programs keep up?

What is a deepfake?

A deepfake is fabricated content - usually audio or video - that depicts people saying or doing things they never did by mimicking their voices or facial expressions. Deepfakes can be very convincing, and are therefore able to trick those who interact with the content into believing that what they are seeing or hearing is real. They are created using algorithms and machine learning that detect, study and manipulate facial features to generate composites that look like a real person but are not.

Deepfakes are incredibly dangerous, because any time someone can be convincingly impersonated online, the potential for malicious abuse is great. Deepfakes have been used for the following purposes:

  • Financial fraud
  • Spread of misinformation/propaganda
  • Damage to reputation
  • Espionage
  • Generation of nonconsensual pornography, including child pornography
  • Harassment and invasion of privacy
  • Intellectual property confusion or misappropriation
  • Blackmail

The tools needed to create deepfakes are inexpensive and widely accessible, allowing bad actors to easily exploit them for an endless variety of nefarious purposes.

Deepfakes and financial fraud

Deepfake scams are driving a new wave of financial fraud, usually in the form of new financial account openings, account takeovers, phishing, impersonation (which can lead to the divulging of secret or sensitive information) and the creation of fake identities.

Deepfake fraudsters can use videos to impersonate account holders to issue wire transfers, authorize transactions or gain password or account information. They can also impersonate bank officials or corporate executives in order to issue fraudulent transfer instructions. The possibilities are nearly endless, and the potential damages for financial institutions can stretch into the millions of dollars.

According to Regula Forensics, over a third of companies have experienced deepfake voice fraud, and 29% have fallen victim to deepfake video fraud. And this is all happening with alarming speed: deepfake attacks using face swap technology to bypass remote identity verification increased by 704% in 2023, according to SC Media.

How deepfakes are made

Most people have a digital footprint sufficient for bad actors to create extremely convincing deepfake footage. Fraudsters use technology that deconstructs and manipulates subtle facial features, which are then incorporated into synthetic videos. A virtual camera feed playing the deepfake is then used to replace the webcam that would normally record the participant’s face.

Deepfakes are incredibly difficult to identify and expose, because they are extremely realistic and people are easily tricked by them. The technology keeps improving, too, so it’s very hard to keep up with and detect new developments and capabilities in deepfake production. Also, there is no federal law that specifically bans deepfakes. According to TechCrunch, the FTC would like to expand its impersonation rule to cover the impersonation of individuals, not just companies or government agencies. The agency may also prohibit goods and services used to harm consumers through impersonation via the creation of deepfakes.

Let’s also not forget how accurately AI can now create images of fake IDs that pass online identity verification. Fraudsters are now paying $15 for a fake ID and using simple desktop software to pass biometric selfie comparison tests.

How to combat deepfake fraud

Many companies are posting about how easy it is to create a fake ID and a deepfake, and then they proceed to offer a single solution to the problem - one verification method that will make the fraud problem go away. Let’s be honest: we’re not going to defeat the greatest technological achievement since the birth of the internet with one more API. It’s not that simple, and those who oversimplify the solution are not being intellectually honest. So rather than proposing a single solution, we want to start an ongoing dialogue.

Deepfake fraud is very hard to combat, especially if people rely on only one method - humans or software - to do it. Neither method is failsafe: humans and software can both be duped, and both have limited abilities to detect fakes. The most effective way to combat deepfakes is a layered approach that combines human judgment with a biometric platform.

  • It’s likely that deepfakes are getting so good that they will soon be imperceptible to humans. But something humans can do is inject randomness into an interaction in real time - think of someone on camera who is asked to do something completely unexpected. Perhaps an agent asks a user to verify themselves with additional verification (like logging into a bank account or verifying a credit card). Or the user is asked to take a picture of themselves from another device while staying on video. With the appropriate tools, a trusted agent can serve as the CAPTCHA for deepfakes (see the sketch after this list).
  • A trust platform needs to offer additional safeguards to confirm people’s identity and thereby signal potential deepfakes. Once fraudsters know they can get by with a fake ID and video, they may neglect other subtle hints that could give them away. The email they used - is it real or fake? The phone number they claim - is it registered to that person? Did they log in from a suspicious country or device? The credit card they provided - is it real and registered to them? When fraudsters can’t know which verification methods matter and will be used, they are forced to guess at how to trick your experience.
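
To make the idea of injecting randomness concrete, here is a minimal Python sketch of an unpredictable live challenge. The challenge pool, prompts and function name are hypothetical illustrations, not any vendor’s actual API; a real system would also tie each challenge to session state and a time limit.

```python
import secrets

# Hypothetical pool of live challenges an agent or platform might draw from.
CHALLENGES = [
    "Turn your head slowly to the left, then to the right.",
    "Hold up three fingers next to your face.",
    "Read this one-time phrase aloud: {phrase}",
    "Photograph yourself from a second device while staying on video.",
    "Verify the credit card on file before continuing.",
]

def pick_live_challenge() -> str:
    """Choose an unpredictable challenge so a pre-rendered deepfake
    cannot anticipate what it will be asked to do."""
    challenge = secrets.choice(CHALLENGES)
    if "{phrase}" in challenge:
        # A freshly generated phrase defeats replayed or pre-synthesized audio.
        phrase = "-".join(secrets.token_hex(2) for _ in range(3))
        challenge = challenge.format(phrase=phrase)
    return challenge

print(pick_live_challenge())
```

The point is not any particular challenge, but that the fraudster cannot know in advance which one will be issued.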

We believe the best defense against deepfakes is a layered model of data and signals. Instead of just scanning an ID and trusting it, companies have to look at the trustworthiness of the device, the location of the user, the phone number being used, and the behavior exhibited by the person across multiple interactions. Comparing the current interaction to past ones can also signal suspicious behavior: did the person use this location last time? This device? Where are they usually, and are they there now? The platform also needs to dynamically orchestrate the types of verification requested, depending on the use case, regulatory requirements and law. And finally - and most importantly - there needs to be a human in the loop so that a transaction can be stepped up to a person in real time.
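
As a rough illustration of what such a layered model could look like in code, here is a Python sketch that aggregates several of the signals above into one decision. The signal names, weights and thresholds are hypothetical; a real platform would tune them on observed fraud outcomes and vary them by use case and regulation.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """Signals gathered during one verification attempt.
    Field names are illustrative, not a specific vendor's schema."""
    id_check_passed: bool
    device_trusted: bool            # seen before; no emulator or virtual-camera flags
    location_matches_history: bool
    phone_registered_to_user: bool
    email_age_days: int

def risk_score(x: Interaction) -> float:
    """Sum weighted penalties; the weights are hypothetical illustrations."""
    score = 0.0
    if not x.id_check_passed:
        score += 0.5
    if not x.device_trusted:
        score += 0.2
    if not x.location_matches_history:
        score += 0.15
    if not x.phone_registered_to_user:
        score += 0.1
    if x.email_age_days < 30:       # a brand-new email address is a weak but real signal
        score += 0.05
    return score

def decide(x: Interaction) -> str:
    """Dynamically orchestrate: approve, add verification, or step up to a human."""
    s = risk_score(x)
    if s < 0.2:
        return "approve"
    if s < 0.5:
        return "request additional verification"   # e.g., a randomized live challenge
    return "step up to a human agent in real time"
```

The key design point is the final branch: above some risk threshold, the software stops deciding on its own and a human agent takes over in real time.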

Others agree that relying on a single method of identity verification is not sufficient. Biometric Update warns: “Only the combination of authenticity checks, support for electronic documents verification, cross-validation of personal data and ability to re-verify data on the server side can protect you from fraud and address zero-trust-to-mobile issues.”
