

Aaron Painter
Contributor

The cyber pandemic: AI deepfakes and the future of security and identity verification

Opinion
May 2, 2024 · 5 mins

Attackers have seen huge success using AI deepfakes for injection and presentation attacks – which means we'll only see more of them. Advanced technology can help prevent (not just detect) them.


Security and risk management pros have a lot keeping them up at night. The era of AI-generated deepfakes is fully upon us, and unfortunately, today's identity verification and security methods won't survive. In fact, Gartner predicts that by 2026, nearly one-third of enterprises will consider identity verification and authentication solutions unreliable due to AI-generated deepfakes. Of all the threats IT organizations face, an injection attack that leverages AI-generated deepfakes is the most dangerous. Researchers have shown that deepfake injection attacks are capable of defeating popular Know Your Customer (KYC) systems – and with a sharp rise in injection attacks last year and no reliable way to stop them, CIOs and CISOs must develop a strategy for preventing attacks that use AI-generated deepfakes.

First, you’ll need to understand exactly how bad actors use AI deepfakes to attack your systems. Then, you can develop a strategy that integrates advanced technologies to help you prevent (not just detect) them.

The digital injection attack

A digital injection attack is when someone "injects" fake data, including AI-generated documents, photos, and biometric images, into the stream of information received by an identity verification (IDV) platform. Bad actors use virtual cameras, emulators, and other tools to bypass cameras, microphones, or fingerprint sensors and fool systems into believing they've received genuine data.
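To make the mechanics concrete, here is a minimal Python sketch of the kind of server-side screening an IDV platform might apply to an incoming upload. The field names (capture_attestation, camera_source, frame_timestamps) and the heuristics are illustrative assumptions, not any particular vendor's API.

```python
# Illustrative sketch: screen an incoming verification upload for common
# signs of digital injection. All field names are hypothetical.

TRUSTED_CAMERA_SOURCES = {"hardware_front", "hardware_rear"}

def looks_injected(upload: dict) -> bool:
    """Return True if the upload shows common signs of an injected stream."""
    # Virtual cameras and emulators typically cannot supply a capture
    # attestation signed by the device's secure hardware.
    if not upload.get("capture_attestation"):
        return True

    # Reject streams that report a virtual or software-only camera source.
    if upload.get("camera_source") not in TRUSTED_CAMERA_SOURCES:
        return True

    # Frame timing that is suspiciously uniform can indicate pre-recorded or
    # synthesized video being replayed into the capture pipeline.
    timestamps = upload.get("frame_timestamps", [])
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if deltas and max(deltas) - min(deltas) < 1e-4:
        return True

    return False
```

Heuristics like these are easy to evade on their own, which is why the stronger argument is for cryptographic prevention rather than detection alone.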

Injection attacks are now far more common than presentation attacks, and when used in combination with AI-generated deepfakes, they're nearly impossible to detect. Attackers use AI-generated identity documents to fool KYC processes or inject deepfake photos and videos to impersonate real customers. A prime example is the recent attack that injected an AI deepfake video feed to defraud a Hong Kong company of $25 million. As expected with the rise of generative AI, AI deepfakes are also on the rise, with Onfido reporting a sharp increase in deepfake attacks last year. The NSA, FBI, and CISA collaboratively shared a joint warning about the threat of AI deepfakes, saying that, "The increasing availability and efficiency of synthetic media techniques available to less capable malicious cyber actors indicate these types of techniques will likely increase in frequency and sophistication."

The key to stopping injection attacks is to prevent digitally altered images or documents from being introduced in the first place. And the only way to do this is to leverage advanced security technologies such as mobile cryptography. The cryptographic signatures provided by mobile devices, operating systems, and apps are practically impossible to spoof because they’re backed by the extremely high-security practices of Apple and Android. Using mobile cryptography to determine the authenticity of the device, its operating system, and the app it’s running is a crucial and decisive measure for stopping injection attacks in their tracks.
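As a rough illustration of what that looks like in practice, the sketch below shows a server-side gate that accepts a verification session only when a device-integrity verdict checks out. The verdict structure and field names are assumptions for the example; real platform attestation services and their exact payloads differ.

```python
# Minimal sketch: gate the IDV flow on a device-integrity verdict, assuming
# the mobile app has already obtained a signed attestation token and its
# signature has been verified upstream. The payload shape is illustrative.

REQUIRED_VERDICTS = {"MEETS_DEVICE_INTEGRITY", "MEETS_APP_INTEGRITY"}

def device_is_trustworthy(verdict: dict, expected_package: str) -> bool:
    """Accept a session only from a genuine app running on a genuine device."""
    if verdict.get("package_name") != expected_package:
        return False  # token was issued to a different (possibly cloned) app
    granted = set(verdict.get("integrity_verdicts", []))
    return REQUIRED_VERDICTS.issubset(granted)

# Usage: refuse to accept any selfie or document until this check passes.
example_verdict = {
    "package_name": "com.example.idv",
    "integrity_verdicts": ["MEETS_DEVICE_INTEGRITY", "MEETS_APP_INTEGRITY"],
}
assert device_is_trustworthy(example_verdict, "com.example.idv")
```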

The presentation attack

Presentation attacks present fake data to a sensor or document scanner with the intent to impersonate an end user and fool a system into granting access. Facial biometrics presentation attacks take many forms, using deepfake ID documents, “face-swaps,” and even hyper-realistic masks to impersonate someone. IDV and KYC platforms use presentation attack detection (PAD) to verify the documents and selfies that are presented, but many PAD techniques can be beaten by injection attacks that leverage AI deepfakes. 
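One way to express that layering is to make PAD a necessary but not sufficient condition: a selfie is accepted only if it passes both liveness checks and injection screening. The threshold and score semantics below are illustrative assumptions.

```python
# Hedged sketch of a layered acceptance decision: PAD alone is not enough,
# so the decision also requires injection screening to come back clean.

PAD_LIVENESS_THRESHOLD = 0.90  # illustrative, not a standard value

def accept_selfie(pad_liveness_score: float, injection_suspected: bool) -> bool:
    """Accept a selfie only if it passes both PAD and injection screening."""
    if injection_suspected:
        # A perfect "live" score means nothing if the stream was injected.
        return False
    return pad_liveness_score >= PAD_LIVENESS_THRESHOLD
```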

Staying ahead of injection and presentation attacks

Over the past couple of years, we’ve seen thousands of companies fall victim to these attacks. The impacts are incalculable: hundreds of millions of dollars looted, ransomware shutdowns that impact millions of people, personal information stolen, and reputations damaged beyond repair. And the problem is only getting worse. 

The only strategy for stopping these attacks is to use identity verification tools that prevent them from happening in the first place and then focus on verifying the actual person behind the screen. This way, IT organizations can also shut down human social engineering vectors that circumvent or exploit IDV processes. In addition, by adding verification technologies like device intelligence, AI models, and behavioral biometrics, IT organizations can further reduce the risk of first-party fraud. Finally, invest in solutions that protect your multi-factor authentication (MFA) and password recovery processes: this is a primary attack vector and a key vulnerability that companies often overlook.
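The sketch below pulls those signals together into a single decision, treating an untrusted device or a suspected injected stream as a hard failure rather than something strong scores elsewhere can average away. The signal names and weights are assumptions made for illustration.

```python
# Illustrative composite decision combining the layered signals described
# above: device intelligence, injection screening, PAD, and behavioral
# biometrics. Weights and thresholds are assumptions, not recommendations.

from dataclasses import dataclass

@dataclass
class VerificationSignals:
    device_trusted: bool        # e.g., device attestation check passed
    injection_suspected: bool   # e.g., virtual camera or replay indicators
    pad_liveness_score: float   # 0.0-1.0 from a PAD engine
    behavior_score: float       # 0.0-1.0 from behavioral biometrics

def verification_decision(s: VerificationSignals) -> str:
    # Hard failures first: these cannot be offset by good scores elsewhere.
    if not s.device_trusted or s.injection_suspected:
        return "reject"
    # Blend the remaining signals into a simple risk score.
    risk = 0.6 * (1.0 - s.pad_liveness_score) + 0.4 * (1.0 - s.behavior_score)
    if risk < 0.2:
        return "approve"
    return "step_up"  # route to extra verification, e.g. during MFA recovery
```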

Attackers have seen huge success using AI deepfakes for injection and presentation attacks – which means we'll only see more of them. The key to stopping this threat is to develop a multi-layered approach that combines PAD, injection attack detection (IAD), and image inspection. This strategy forms the basis for companies to navigate the "cyber pandemic" we face and move toward a more secure, trusted future.

Aaron Painter
Contributor

Aaron Painter is the CEO of Nametag Inc., the world's first identity verification platform designed to safeguard accounts against impersonators and AI-generated deepfakes. Prior to his tenure at Nametag, Aaron served as CEO of London-based Cloudreach, a Blackstone portfolio company and the world's leading independent multi-cloud solutions provider. He also spent nearly 14 years at Microsoft, where he held various leadership roles, including VP and GM of Business Solutions in Beijing, China, GM of Corporate Accounts and Partner groups in Hong Kong, Chief of Staff to the President of Microsoft International based in Paris, France, and GM of the Windows Business Group while stationed in Sao Paulo, Brazil. Aaron is a Fellow at the Royal Society of Arts, Founder Fellow at OnDeck, a member of Forbes Business Council and a senior External Advisor to Bain & Company. He was named the AWS 2019 Consulting Partner of the Year for his work at Cloudreach. As a frequent media commentator, Aaron has appeared on Bloomberg and Cheddar News. He is also an active speaker, advisor and investor to companies that are pursuing business transformation.
