Interview Opportunity: Deepfakes & Biometrics with iProov

A new study from iProov, the global leader in science-based biometric identity verification solutions, shows that most people are unable to identify deepfakes—hyper-realistic AI-generated images and videos designed to impersonate others. The study tested 2,000 consumers in the UK and US, presenting them with a mix of real and deepfake content. The results were concerning: just 0.1% of participants correctly classified all of the authentic and fake images and videos shown to them.

Key Findings:

  • Detection Failure: Only 0.1% of participants accurately identified all deepfake and real content, including images and videos, even when specifically prompted to spot deepfakes. In everyday situations, where individuals are less vigilant, the susceptibility to deepfakes is likely even higher.
  • Older Generations Are More Vulnerable: The study revealed that 30% of people aged 55-64 and 39% of those 65 and older had never heard of deepfakes, highlighting a significant knowledge gap in these age groups and increased vulnerability to this emerging threat.
  • Video Identification More Challenging: Participants were 36% less likely to identify deepfake videos than deepfake images. This presents a serious risk for video-based fraud, including impersonation during video calls or scenarios where video verification is used for identity authentication.
  • Unawareness of Deepfakes: Despite rising concerns, many people remain unfamiliar with the technology. A significant 22% of consumers had never even heard of deepfakes prior to the study.
  • Overconfidence in Detection Skills: Even though most participants struggled to identify deepfakes, over 60% were confident in their ability to do so, with confidence highest among younger adults (18-34). This overconfidence could amplify the threat of misinformation.
  • Impact on Trust in Social Media: The study found that Meta (49%) and TikTok (47%) are seen as the platforms most likely to host deepfakes. This has led to a decline in trust, with 49% of respondents expressing reduced confidence in social media after learning about deepfakes. Despite this, only 1 in 5 people would report a suspected deepfake.
  • Growing Societal Concerns: Nearly three-quarters of participants (74%) are worried about the societal impact of deepfakes, with misinformation being the primary concern (68%). Older generations (55+) are particularly alarmed, with 82% expressing concern about the spread of fake information.
  • Lack of Action on Deepfakes: Fewer than a third of people (29%) take action when they encounter a suspected deepfake, most often because they do not know how to report one (48%) or are indifferent (25%).
  • Failure to Verify Information: With the increasing prevalence of misinformation, only one in four individuals actively searches for alternative sources of information when they suspect deepfake content. Just 11% critically analyze the source and context of content to determine its authenticity, leaving the vast majority vulnerable to deception.

Expert Insight:

Professor Edgar Whitley, a digital identity expert at the London School of Economics, comments: “This study underscores the growing threat of deepfakes, showing that both individuals and organizations can no longer rely on human judgment to identify these threats. A new approach to authentication is essential.”

Andrew Bud, founder and CEO of iProov, adds, “Only 0.1% of people could accurately identify deepfakes, which shows how vulnerable we are to identity fraud in this digital age. Even when people suspect a deepfake, they often take no action. Cybercriminals are exploiting this weakness to target personal and financial security. To safeguard against this, technology companies must implement stronger security measures, such as facial biometrics with liveness detection, ensuring robust authentication while empowering users to remain protected.”

The Growing Threat of Deepfakes

Deepfakes are rapidly evolving, posing a significant challenge in today’s digital world. iProov’s 2024 Threat Intelligence Report revealed a 704% increase in face-swapping deepfakes in just the last year. Deepfakes are a powerful tool for cybercriminals, enabling them to impersonate individuals and gain unauthorized access to sensitive information. They can also be used to create synthetic identities for fraudulent activities, like opening fake accounts or applying for loans.

What Needs to Be Done?

As deepfakes become more advanced, human detection is no longer sufficient. Organizations must adopt technological solutions, such as advanced biometric systems with liveness detection, to verify the authenticity of individuals in real-time. These solutions must include ongoing threat detection and continuous security improvements to stay ahead of evolving deepfake tactics. Additionally, increased collaboration between technology providers, platforms, and policymakers is critical to minimizing the risks posed by deepfakes.

Take the Deepfake Detection Test

Think you can tell the difference between real and fake? Put your skills to the test with iProov’s deepfake detection quiz! See how well you can distinguish between authentic and synthetic content.
