AI deepfakes are increasingly difficult to identify: only one in one thousand people can now reliably recognize them as fakes. The statistic comes from iProov, a British biometric authentication firm, which recently tested the public’s proficiency at detecting AI-generated content by presenting 2,000 individuals from the UK and the U.S. with a mix of genuine and fake digital content.
The results were sobering: 99.9% of those tested failed to reliably differentiate between real and AI-faked content. Many participants believed they had performed well, but their overconfidence did not match their measured skill: over 60% of those surveyed expressed certainty in their ability to detect AI-generated content, yet their actual success rates were extremely low. The quiz was released in tandem with the study’s findings, for anyone seeking the detailed truth about their own AI-detection skills.
The result comes at a time when deepfake incidents command significant media attention. In a widely reported story from January 2025, a French woman named Anne lost €830,000 to fraudsters who used deepfakes to impersonate Brad Pitt and fabricated a televised announcement about him. While Anne faced criticism for her gullibility, she is far from alone in being deceived by deepfake criminals. According to ID verification firm Onfido, a deepfake incident occurred every five minutes last year, contributing to approximately 43% of all fraud attempts.
iProov’s CEO attributes the rise in deepfake scams to four significant trends: the rapid advancement of artificial intelligence; the resulting ability to create ever more realistic deepfakes; the proliferation of Crime-as-a-Service offerings, which give criminals affordable access to sophisticated attack technologies; and the widespread weaknesses inherent in traditional identity verification methods.
Deepfake creation tools have also become far more accessible, allowing attackers to move beyond crude “cheapfakes” to advanced technologies that rapidly produce convincing fake media. “Deepfaking has become commoditized…,” iProov’s CEO said, warning that the current lack of sufficient defenses against AI-generated threats poses a growing risk to individuals and organizations alike. iProov recommends adopting biometric systems integrated with AI-driven defenses to combat such attacks, though how that would have helped in Anne’s case is unclear.
The ainewsarticles.com article you just read is a brief synopsis; the original article is available via the “Read the Full Article” link.