The advancement of deepfakes weakens biometric authentication because sophisticated AI-generated media can convincingly replicate physical traits such as facial features, voices, and even behavioural patterns. Deepfakes can create realistic images and videos of individuals, potentially allowing attackers to spoof systems that rely on facial recognition, voice recognition, or other biometric data. This undermines the reliability of biometric authentication, as the systems may be tricked into granting access based on these forged credentials.
Industry leaders and cybersecurity experts have highlighted these concerns. Gaelan Woolham from Capco points out that deepfake technology can mimic voices and faces with high fidelity, making it challenging for existing biometric systems to distinguish between real and fake identities. This technology can bypass voice biometric systems used by financial institutions, as demonstrated by a University of Waterloo study which showed that deepfakes could fool such systems in a few attempts (Capco-Homepage).
Moreover, Sensity, a security company, tested facial recognition systems and demonstrated that deepfake technology could easily bypass liveness detection—a critical component of facial recognition security (1Kosmos). Liveness detection typically relies on recognizing natural human behaviours like blinking and subtle facial movements, but deepfakes can replicate these actions convincingly.
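Liveness checks of this kind are often built on simple geometric cues. The sketch below, offered purely as an illustration and not as any vendor's implementation, computes the widely used eye aspect ratio (EAR) from six eye landmarks and flags a blink when the ratio dips for a few consecutive frames; the landmark input, threshold, and frame count are assumptions chosen for clarity.

    # Illustrative sketch only: a naive blink check based on the eye aspect ratio (EAR),
    # assuming six (x, y) eye landmarks per eye are already supplied by a face landmark
    # detector. The threshold and frame count below are hypothetical, not tuned values.

    import math

    def eye_aspect_ratio(eye):
        """eye: list of six (x, y) landmark tuples ordered around the eye contour."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        # Vertical gaps between upper- and lower-lid landmarks.
        vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
        # Horizontal eye width.
        horizontal = dist(eye[0], eye[3])
        return vertical / (2.0 * horizontal)

    def blink_detected(ear_series, closed_threshold=0.21, min_closed_frames=2):
        """Return True if the EAR time series stays below the 'closed' threshold
        for at least min_closed_frames consecutive frames (a crude blink)."""
        run = 0
        for ear in ear_series:
            run = run + 1 if ear < closed_threshold else 0
            if run >= min_closed_frames:
                return True
        return False

The point of the Sensity result is that cues this simple are exactly what a good deepfake can reproduce on demand, which is why the growing accessibility of the tools described next matters.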
A significant recent development is that deepfake creation no longer requires expert skills: sophisticated yet user-friendly applications such as FakeApp, ReFace, and DeepFaceLab have democratized access to these tools, allowing non-experts to produce convincing fake videos and images, including those used for biometric cloning (BioID).
Deepfakes can pose serious threats to biometric authentication systems, especially those that rely on facial recognition. These attacks can be categorized into presentation attacks, where fake images or videos are presented to a camera or sensor, and injection attacks, where data streams or communication channels are manipulated. Examples of deepfake attacks include face swapping, lip-syncing, and gesture or expression transfer, all of which can deceive biometric systems (1Kosmos).
The ease with which deepfakes can be created and the increasing sophistication of these technologies highlight significant vulnerabilities in biometric authentication. As deepfakes become more realistic, the challenge of detecting and preventing such attacks grows, making traditional biometric systems potentially less reliable without additional security measures like liveness detection and multi-factor authentication.
These biometric vulnerabilities have a direct impact on other forms of multi-factor authentication (MFA), such as mobile authenticator apps and "password-less" methods. The most important, and often overlooked, enabler of this threat is that most MFA enrolments are self-provisioned, for example via an emailed link that the user follows to download the app and complete the setup.
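One way to harden that enrolment path is to refuse to issue a new enrolment link unless the request is confirmed on a channel registered before the request was made, so that a convincing phone call or email alone is never sufficient. The sketch below illustrates the idea; every class, function, and attribute name in it is hypothetical and does not refer to any real product API.

    # Hypothetical guardrail for help-desk or self-service MFA resets: a new enrolment
    # link is only issued after the user approves a challenge sent to an independently
    # pre-registered device, not on the strength of the incoming call or email itself.
    # All names (ResetRequest, directory, notifier, etc.) are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class ResetRequest:
        user_id: str
        requested_via: str   # e.g. "helpdesk_call" or "self_service_portal"

    def approve_mfa_reenrolment(req: ResetRequest, directory, notifier) -> bool:
        """Issue a new enrolment link only if the user confirms the request on a
        device or channel that was registered before this request was made."""
        user = directory.lookup(req.user_id)
        if user is None or not user.pre_registered_devices:
            # No trusted out-of-band channel: escalate to manual identity proofing.
            return False
        # Challenge the previously registered device, not the caller.
        approved = notifier.send_push_challenge(
            user.pre_registered_devices[0],
            reason=f"MFA reset requested via {req.requested_via}")
        if not approved:
            return False
        notifier.issue_enrolment_link(user.verified_email)
        return True

With a gate like this in place, the scenario described next, a cloned executive voice phoning the help desk, fails at the out-of-band confirmation step rather than relying on the operator's judgement.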
Most CEOs are encouraged to extend their public profile by giving presentations or posting announcements on public platforms. Eleven Labs (Eleven) can produce a passable cloned voice from around 60 seconds of training audio, can refine pitch and rate, add custom phonemes, and generate speech from arbitrary text input. Consider a call to the help desk in the cloned CEO voice, driven by the attacker's typed script, requesting that a new phone be set up or asking for an email link to re-establish MFA.
CIOs are mandated to use MFA for privileged access, but selecting which MFA methods are appropriate and how best to deploy them are crucial decisions that require careful evaluation.