Digital identity is under siege. From deepfaked video calls of CEOs authorizing wire transfers to voice-spoofed phishing attempts, synthetic threats are outpacing many traditional Identity and Access Management (IAM) systems.
Sagi Rodin, CEO of Frontegg, points to the growing accessibility of generative AI tools as a critical factor. He says, “GenAI has made it easy to create convincing synthetic identities and impersonations, including fake biometric data and AI-generated voices.” Citing an infamous 2024 case in Hong Kong, in which a finance worker was tricked into wiring $25 million after joining a video call filled with deepfaked colleagues and a fake CFO, he cautions that such attacks highlight how traditional identity verification is becoming increasingly vulnerable.
Brian Pontarelli, FusionAuth CTO, unpacks how AI exacerbates modern identity attacks, saying, “GenAI enables multiple routes for attackers that they didn’t previously have access to, including voice phishing, biometric spoofing, and automated social engineering. Fraud groups are using cutting-edge technology to scale their operations, creating fake identities and executing fraud campaigns either by appearing like real people or pretending to be them very convincingly.”
This prompts the question: how can CISOs build IAM strategies fit for a world where synthetic identities and deepfakes are already a reality?
Human vs. Machine Authentication Dilemmas for IAM
At the heart of IAM lies a critical question: Can we verify “who” is accessing our systems? With deepfakes, this becomes far more complex. When an attacker can replicate a user’s voice, face, or behavioral pattern, how do systems tell the difference between human and machine-generated identities?
Conventional IAM infrastructure is no longer enough as GenAI equips attackers with increasingly advanced capabilities. “What it comes down to is the eerie reality that traditional IAM systems rely on 'something you know, have, or are’ – but GenAI can now synthesize all three convincingly, or use its other tools to find or ‘fake’ them,” says Brian. He points out that behavioral biometrics are still treated as optional, saying, “Generally, credentials are the source of truth as well, and behavioral dynamics are a ‘nice to have’, which is a problem against these kinds of emerging technologies.”
Sagi reflects on the limitations of traditional IAM systems: “Legacy IAM systems were built to authenticate static inputs like passwords, one-time codes, or facial scans. They weren’t designed to detect dynamic, AI-driven threats such as real-time impersonation or behavioral mimicry.”
CISOs are now faced with this new kind of authentication dilemma – building systems that can sense the “realness” of an identity in a world where reality can be artificially generated. “Without the ability to assess whether an identity is being faked or manipulated, these systems are easily bypassed by modern attackers,” adds Sagi.
What Deepfake-Resilient IAM Looks Like
Sagi signals a fundamental shift. “Modern IAM strategies are shifting toward continuous authentication and contextual intelligence. Instead of relying on a single login event, they evaluate ongoing behavioral signals, like typing patterns or device usage.”
Brian outlines key features of deepfake-resilient IAM, noting, “Behavioral patterns, adaptive MFA, and anomaly detection can help, along with continuous authentication and a zero-trust architecture from the get-go.”
“There’s also growing adoption of identity threat detection and response (ITDR), which brings a more proactive and security-oriented approach to managing identity,” Sagi further adds.
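To make this shift concrete, here is a minimal Python sketch of the continuous, risk-based evaluation both executives describe. Everything in it is an assumption for illustration: the signal names, weights, and thresholds are invented, not any vendor's model, and a production system would replace the hand-tuned weighted sum with calibrated detection models wired into ITDR tooling.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Contextual signals gathered during an active session (all hypothetical)."""
    typing_deviation: float      # 0.0 = matches the user's baseline cadence, 1.0 = no match
    new_device: bool             # device fingerprint never seen for this account
    impossible_travel: bool      # geolocation inconsistent with recent activity
    voice_liveness_failed: bool  # anti-spoofing check on a voice factor failed

def risk_score(s: SessionSignals) -> float:
    """Naive weighted sum of signals; weights here are illustrative only."""
    score = 0.4 * s.typing_deviation
    score += 0.25 if s.new_device else 0.0
    score += 0.25 if s.impossible_travel else 0.0
    score += 0.35 if s.voice_liveness_failed else 0.0
    return min(score, 1.0)

def evaluate_session(s: SessionSignals) -> str:
    """Map continuous risk onto adaptive responses, not a one-time login gate."""
    score = risk_score(s)
    if score >= 0.7:
        return "terminate"    # kill the session and alert ITDR tooling
    if score >= 0.4:
        return "step_up_mfa"  # re-authenticate with a phishing-resistant factor
    return "allow"            # keep evaluating on the next batch of signals

# Example: keystroke cadence has drifted on an unfamiliar device
print(evaluate_session(SessionSignals(0.6, True, False, False)))  # -> "step_up_mfa"
```

The design point is that authentication stops being a single event: the score is recomputed as signals arrive, so a spoofed voice factor or drifting keystroke cadence can trigger a step-up or termination long after the initial login.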
How CISOs Can Securely Navigate This Identity Shift
To start with, identity must be treated as a first-class security concern. On the mindset shift CISOs must embrace, Sagi says, “CISOs need to recognize that identity is now one of the most actively targeted components of their security infrastructure.” Another critical step is internal alignment and education, as Brian notes: “Train your employees on what to look out for in their day-to-day authentication, in terms of looking out for phishing and anything that could be a deep fake impersonation.”
Sagi continues, “With deepfakes and social engineering attacks becoming more common — such as scams involving fake video calls — it’s no longer enough to rely on point-in-time authentication. CISOs should pivot to adaptive IAM strategies that monitor trust continuously and respond to anomalies in real time.”
Brian goes on to advise on the usage of unauthorized AI tools: “Be aware of the shadow AI that’s in use in your organization, and ensure it has appropriate safeguards if it’s touching personal customer information. And put appropriate guardrails around how you do identity in your organization, whether it's SSO or authentication.”
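As a rough illustration of that kind of guardrail, the sketch below scrubs obvious personal data from a prompt before it leaves the organization for an external AI tool. The regex patterns and placeholder labels are illustrative assumptions; a real control would sit at a sanctioned AI gateway and pair dedicated DLP or PII-detection services with policies like these, not regexes alone.

```python
import re

# Crude patterns for common PII; illustrative only, not exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before text leaves the org."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

# Run every outbound prompt through the filter before it reaches an external model.
prompt = "Customer jane.doe@example.com disputed a charge on card 4111 1111 1111 1111"
print(redact_pii(prompt))
# -> "Customer [REDACTED_EMAIL] disputed a charge on card [REDACTED_CARD]"
```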
Finally, CISOs should actively engage in cross-industry collaboration to stay ahead and advocate for ethical, regulated use of generative AI. Building resilience in this identity shift is about fostering agility, awareness, and a security culture ready for the synthetic age.
Takeaway: A Mix of Caution & Advice
“We’re likely to see a rapid increase in deepfake-driven attacks across everyday business functions, and not just targeting executives. Areas like customer support, finance, and HR are particularly exposed,” warns Sagi. “CISOs should treat the identity layer as a live, high-risk environment and prepare their systems to detect and respond to synthetic threats before attackers exploit the gap.”
Brian says one big weak spot will be employees building authentication and authorization with AI. “Understand where the data is going and know the benefits and dangers of using AI and vibe coding for implementing authentication and authorization. Getting to standards-based authentication ASAP will help as you implement AI in your engineering processes, because AI has a wealth of guardrails to use.”
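To ground the standards-based advice, here is a minimal sketch of the first leg of an OpenID Connect authorization-code flow in Python. The issuer URL, client ID, and redirect URI are hypothetical placeholders, but the discovery document at /.well-known/openid-configuration is part of the OIDC specification itself, which is precisely the kind of well-documented guardrail that keeps both engineers and AI coding assistants from hand-rolling login logic.

```python
import json
import secrets
from urllib.parse import urlencode
from urllib.request import urlopen

ISSUER = "https://idp.example.com"  # placeholder issuer; any OIDC-compliant IdP works

# OpenID Connect Discovery: every compliant provider publishes its endpoints here,
# so the application never hard-codes (or hand-rolls) its auth URLs.
with urlopen(f"{ISSUER}/.well-known/openid-configuration") as resp:
    config = json.load(resp)

# Build a standard authorization-code request instead of a bespoke login flow.
state = secrets.token_urlsafe(16)  # CSRF protection required by the spec
params = urlencode({
    "client_id": "my-app",  # assumed registered client, for illustration
    "response_type": "code",
    "scope": "openid profile email",
    "redirect_uri": "https://app.example.com/callback",
    "state": state,
})
login_url = f"{config['authorization_endpoint']}?{params}"
print(login_url)  # redirect the browser here; the IdP handles credentials and MFA
```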
For CISOs, the broader point is urgent: AI-driven development itself can introduce identity risks. It’s not just about detection; it’s about designing systems that are resilient by default, with guardrails that anticipate synthetic threats from the ground up.