There’s an open question among generative AI proponents right now about just how the technology will fit into the workforce of the future. Will it be used primarily to help teams of people achieve their outcomes more efficiently and effectively, or will it serve as an independent actor in its own right, needing an HR function to measure its success just like today’s employees? It’s a daunting question with many technical and societal implications, but today I want to focus on one technical implication: what does identity look like in this future?
Traditionally, identity and access management (IAM) frameworks have been designed with human users in mind, relying on login pages, passwords, multi-factor authentication (if we’re lucky), and static role-based access controls (RBAC). However, the dynamic, non-deterministic nature of AI agents breaks these traditional molds. The challenges extend further into the realm of compliance. When AI agents can operate 24/7, making decisions in milliseconds and interacting with dozens of systems simultaneously, logging and tracing their actions for regulatory frameworks like SOX, HIPAA, and GDPR can feel nearly impossible. And staying in the vein of compliance, how do we assert with confidence when an agent was acting in a support role on behalf of a person, and when it was acting on its own as part of its programming? To address these questions, new approaches must be designed specifically for the world of AI agents, recognizing their complex identity relationships and the need for distinct identities to ensure transparency and accountability.
To start, I want to call attention to Christian Posta, who put together a great blog, Agent Identity with OAuth 2.0: Impersonation or Delegation?, diving into the difference between an agent acting on behalf of someone (support) and an agent acting on its own (independent). He concludes that the different personas or behavior patterns assigned to AI agents will each require a different set of accountability, permissions, and auditing controls. To address this challenge, there are several innovative approaches in the works today:
- Persona Shadowing: Instead of having agents impersonate users, agents are given their own identity that “shadows” a specific user. This separation ensures that every action is explicitly tied to an AI agent identity, which is constrained by and linked to a delegating human user for accountability.
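To make the idea concrete, here is a minimal sketch of persona shadowing. The identifiers (`agent:invoice-bot-01`, `user:alice`) and the data model are hypothetical illustrations, not a standard; the point is that the agent carries its own identity, permanently linked to the delegating human, and every audit entry records both.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """A first-class identity for the agent -- never the user's own credential."""
    agent_id: str          # hypothetical example: "agent:invoice-bot-01"
    delegating_user: str   # the human this agent shadows, e.g. "user:alice"

@dataclass
class AuditEvent:
    actor: AgentIdentity
    action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_action(log: list, agent: AgentIdentity, action: str) -> None:
    # Every entry carries BOTH identities, so an audit can always answer:
    # "which agent did this, and on whose behalf?"
    log.append(AuditEvent(actor=agent, action=action))

audit_log: list = []
bot = AgentIdentity(agent_id="agent:invoice-bot-01", delegating_user="user:alice")
record_action(audit_log, bot, "approve_invoice:INV-1042")
```

Because `AgentIdentity` is frozen, the link between agent and delegating user cannot be mutated after issuance, which is the accountability property shadowing is meant to provide.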
- Delegation Chains: As AI agents often call other services or even spawn sub-agents to complete complex tasks, maintaining end-to-end trust and context across these delegation chains is crucial. Technologies like JSON Web Tokens (JWTs) passed between services and emerging standards such as User-Managed Access (UMA) and OpenID Connect for Agents (OIDC-A) are vital for preserving the original user’s authorization in a verifiable way.
Credit to WorkOS and their blog Identity for AI: Who Are Your Agents and What Can They Do? for talking about this in more depth.
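A delegation chain like this can be expressed today with the nested `act` (actor) claim from OAuth 2.0 Token Exchange (RFC 8693): `sub` stays the original user, the top-level `act` names the current actor, and nested `act` claims record the prior actors that delegated along the way. The sketch below builds such a token with only the standard library; it is for illustration (the agent names are hypothetical, and a vetted JWT library should sign real tokens).

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    # Minimal HS256 JWT signer for demonstration only.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

# RFC 8693-style delegation chain: Alice authorized a planner agent,
# which spawned a retriever sub-agent that is acting right now.
claims = {
    "sub": "user:alice",                      # original authorizing user
    "act": {
        "sub": "agent:retriever",             # current actor (sub-agent)
        "act": {"sub": "agent:planner"},      # prior actor that delegated to it
    },
    "scope": "read:documents",
}
token = sign_jwt(claims, secret=b"demo-secret")
```

Any downstream service that verifies this token can see the full chain: who originally authorized the work, and every agent that handled it since, in a single verifiable artifact.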
- Zero Trust and continuous authN and authZ: Traditional static entitlements and roles are insufficient for AI agents that continuously find new ways to complete tasks. Modern IAM strategies will need to emphasize context-aware access controls that dynamically adjust permissions based on the context and origin of the actions an AI agent is taking. This includes implementing just-in-time (JIT) authorization, ensuring AI agents have only the necessary permissions, appropriately scoped by autonomy risk, for the duration of their task.
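A minimal sketch of the JIT idea, under assumed names (`agent:report-bot`, `read:sales-db` are illustrative): permissions are minted per task with a narrow scope and a short TTL, rather than assigned statically to a role.

```python
import time
from dataclasses import dataclass

@dataclass
class JITGrant:
    agent_id: str
    scope: str          # the narrowest permission needed for this one task
    expires_at: float   # epoch seconds; the grant vanishes when the window ends

def issue_grant(agent_id: str, scope: str, ttl_seconds: int) -> JITGrant:
    # Minted at task start, scoped to a single permission, time-boxed.
    return JITGrant(agent_id, scope, time.time() + ttl_seconds)

def is_authorized(grant: JITGrant, agent_id: str, scope: str) -> bool:
    return (
        grant.agent_id == agent_id
        and grant.scope == scope
        and time.time() < grant.expires_at   # expired grants deny by default
    )

grant = issue_grant("agent:report-bot", "read:sales-db", ttl_seconds=300)
```

In a real deployment the TTL and scope would be derived from the autonomy risk of the task, and the authorization check would also consult runtime context (origin, behavior signals), not just the grant itself.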
- Strengthened Authentication and Verification for AI Entities: Since AI agents cannot complete traditional multi-factor authentication (MFA), alternative verification methods are critical. These include ephemeral credentials that expire after a short period, machine identities with continuous posture analysis, risk-based authentication that dynamically evaluates AI interactions, and cryptographic proofs of identity.
Ping Identity talks about these capabilities a bit in their blog AI Agents & IAM: A Digital Trust Dilemma.
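One way a cryptographic proof can stand in for an MFA prompt is a challenge-response over a key only the agent holds. This is a simplified sketch (the registry and agent name are hypothetical; production systems would use asymmetric keys and attested key storage rather than a shared-secret dictionary):

```python
import hashlib
import hmac
import secrets

# Hypothetical registry mapping each machine identity to a key it holds.
agent_keys = {"agent:etl-worker": secrets.token_bytes(32)}

def issue_challenge() -> bytes:
    # The verifier sends a fresh nonce, so replaying an old response fails.
    return secrets.token_bytes(16)

def prove(agent_id: str, challenge: bytes) -> bytes:
    # The agent signs the challenge with its key -- a cryptographic proof
    # of identity, replacing the MFA prompt a human would answer.
    return hmac.new(agent_keys[agent_id], challenge, hashlib.sha256).digest()

def verify(agent_id: str, challenge: bytes, proof: bytes) -> bool:
    expected = hmac.new(agent_keys[agent_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)   # constant-time comparison

challenge = issue_challenge()
proof = prove("agent:etl-worker", challenge)
```

The fresh nonce per authentication is what makes the credential effectively ephemeral: a captured proof is useless against the next challenge.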

- Unified Digital Identity Frameworks: The ultimate goal is a unified digital identity system that seamlessly integrates various digital identifiers for both humans and machines. In Digital Identity of your customers: opportunities and responsibilities, Deloitte talks about the importance of governance and unification for driving the future business results, especially in international settings. This is on top of the technical benefits called out by others linked above.
Moving forward, it’s clear we need to consider the long-term implications for traditional approaches to IAM. Ensuring dynamic AI agents have the right access at the right time and for the right reasons, in real time, is not a solved problem (no matter what someone tries to sell you). Understanding not just what an AI agent did, but whether it did so on its own, on behalf of a person, or possibly on behalf of another AI agent altogether, will also have major implications for auditing and traceability. I don’t know that the “right” answers are ready yet, but the industry is clearly working on it. As always, if you want to discuss this more or connect with us about helping you achieve this transformation, please reach out to questions@generativesecurity.ai.
Bonus time
As a bonus, if you want to dive deep into the technical implications and how to design these systems today, Ram Ramani and Jeff Lombardo (former colleagues of mine) discussed the implications of trust boundaries and token exchange in Identiverse 2025 – Identity Management for AI Agents – Nobody Knows its Your Bot on the Internet (slides here). Their focus on the shortcomings of conventional authentication flows, dynamic behavioral access, and the need for more work on real-time authorization and guardrails is extremely relevant as we shift from AI acting in a support role and inheriting permissions to AI acting independently and exercising permissions on the fly.

About the author
Michael Wasielewski is the founder and lead of Generative Security. With 20+ years of experience in networking, security, cloud, and enterprise architecture, Michael brings a unique perspective to new technologies. Having worked on generative AI security for the past 2 years, Michael connects the dots between the organizational, the technical, and the business impacts of generative AI security. Michael looks forward to spending more time golfing, swimming in the ocean, and skydiving… someday.