
There’s an open question among generative AI proponents right now about just how the technology will fit into the workforce of the future. Will it be used primarily to help teams of people achieve their outcomes more efficiently and effectively, or will it serve as an independent actor in its own right, needing something like an HR function to measure its success just as we do for employees today? It’s a daunting question, with many technical and societal implications, but today I want to focus on one technical implication: what does identity look like in this future?
Traditionally, identity and access management (IAM) frameworks have been designed with human users in mind, relying on login pages, passwords, multi-factor authentication (if we’re lucky), and static role-based access control (RBAC). However, the dynamic, non-deterministic nature of AI agents breaks these traditional molds. The challenges extend further into the realm of compliance. When AI agents can operate 24/7, making decisions in milliseconds and interacting with dozens of systems simultaneously, logging and tracing their actions to satisfy regulatory frameworks like SOX, HIPAA, and GDPR can feel near impossible. And in the same vein, how do we assert with confidence when an agent was acting in a support role on behalf of a person, and when it acted on its own as part of its programming? To address these questions, new approaches must be designed specifically for the world of AI agents, recognizing their complex identity relationships and the need for distinct identities to ensure transparency and accountability.
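To make that audit requirement concrete, here’s a minimal sketch in Python of what an agent-aware audit record could capture. Everything here is illustrative (the AgentAuditRecord shape and the agent and user identifiers are hypothetical, not any product’s real schema); the point is that the record names a distinct agent identity and states explicitly whether a human principal was behind the action.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class AgentAuditRecord:
    """One audit entry capturing *who* acted and *on whose behalf*."""
    agent_id: str                        # distinct identity of the AI agent
    action: str                          # e.g. "expense.approve"
    resource: str                        # the system or object acted upon
    on_behalf_of: Optional[str] = None   # human principal if delegated; None if autonomous
    delegation_chain: list = field(default_factory=list)  # agents calling agents, if any
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_log_line(self) -> str:
        # Record the mode explicitly so auditors never have to infer it later.
        mode = "delegated" if self.on_behalf_of else "autonomous"
        return json.dumps({**self.__dict__, "mode": mode})

# Delegated: the agent acted for a person, and the log says so explicitly.
print(AgentAuditRecord("agent:expense-bot", "expense.approve", "report/8812",
                       on_behalf_of="user:alice").to_log_line())

# Autonomous: no human principal; the record still names a distinct agent identity.
print(AgentAuditRecord("agent:expense-bot", "expense.flag", "report/8813").to_log_line())
```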
To start, I want to call attention to Christian Posta, who put together a great blog, Agent Identity with OAuth 2.0: Impersonation or Delegation?, diving into the difference between an agent acting on behalf of someone (support) and acting on its own (independent). He concludes that the different personas or behavior patterns assigned to AI agents will each require a different set of accountability, permissions, and auditing. To address this challenge, several innovative approaches are in the works today (a short sketch of the delegation pattern follows the references below):
Credit to WorkOS and their blog Identity for AI: Who Are Your Agents and What Can They Do? for covering this in more depth.
Ping Identity also talks about these capabilities in their blog AI Agents & IAM: A Digital Trust Dilemma.
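To make Posta’s distinction concrete: OAuth 2.0 Token Exchange (RFC 8693) models delegation with an “act” (actor) claim, while impersonation leaves no trace of the agent at all. The sketch below uses a hypothetical issuer and identifiers, but the claim shapes follow the RFC; notice that only the delegation shape lets an auditor answer “who actually did this?”

```python
# Impersonation: the exchanged token looks exactly like the user's own token.
# Downstream systems cannot tell the agent was ever involved.
impersonation_claims = {
    "iss": "https://idp.example.com",   # hypothetical issuer
    "sub": "user:alice",                # the human the agent is standing in for
    "scope": "crm.read crm.write",
}

# Delegation: RFC 8693's "act" (actor) claim keeps both parties visible.
# "sub" is still the human principal, but "act" names the agent doing the work.
delegation_claims = {
    "iss": "https://idp.example.com",
    "sub": "user:alice",                    # the authority comes from Alice...
    "act": {"sub": "agent:crm-assistant"},  # ...but the agent performed the call
    "scope": "crm.read",                    # often narrowed for the agent
}

def who_acted(claims: dict) -> str:
    """An auditor can only identify the real actor in the delegation case."""
    actor = claims.get("act", {}).get("sub")
    return actor or claims["sub"]  # impersonation collapses both into "sub"

print(who_acted(impersonation_claims))  # user:alice (the agent is invisible)
print(who_acted(delegation_claims))     # agent:crm-assistant
```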

Moving forward, it’s clear we need to consider the long-term implications for traditional ways of accomplishing IAM. Ensuring dynamic AI agents have the right access at the right time and for the right reasons, in real time, is not a solved problem (no matter what someone tries to sell you). Understanding not just what an AI agent did, but whether it did so on its own, on behalf of a person, or possibly on behalf of another AI agent altogether, will also have major implications for auditing and traceability. I don’t know that the “right” answers are ready yet, but the industry is clearly working on it. As always, if you want to discuss this more or connect with us about helping you achieve this transformation, please reach out to questions@generativesecurity.ai.
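As a rough illustration of what “right access, right time, right reasons, in real time” could look like, here’s a sketch of a per-request, context-aware authorization check. The policy store, agent names, and authorize helper are all hypothetical; a real system would use a proper policy engine, but the shape of the decision (evaluated against live context, logged with that context) is the point.

```python
from datetime import datetime, timezone

# Hypothetical policy store: per-agent rules bound to live request context,
# not a static role assignment. All names here are illustrative.
POLICIES = {
    "agent:expense-bot": {
        "expense.approve": lambda ctx: (
            ctx.get("on_behalf_of") is not None   # only when delegated by a human
            and ctx.get("amount", 0) <= 500       # and under a spend ceiling
        ),
        "expense.flag": lambda ctx: True,         # autonomous flagging is allowed
    }
}

def authorize(agent_id: str, action: str, ctx: dict) -> bool:
    """Evaluate access at request time, against the current context."""
    rule = POLICIES.get(agent_id, {}).get(action)
    decision = bool(rule and rule(ctx))
    # Log every decision with its context so auditors can replay it later.
    print(f"{datetime.now(timezone.utc).isoformat()} {agent_id} {action} -> {decision}")
    return decision

authorize("agent:expense-bot", "expense.approve",
          {"on_behalf_of": "user:alice", "amount": 220})  # True: delegated, in bounds
authorize("agent:expense-bot", "expense.approve",
          {"on_behalf_of": None, "amount": 220})          # False: acting alone
```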
Bonus time
As a bonus, if you want to dive deep into the technical implications and how to design these systems today, Ram Ramani and Jeff Lombardo (former colleagues of mine) discussed the implications of trust boundaries and token exchange in Identiverse 2025 - Identity Management for AI Agents - Nobody Knows its Your Bot on the Internet (slides here). Their focus on the struggles with conventional authentication flows, dynamic behavioral access, and the need for more work on real-time authorization and guardrails is extremely relevant as we shift from AI acting in a support role and inheriting permissions to AI acting independently and using permissions on the fly.
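For a flavor of the token-exchange pattern their talk covers, here’s a minimal RFC 8693 request sketch in Python. The endpoint and token values are placeholders (the example will not get a real response), but the grant type and token-type URNs come from the RFC; the takeaway is that the agent crosses the trust boundary with an explicitly exchanged, scoped-down token rather than silently reusing the user’s credentials.

```python
import requests

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # assumption, not a real IdP
user_access_token = "eyJ...user"    # placeholder: token the human signed in with
agent_access_token = "eyJ...agent"  # placeholder: token minted for the agent itself

# RFC 8693 token exchange: trade the user's token (plus the agent's own token)
# for a new, narrower token that records the delegation.
response = requests.post(
    TOKEN_ENDPOINT,
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": user_access_token,   # the human's proof of authority
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "actor_token": agent_access_token,    # the agent's own identity
        "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": "crm.read",                  # narrowed to this one task
    },
)
delegated_token = response.json()["access_token"]  # carries an "act" claim downstream
```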

About the author
Michael Wasielewski is the founder and lead of Generative Security. With 20+ years of experience in networking, security, cloud, and enterprise architecture, Michael brings a unique perspective to new technologies. Having worked on generative AI security for the past two years, Michael connects the dots between the organizational, technical, and business impacts of generative AI security. Michael looks forward to spending more time golfing, swimming in the ocean, and skydiving... someday.