The biggest challenge in deploying AI agents securely isn’t hallucination. It’s identity. Or more specifically, the industry’s impulse to reinvent it.
As enterprises began experimenting with agents in 2023, a new term emerged to describe the challenge: non-human identity (NHI). The idea was simple. If agents can read data, send messages, and take actions, maybe they need identities of their own. But that simple idea has created a tangled set of questions:
Is the identity the user or the agent?
Should the agent have its own seat in your IAM system?
If an agent deletes a file or posts to Slack, who’s accountable?
What began as an effort to secure agents has turned into a category of infrastructure that solves the wrong problem. For this post, we're focusing on scenarios where an agent acts on behalf of a human or where there's a human in the loop.
Why agent identity isn’t the real problem
We already have patterns for applications acting on behalf of users. OAuth, SSO, and delegated authorization are mature and widely adopted. Apps request access, users grant permission, and identity remains clear.
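To make that concrete, here's a minimal sketch of the delegated-authorization exchange apps have used for years: the app trades a user-granted authorization code for a scoped access token. The endpoint and client values are placeholders, not a specific provider's.

```python
# A minimal sketch of the OAuth 2.0 authorization-code exchange.
# TOKEN_URL and the client values are placeholders, not any specific
# provider's endpoints.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder

def exchange_code_for_token(code: str, client_id: str,
                            client_secret: str, redirect_uri: str) -> dict:
    """Trade a user-granted authorization code for a scoped access token.

    The app only receives a token limited to the scopes the user approved.
    It never sees the user's password and never gets an identity of its own.
    """
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()  # access_token, scope, expires_in, ...
```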
The issue isn’t that agents need a new identity model. It’s that we’ve misunderstood what agents are.
How agents function more like applications than users
Agents don’t own data. They don’t authenticate directly. They don’t carry an access policy of their own.
They take input, make decisions, and request actions. In this sense, agents are not so different from the apps we’ve been deploying for decades. The only real change is in the reasoning layer. Apps are deterministic; agents are not.
The idea of assigning each agent its own identity might sound secure, but it creates more problems than it solves. Now you have to manage agent-specific credentials, enforce identity policies for non-humans, and rethink your IAM hierarchy. The complexity adds up fast.
What happens when you assign agents their own identity
Treating agents as identities often leads to two outcomes:
Privilege bloat: You give the agent broad access so it can function, but now it can read or modify data users shouldn't see.
IAM sprawl: You create identities for every agent or agent instance, leading to dozens (or hundreds) of accounts that are hard to manage, audit, or deprecate.
In both cases, the outcome is the same: agents gain more power than they need, and security teams lose visibility and control.
Worse, because agents are LLM-driven, they’re inherently less predictable than traditional apps. Giving them excessive permissions only increases the risk.
A quick example of excess access
Imagine an internal support agent that connects to Jira to help employees check on ticket status or create new issues. You give it a service account to access Jira. Everything works fine — until someone prompts it to "find and close all open tickets." It complies. Hundreds of tickets closed in seconds, with no user attribution and no approval step.
The issue wasn’t the prompt. It was that the agent had too much access and no boundaries. An identity-based model gave it a seat at the table — when all it needed was a controlled execution path.
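A controlled execution path might look something like the rough sketch below: the tool accepts exactly one explicit ticket key and acts with the requesting user's delegated token, so bulk destruction isn't expressible and every action is attributed. `close_jira_issue` here is a hypothetical stub, not a real Jira client call.

```python
# Sketch of a bounded tool: one explicit ticket, the user's own delegated
# token, no bulk operations. close_jira_issue() is a hypothetical stub.

def close_jira_issue(key: str, token: str) -> None:
    """Stand-in for a real Jira API call (e.g. a POST to a transitions endpoint)."""
    print(f"[jira] transitioning {key} as the token's user")

def close_ticket(ticket_key: str, user_token: str) -> str:
    """Close exactly one ticket, attributed to the requesting user."""
    if not ticket_key or any(c in ticket_key for c in (",", " ")):
        raise ValueError("This tool accepts a single explicit ticket key.")
    # The user's delegated token bounds the action to their own Jira
    # permissions and attributes it to them in the audit log.
    close_jira_issue(ticket_key, token=user_token)
    return f"Closed {ticket_key}"
```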
Why agent access should be enforced at execution
The real question isn’t "Who is this agent?" It’s:
"Can this app, acting for this user, perform this action on this resource?"
And the answer shouldn’t come from your identity provider. It should come from your execution layer.
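In code, that check is just a lookup over what the user delegated for a given connection. A sketch, with an invented grants table:

```python
# Illustrative execution-layer check: "can this app, acting for this user,
# perform this action on this resource?" The grants table is invented.

GRANTS = {
    # (user, resource) -> actions the user has delegated to the app
    ("alice@example.com", "jira"): {"read", "create"},
    ("alice@example.com", "salesforce"): {"read"},
}

def can_execute(user: str, resource: str, action: str) -> bool:
    """Answered at execution time from the user's grants.
    The agent never appears here; it has no identity of its own."""
    return action in GRANTS.get((user, resource), set())

assert can_execute("alice@example.com", "jira", "create")
assert not can_execute("alice@example.com", "jira", "delete")
```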
That’s how most modern applications work. A Gmail add-on doesn’t get your credentials. A Slack integration doesn’t impersonate you. They request delegated access, perform specific actions, and rely on standard authorization flows.
Agents should work the same way.
How existing application patterns manage identity correctly
Enterprise apps have solved this before:
Salesforce integrations use OAuth scopes to ensure apps only access what they need.
Slack bots operate using tokens scoped to specific channels or message types.
Google Workspace add-ons act with user-granted permission but never receive the user’s password.
In every case, the app acts on behalf of the user, and access is mediated by permissions, not by assigning new identities to the app (see the sketch below).
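For instance, a Slack bot built with the `slack_sdk` package acts through a token whose scopes (like `chat:write`) were granted when the app was installed. The token value below is a placeholder:

```python
# The bot's token carries only the scopes granted at install time.
# Token value is a placeholder.
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token="xoxb-...")  # scoped bot token, not a user account

try:
    client.chat_postMessage(channel="#support", text="Ticket ABC-123 updated")
except SlackApiError as e:
    # Anything outside the granted scopes fails at execution time
    # (e.g. "missing_scope"); no new identity is involved.
    print(e.response["error"])
```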
So why do we treat agents differently?
Because they feel, and sometimes are, autonomous. They generate language. They take initiative. But behind the scenes, they’re just orchestrating workflows. And that makes them apps — not users.
This model isn’t theoretical. The idea that agents should behave like applications — and not have separate identities — has been explored across the ecosystem. Connect AI builds on this foundation by making it practical for enterprise environments, where agents must operate securely, at scale, and with user-level control.
How Connect AI enforces the application model
Connect AI treats agents like what they are: applications. It doesn't assign them identities. It scopes execution using tools like QueryData and ExecuteProcedure, and performs those actions using credentials linked to the user—not the agent.
Agents don’t manage credentials for Salesforce
Models never see access tokens for Jira
Execution happens through secure, auditable APIs
Permissions are defined at the connection level
This keeps identity clear, reduces the surface area for privilege escalation, and eliminates the need to redefine your IAM stack.
If a user only has read access to Salesforce, their agent can only query—not update or delete—records. If a user’s Jira connection doesn’t include permission to transition issues, the agent can’t act beyond that boundary. These policies live in the connection, not the agent, and they’re enforced at execution time.
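Here's an illustrative model of that behavior. This is a sketch of the concept, not Connect AI's actual API: the function names echo the QueryData and ExecuteProcedure tools mentioned above, but the signatures are invented. Permissions live on the user's connection and are checked when a tool runs.

```python
# Conceptual model (not Connect AI's actual API): the connection carries the
# user's permissions, and tools check them at execution time.
from dataclasses import dataclass, field

@dataclass
class Connection:
    user: str
    source: str                                  # e.g. "salesforce"
    allowed: set = field(default_factory=set)    # e.g. {"query"}

def query_data(conn: Connection, sql: str) -> str:
    if "query" not in conn.allowed:
        raise PermissionError(f"{conn.user} cannot query {conn.source}")
    return f"rows for: {sql}"                    # stand-in for real execution

def execute_procedure(conn: Connection, name: str) -> str:
    if "execute" not in conn.allowed:
        raise PermissionError(f"{conn.user} cannot execute on {conn.source}")
    return f"ran {name}"

readonly = Connection("alice@example.com", "salesforce", {"query"})
print(query_data(readonly, "SELECT Name FROM Account"))  # allowed
try:
    execute_procedure(readonly, "DeleteAccount")
except PermissionError as e:
    print("blocked:", e)                         # denied at execution time
```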
How Connect AI applies this model in production
Connect AI already supports OAuth- and personal access token (PAT)-based access tied to each user. You configure data-source-specific permissions once, and every agent inherits those rules through tool execution.
Beyond user-based permissions at the data source level, Connect AI adds its own layer of control. Even if a user's Jira permissions allow deletes, their Connect AI account can be restricted to read-only. Coming soon, features like ACL enforcement and data virtualization will extend this to row-, column-, and action-level control over what agents can see and do, all without changing the identity model.
This gives you a clean separation of concerns:
Identity belongs to the user
Reasoning belongs to the model
Execution belongs to Connect AI
What developers, security teams, and platform owners should know
For developers: You don't need to solve identity. You just call tools. The user signs in, Connect AI handles permissions, and your agent executes workflows securely. No stored secrets. No scoped-down API tokens to manage. No identity logic hardcoded into your agent.
For security teams: You don’t need to create new identities. You don’t need to assign agents credentials. You manage access at the connection level and let Connect AI enforce the boundary.
For platform owners: You avoid IAM sprawl, reduce risk, and ship faster. Connect AI gives you visibility and control without reinventing policy enforcement.
Use Connect AI to let agents be applications
You don’t need to solve identity again. You just need to enforce it at the right boundary.
Connect AI helps you do that by keeping identity in the user domain and keeping execution in the application domain. That separation is what makes agent deployments scalable, auditable, and secure.
Ready to move past non-human identity? Start building agents on Connect AI — no new IAM constructs required.
Explore CData Connect AI today
See how Connect AI excels at streamlining business processes for real-time insights.
Get the trial