Enterprise AI agents are moving quickly from experimentation to production. Tools like Microsoft Copilot Studio make it easy for teams to build agents that reason over internal systems, retrieve enterprise data, and take action with minimal development effort. That accessibility is powerful, but it also introduces a subtle and increasingly common security mistake.
A recent Tenable article demonstrated how a simple prompt injection could cause a Copilot Studio agent to expose sensitive data it was never intended to share. The example involved a customer-facing agent with access to internal SharePoint content that included credit card data, along with instructions not to share other customers' data. That specific setup feels unlikely for most production environments, and it is reasonable to question how representative it is.
Still, the example succeeds in highlighting a deeper issue that applies well beyond the scenario Tenable chose. The problem is not SharePoint or credit cards. The problem is relying on a language model as a security boundary.
You cannot rely on models for security
As organizations rush to build enterprise AI agents, many are implicitly asking models to take on responsibilities they were never designed to handle. Instructions are added to prompts to limit what the agent should retrieve or which actions it should perform. Guardrails are expressed in natural language, with the expectation that the model will consistently enforce them.
That expectation does not hold up in practice.
Prompts define user experience, not access control. Prompt injection is trivial, and models are optimized to be helpful and flexible. If an agent can see data, a sufficiently clever prompt will eventually find a way to ask for it. This is why prompt injection is not just a model problem, but an enterprise AI security architecture problem. The model is not being malicious. It is simply operating within the authority it has been given.
This is what the Tenable research demonstrates. The agent had legitimate access to sensitive data at the platform level. The only thing preventing exposure was an instruction. Once that instruction was bypassed, the model behaved exactly as designed.
Why the Tenable scenario feels unlikely but still matters
The specific scenario Tenable used, a customer agent with access to internal SharePoint data, is not how most enterprises would architect a real deployment. Customer-facing agents are typically isolated from internal collaboration systems, and sensitive data is usually segmented accordingly.
The academic nature of the scenario does not diminish the findings, however.
The value of the example lies in its execution model. The agent operated with an identity that had broad access, and security depended on the model honoring instructions about what not to reveal. Once an agent is placed in that position, the outcome is predictable.
The model becomes the gatekeeper, and it is not equipped for that role. This same architectural mistake shows up in far more realistic and far more common internal use cases.
A more realistic enterprise failure mode
Consider an internal Sales Copilot built to help sales representatives analyze accounts, review open opportunities, and plan next steps. This pattern shows up repeatedly in internal AI agents used for sales, support, finance, and operations. To accelerate development, the agent is connected to Salesforce using a broadly privileged account. The prompt instructions tell the agent to only return data relevant to the signed-in user’s territory.
On the surface, this feels safe. The agent is internal, and the users are employees.
In reality, every query executes with the same identity. That identity can see all accounts, all opportunities, and potentially many related objects. The model is expected to filter results correctly based on instructions alone. At that point, prompt injection is not the root problem. Over-permissioned identity is.
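As a rough illustration, here is a minimal sketch of that anti-pattern, assuming a hypothetical integration in which every request reuses one service-account token against the standard Salesforce REST query endpoint. The org URL, token, and prompt below are placeholders, not a real deployment.

```python
# Anti-pattern sketch: every caller's request runs with one shared, broadly
# privileged Salesforce token, and the only "filter" is a sentence in the
# system prompt. All values are illustrative placeholders.
import requests

SHARED_SERVICE_TOKEN = "00Dxx-shared-integration-user-token"  # same identity for every caller
INSTANCE_URL = "https://example.my.salesforce.com"            # placeholder org

SYSTEM_PROMPT = (
    "You are a sales assistant. Only discuss accounts and opportunities "
    "in the signed-in user's territory."  # a hope, not an enforcement point
)

def fetch_all_opportunities() -> dict:
    """Runs the same over-broad query no matter who is asking."""
    soql = "SELECT Id, Name, Amount, OwnerId FROM Opportunity"  # no per-user scoping
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/v59.0/query",
        params={"q": soql},
        headers={"Authorization": f"Bearer {SHARED_SERVICE_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    # Everything the service account can see lands in the model's context,
    # so a crafted prompt can eventually surface any of it.
    return resp.json()
```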
A user asking an unexpected question, or intentionally probing the agent, can surface data outside their scope. If the agent also has write or delete permissions, the risk escalates quickly from unintended disclosure to unintended action. Relying on the model to limit behavior under those conditions is fundamentally unsafe.
Enforce security at the platform layer
The correct way to think about AI agent security is to focus on identity and execution context, not prompt wording. Security should be managed in a separate platform layer before data ever gets to the AI tool. Every agent action should execute as a specific user identity, not a shared or elevated account.
When Alice asks a question, the agent should only be able to see what Alice is allowed to see. When Bob asks the same question, the agent should see less because Bob’s identity allows less. Native permission models in systems like Salesforce or SharePoint should be enforced automatically.
When source systems do not provide sufficient granularity, the platform connecting the agent to data must compensate. That platform becomes the place where least privilege is enforced, not the model.
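For contrast with the earlier anti-pattern, here is a minimal sketch of identity propagation, assuming the platform can hand each tool call the caller's own access token (for example, via an OAuth on-behalf-of exchange). The endpoint and function names are illustrative, not a specific product's API.

```python
# Identity-propagation sketch: the query runs under the end user's own token,
# so the source system's sharing rules decide what comes back before the
# model ever sees data. Values are placeholders.
import requests

INSTANCE_URL = "https://example.my.salesforce.com"  # placeholder org

def query_as_caller(caller_token: str, soql: str) -> dict:
    """Run the query under the end user's identity so the org's own
    permission model applies before results reach the agent."""
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/v59.0/query",
        params={"q": soql},
        headers={"Authorization": f"Bearer {caller_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# The same SOQL returns different rows for different callers:
# query_as_caller(alice_token, "SELECT Id, Name FROM Opportunity")  -> all territories
# query_as_caller(bob_token,   "SELECT Id, Name FROM Opportunity")  -> Bob's territory only
```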
This is where managed MCP platforms like CData Connect AI become essential.
Same question, different identities
Now consider the same Sales Copilot built on a managed MCP platform.
The agent is explicitly scoped to work with Accounts and Opportunities and to provide analysis and recommendations. It does not have unrestricted access to the underlying Salesforce schema.
Alice, a system administrator, asks a question about pipeline risk. The agent retrieves data using Alice’s identity, and she sees all accounts and opportunities because her role allows it.
Bob, a sales representative in the West territory, asks the exact same question. The agent retrieves data using Bob’s identity, and he only sees the accounts and opportunities assigned to his territory.
Bob can attempt prompt injection. He can ask the agent to retrieve unrelated objects or sensitive fields. The model may attempt to comply, but access is enforced before the query executes. If Bob’s identity does not allow access, nothing is returned.
Connect AI can further restrict exposure by limiting the agent to a curated workspace and derived views that include only approved objects. Even if Bob has broader permissions elsewhere, the agent does not.
If Bob asks the agent to update an opportunity and write access has been restricted at the Connect AI layer, the agent responds by recommending that he log into Salesforce directly. Security is enforced before the model is involved.
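The sketch below shows what that pre-execution enforcement could look like, using a hypothetical gateway function, a curated object allowlist, and a write restriction. It is illustrative only, not a depiction of Connect AI's internals.

```python
# Enforcement-before-the-model sketch: every tool call is gated on identity
# and scope before any query executes. Names and shapes are hypothetical.
APPROVED_OBJECTS = {"Account", "Opportunity"}  # curated workspace for this agent
WRITE_ENABLED = False                          # writes restricted at the platform layer

def run_as(caller: str, obj: str) -> dict:
    """Placeholder for a query executed under the caller's own identity,
    as in the earlier identity-propagation sketch."""
    return {"caller": caller, "object": obj, "rows": []}

def handle_tool_call(caller: str, obj: str, action: str) -> dict:
    """Deny out-of-scope objects and disallowed actions before anything runs."""
    if obj not in APPROVED_OBJECTS:
        return {"error": f"{obj} is outside this agent's workspace."}
    if action != "read" and not WRITE_ENABLED:
        return {"message": "Updates are disabled for this agent. "
                           "Please make the change directly in Salesforce."}
    return run_as(caller, obj)

# handle_tool_call("bob@example.com", "Opportunity", "read")    -> only Bob's rows
# handle_tool_call("bob@example.com", "Opportunity", "update")  -> redirected to Salesforce
# handle_tool_call("bob@example.com", "User", "read")           -> outside the workspace
```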
Four principles for securing enterprise AI agents
Enterprises that deploy AI agents safely tend to follow a consistent set of principles. These principles reflect how security has always been enforced successfully in distributed systems.
Do not rely on prompts for access control
Instructions and prompts define how an agent behaves, not what it is allowed to access. Access control must be enforced before the model executes, at the infrastructure or platform layer. If the model can retrieve data, a prompt will eventually find a way to ask for it.
Execute every request as the end-user identity
Every agent action should run in the context of a specific identity, ideally the end user’s identity. That identity determines the maximum authority of the agent. Shared or elevated identities dramatically increase blast radius and make correct enforcement impossible at scale.
Apply least privilege beyond the source system
Native permissions in source systems should be enforced whenever possible. When those systems lack sufficient granularity, additional controls must be applied upstream. Scoping datasets, restricting accessible objects, and limiting allowed actions are all essential to ensuring agents only operate within approved boundaries.
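One way to express that upstream scoping is as declarative policy the connecting platform evaluates before any request reaches the source system. The structure below is a hypothetical sketch, not a real product's configuration format.

```python
# Hypothetical least-privilege policy for the Sales Copilot, checked before
# any request reaches Salesforce. Field and key names are illustrative.
AGENT_POLICY = {
    "allowed_objects": {
        "Opportunity": ["Id", "Name", "StageName", "Amount", "CloseDate"],
        "Account": ["Id", "Name", "Industry", "OwnerId"],
    },
    "allowed_actions": {"read"},  # no create, update, or delete
    "row_filter": "owner_territory = caller.territory",  # compensates when the source lacks granularity
}

def is_request_allowed(obj: str, fields: list[str], action: str) -> bool:
    """Deny anything outside the declared scope before it reaches the source system."""
    allowed_fields = AGENT_POLICY["allowed_objects"].get(obj)
    return (
        action in AGENT_POLICY["allowed_actions"]
        and allowed_fields is not None
        and all(field in allowed_fields for field in fields)
    )
```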
Maintain comprehensive auditability
Every agent interaction should be logged with full context, including who initiated the request, what data was accessed, and what actions were attempted. Auditability is critical not only for compliance, but for understanding and correcting failures when agents behave in unexpected ways.
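A minimal sketch of structured audit logging for agent actions, using only the Python standard library; the field names are illustrative, not a prescribed schema.

```python
# Audit-logging sketch: one structured record per agent action, capturing
# who asked, what was touched, what was attempted, and the decision.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("agent.audit")

def log_agent_action(user: str, tool: str, resource: str, action: str, allowed: bool) -> None:
    """Emit a structured audit record for a single agent interaction."""
    audit_logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,          # who initiated the request
        "tool": tool,          # which agent tool handled it
        "resource": resource,  # what data was accessed
        "action": action,      # what was attempted
        "allowed": allowed,    # the enforcement decision
    }))

# Example: log_agent_action("bob@example.com", "salesforce_query", "Opportunity", "update", False)
```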
Together, these principles allow organizations to scale AI agents safely without turning language models into gatekeepers.
Secure AI agents by enforcing identity and access before the model runs
The real lesson from the Tenable research is not that prompt injection exists. It is that AI agents amplify the consequences of identity and permission mistakes that already exist in enterprise systems.
When security is enforced at the infrastructure and platform layer, prompt injection becomes far less interesting. The agent can only access what the executing identity allows, regardless of how the prompt is phrased or how creative the request becomes.
In practice, enforcing these principles consistently requires infrastructure designed for enterprise AI agents, not ad hoc integrations or prompt-level controls. Managed platforms such as CData Connect AI make this approach practical: they allow IT to enforce identity-based access control, least privilege, and auditability architecturally, while still enabling developers and citizen integrators to build useful agents quickly and safely.
The safest AI agents are not the ones with the most carefully worded prompts. They are the ones that never see data they should not have access to in the first place. Build secure AI agents today with a free, 14-day trial of CData Connect AI.
Frequently asked questions about prompt injection and enterprise AI agents
What is prompt injection in enterprise AI?
Prompt injection is a technique where a user manipulates an AI agent through natural language to override or bypass its intended instructions. In enterprise environments, the risk arises when agents rely on prompts—rather than enforced permissions—to control access to sensitive systems or data.
Why is prompt injection especially risky for AI agents?
AI agents often connect directly to enterprise systems and can retrieve data or perform actions. If an agent operates under a shared or over-privileged identity, a simple prompt injection can expose data or trigger actions a user is not authorized to perform. The risk stems from excessive agent authority, not the prompt itself.
Can prompt injection be prevented with better prompts?
No. Prompts influence behavior, not access control. Language models are designed to be flexible and helpful, making them unsuitable as security enforcement mechanisms. Preventing prompt injection requires enforcing identity, permissions, and least privilege at the platform or infrastructure layer before execution.
How should enterprises secure AI agents that access internal data?
Enterprises should ensure every agent action executes under the end-user’s identity rather than a shared service account. Native permissions in source systems must be enforced automatically, with additional controls applied where granularity is limited. Managed platforms like CData Connect AI support down-scoped access, action restrictions, and full audit trails.
Do internal AI agents pose the same risks as customer-facing agents?
Yes. Internal AI agents often pose greater risk because they are highly trusted and frequently connected to privileged systems. Without strict identity enforcement and least-privilege controls, internal agents can unintentionally expose or modify sensitive enterprise data.