Your agent just received a prompt: summarize last week's sales calls from Gong, post the recap to Teams, and create a new task in Salesforce for follow-up. Straightforward—until you realize the agent needs access to three different services, and you don’t know what credentials it should use. Should it have the power to delete calls from Gong? Can it broadcast to the entire company on Teams?
These are not just edge cases. They're fundamental authorization challenges that block intelligent agents from moving beyond the prototype stage. And while user login is usually solved by your framework or SDK, the real problem is what happens next: how the agent accesses third-party systems on behalf of a specific user—securely and within scope.
If you're building production-grade agents, you've faced this roadblock. The critical question isn’t "Who is the agent?" It’s: "Can this agent, acting for this user, take this specific action on this specific resource?"
Distinguishing user authentication from agent authorization
To frame the issue clearly, we need to separate two related but distinct concerns:
User authentication is about verifying identity—confirming that a user has access to your agent or app. This is typically managed by your platform using SSO, API keys, or session tokens.
Agent authorization, in contrast, is about what that user’s agent is allowed to do across other systems: pull Salesforce records, post messages to Slack, update issues in Jira, etc. These actions require:
Scoped access to each service
Enforcement of per-user permissions
Protection against misuse, overreach, or prompt injection
This layer of security—the point where the agent interacts with external tools—is where enterprise teams need to focus.
Why "non-human identity" misses the point
There’s been a lot of talk lately about non-human identities (NHIs) for bots and agents. But agents aren’t stand-alone identities. They’re execution environments for user-initiated actions. Treating agents as separate, unique actors misses the real security boundary: they’re acting as extensions of the user.
That means traditional identity models don’t go far enough. Instead of asking “Is this identity allowed access?”, we need to ask:
“Can this agent, acting on behalf of this user, perform this exact action—right now—on this specific resource?”
Why agent auth must happen at the tool layer
The moment an agent calls a tool—say, to create a Salesforce task or to send a Teams message—is when authorization matters most. You need enforcement at the point of action, with full context (a sketch of such a check follows this list):
Who is the user (from your auth platform)?
Which agent is acting?
What is the tool trying to do?
What data is being requested?
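To make that concrete, here's a minimal sketch of a per-action check. Everything in it is hypothetical (the function, field names, and permission structure are illustrations, not Connect AI's API), but it shows the shape of the decision: every tool call is evaluated with the full user, agent, tool, and resource context, not just a session-level identity.

from dataclasses import dataclass

@dataclass
class ToolCall:
    user_id: str      # who the user is (from your auth platform)
    agent_id: str     # which agent is acting on the user's behalf
    tool: str         # what the tool is trying to do, e.g. "ExecuteProcedure"
    resource: str     # what data is being touched, e.g. "Salesforce.Lead"
    operation: str    # "read", "write", "delete", ...

def is_authorized(call: ToolCall, permissions: dict) -> bool:
    """Allow the call only if this user may perform this operation on this resource."""
    allowed = permissions.get(call.user_id, {}).get(call.resource, set())
    return call.operation in allowed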
Two common approaches (and why they fall short)
Approach 1: Service accounts
It’s tempting to assign a generic service account to the agent with blanket access to backend systems. That works for internal batch jobs—but it fails in a multi-user, agent-driven model:
Every user inherits the same permissions
Role-based access is effectively bypassed
Sensitive data may be exposed without authorization
Security teams often flag this approach during review. It creates a compliance gap and limits real-world deployment.
Approach 2: Full user credentials
What if you go the other direction and give the agent the user’s full access token?
In theory, it's safer: the agent can only reach what the user already has permission to access. But in practice, it's risky. Agents shouldn't inherit the full power of the user without guardrails. A single hallucinated deletion, or a prompt injection attack, could have outsized consequences.
That’s why security-conscious teams look for a third option: scoped, user-specific access that’s enforced per action, not per session.
Connect AI: Enforcing safe agent access through connection-level permissions
By layering platform-level users on top of user-based connections to your business data, Connect AI adds the layer of governance IT teams need to deliver this third option. To see how that governance works, it helps to look at how Connect AI's MCP Server supports reading, writing, and acting on business services.
Connect AI exposes a consistent set of generalized tools across all services—whether you’re accessing structured databases, Salesforce records, Jira issues, or cloud APIs. These tools act like database primitives and include operations like:
QueryData: fetch records or documents
ExecuteProcedure: perform actions like updating a ticket or sending an email
GetSchemas, GetTables, GetColumns: explore the structure of the underlying service
While these tools are always available to agents, the actual capabilities are constrained by the user’s connection-level permissions within Connect AI.
For example, a Salesforce user may have full permissions in the native Salesforce UI—but their Connect AI credentials may only allow read access. Meanwhile, that same user might have read/write access to Jira.
This approach limits agent behavior without requiring dynamic OAuth scopes or granular ACLs.
With CData Connect AI, authorization is enforced at tool execution time. That means each request is checked against user permissions at the platform level and scoped credentials at the service level before any action is taken.
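As a rough illustration, a QueryData request is essentially a SQL-style query against the connected service, and the permission check happens at the moment it executes. The argument names below are assumptions made for the sketch; agents read the real parameters from the tool schemas the MCP server publishes.

# Hypothetical tool arguments, for illustration only
query_data_args = {
    # A read: permitted under a read-only Salesforce connection
    "query": "SELECT Id, Status FROM Salesforce.Lead WHERE Status = 'Open'"
}
execute_procedure_args = {
    # A write: rejected at execution time if the connection grants read access only
    "procedure": "DeleteLead",  # hypothetical procedure name
    "parameters": {"Id": "some-lead-id"}
}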
Example: Querying Salesforce with LangChain + Connect AI
Let’s walk through a real-world example where your agent needs to query Salesforce data. We’ll use LangChain as the SDK and CData Connect AI as the MCP backend. The auth complexity is handled entirely by Connect AI.
Step 1: Configure Salesforce in Connect AI
In the Connect AI platform, go to Sources → + Add Connection, and select Salesforce.
Complete the OAuth flow to connect to Salesforce.
Under the connection’s Permissions tab, configure which Salesforce objects each user is allowed to access and which operations each user is allowed to perform.
Generate a Personal Access Token (PAT) for your LangChain integration.
Step 2: Set up LangChain + MCP client
First, create a class to store the MCP authentication configuration. Connect AI's MCP endpoint accepts HTTP Basic auth, so we base64-encode a Connect AI username and Personal Access Token (PAT).
config.py
import base64

class Config:
    MCP_BASE_URL = "https://mcp.cloud.cdata.com/mcp"
    # Base64-encode "username:PAT" for the HTTP Basic Authorization header
    MCP_AUTH = base64.b64encode(b"your.username@example.com:YOUR_CONNECT_AI_PAT").decode()
Next, create your agent. This agent asks the simple question, "How many Leads do we have in Salesforce with status = 'Open'?"
langchain_agent.py
import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from config import Config
async def main():
    # Connect to the Connect AI MCP server over streamable HTTP,
    # authenticating with the Basic credentials built in config.py
    mcp_client = MultiServerMCPClient(
        connections={
            "default": {
                "transport": "streamable_http",
                "url": Config.MCP_BASE_URL,
                "headers": {"Authorization": f"Basic {Config.MCP_AUTH}"}
            }
        }
    )

    # Discover the tools exposed for this user's connections (QueryData, ExecuteProcedure, etc.)
    all_tools = await mcp_client.get_tools()
    print("Discovered tools:", [tool.name for tool in all_tools])

    llm = ChatOpenAI(
        model="gpt-4o",
        temperature=0.2,
        api_key="YOUR_OPENAI_API_KEY"
    )

    # Build a ReAct-style agent that can call the discovered MCP tools
    agent = create_react_agent(llm, all_tools)

    user_prompt = "How many Leads do we have in Salesforce with status = 'Open'?"
    print("User prompt:", user_prompt)

    response = await agent.ainvoke({
        "messages": [{"role": "user", "content": user_prompt}]
    })

    final_resp = response["messages"][-1].content
    print("Agent final response:", final_resp)

if __name__ == "__main__":
    asyncio.run(main())
To run the agent, first install the required dependencies:
pip install langchain-mcp-adapters langchain-openai langgraph
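Then run the script; it prints the discovered tools, the prompt, and the agent's final answer:

python langchain_agent.py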
This agent will:
Connect to the Connect AI MCP server using your PAT
Discover Salesforce tools
Invoke the appropriate tool securely and in real time
Return the results to the user via LangChain
All OAuth handling, credential storage, and connection-level permissions are abstracted away.
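If you want to exercise a single tool without the LLM in the loop, for testing or debugging, you can also invoke it directly from the discovered tool list. The snippet below is a sketch that belongs inside main() after get_tools(); it assumes a tool named QueryData is exposed for your connection and that it accepts a SQL-style query argument, so check the schemas returned by get_tools() for the actual parameters.

# Sketch: call QueryData directly (the "query" argument name is an assumption)
query_tool = next(t for t in all_tools if t.name == "QueryData")
result = await query_tool.ainvoke(
    {"query": "SELECT COUNT(Id) FROM Salesforce.Lead WHERE Status = 'Open'"}
)
print("QueryData result:", result)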
The real-world complexity (that Connect AI handles for you)
Connect AI abstracts away the hardest parts of secure agent integration:
OAuth flow orchestration (initiation, redirect, callback handling)
Token lifecycle management (refresh, revoke, storage)
Connection-level permissions (to constrain agent access without changing service-level scopes)
Audit logging (for compliance and monitoring)
Tool abstraction (unified interface across APIs, SaaS apps, and RDBMS)
All this is fully integrated into Connect AI, the first managed MCP (Model Context Protocol) platform. That means if your agent SDK supports MCP, you can delegate tool execution—and all related service integration complexity—to Connect AI.
How it works
Here's how Connect AI handles secure tool access in production:
User authenticates to your agent through SSO or API key
Agent parses the prompt via your agent SDK and identifies the required tools
The SDK's MCP client passes the tool request to Connect AI for execution
Connect AI retrieves user-specific credentials (OAuth or PAT)
Connect AI validates access via the connection’s configured permissions
The result is returned to the agent and the user
All without requiring developers to manage tokens, OAuth flows, or ACL logic manually.
Build secure agents without rebuilding auth infrastructure
CData Connect AI is a secure, governed access layer for agent workflows across live enterprise data. It gives developers a consistent tool interface and leverages connection-level permissions to constrain agent access across Salesforce, Jira, Google Drive, databases, and more.
If your agent SDK supports MCP, you’re already halfway there. Sign up for a free trial and let Connect AI handle the rest.
Explore CData Connect AI today
See how Connect AI excels at streamlining business processes for real-time insights.
Get the trial