
AI agents are revolutionizing how we work with data, but the bridge connecting AI to your business systems—Model Context Protocol (MCP)—might be leaving your organization exposed.
MCP is often described as the "USB-C for AI" because it standardizes how large language models (LLMs) and AI agents connect to external tools and data sources. But while its convenience and interoperability are fueling rapid adoption, MCP security has not kept pace. In fact, many organizations are already deploying MCP-powered workflows without understanding the risks.
In this article, we explore the security concerns identified in recent research, the challenges in implementing secure MCP systems, and what organizations must do to protect their data and infrastructure in this emerging ecosystem.
The current security landscape
Recent security research paints a concerning picture. As MCP grows in popularity, so too, does the number of experimental and insecure deployments.
Implementation vulnerabilities
A comprehensive study of open-source MCP servers uncovered widespread flaws (PromptHub):
43% of servers suffer from command injection vulnerabilities
33% allow unrestricted URL fetches, enabling SSRF-style attacks
22% leak files outside their intended directories
Two recent CVEs highlight the severity of these flaws:
CVE-2025-6514 (CVSS 9.6): Remote code execution in the mcp-remote server (JFrog Security Blog)
CVE-2025-49596 (CVSS 9.4): RCE vulnerability in the browser-based "MCP Inspector" tool used in development workflows (Oligo Security Blog)
Protocol-level security gaps
Security risks are not limited to implementations. MCP’s protocol design creates opportunities for misuse:
Nearly 2,000 MCP servers are publicly discoverable, many deployed without authentication (TrollySec)
While OAuth 2.1 is recommended for authorization, MCP leaves actual data protocol security to the implementer (Christian Posta)
This creates an outsized burden on developers to get security right at every layer (AppSec Engineer)
Authentication and authorization challenges
MCP introduces novel challenges in securing AI-driven interactions:
AI agents often behave non-deterministically, complicating deterministic authentication workflows
Prompt injection and hidden tool instructions can trick agents into exposing secrets or leaking credentials
Mapping access tokens across systems requires complex configuration and can easily lead to over-permissioned access
Enterprise vs. experimental: The trust divide
The MCP ecosystem is dominated by experimental projects published anonymously or semi-anonymously on open marketplaces. While these tools demonstrate innovation, they often lack even basic security controls. Many reintroduce issues that have long been considered solved in enterprise development, such as hardcoded credentials, shell command execution from user input, and lack of input sanitization.
The risk grows when agents begin to chain together tools with varying privilege levels. Without clear boundaries, an agent might use a trusted tool to elevate the privileges of a less trusted one, creating an exploit chain.
For enterprises, this experimental culture is a non-starter. They require:
Audited, trusted implementations from known vendors
Enterprise-grade support and incident response
Security models that align with existing infrastructure
A clear separation between authentication protocols and data access logic
Different approaches to MCP security
Several companies are already pioneering best practices for securing MCP in production environments.
1Password establishes clear boundaries and credential safety
1Password refuses to expose raw credentials to AI agents via MCP (1Password Blog). Instead, they:
Inject credentials on behalf of agents without passing them directly
Use short-lived, scoped tokens when credentials must be delivered
Restrict MCP to low-risk, high-value interactions, like accessing read-only metadata
Their approach is rooted in the recognition that non-deterministic agent behavior and high-sensitivity credentials do not mix.
Epic AI uses just-in-time authentication for safer exploration
Epic AI uses a dynamic authorization model where users can explore public tools before logging in, with authentication enabling access to sensitive capabilities (Epic AI). Key principles include:
OTP-based email authentication to reduce friction
Separation between public and private tool access
Familiar workflows that mirror real-world expectations without overexposing data
This progressive approach balances usability with safety by granting access based on user context and trust level.
Security best practices for MCP implementation
Securing MCP requires a defense-in-depth strategy spanning authentication, development, and deployment.
Authentication and authorization
Implement OAuth 2.1 with PKCE to protect authorization flows
Use trusted identity providers rather than bespoke login systems
Enforce least privilege for each agent or tool interaction
Manage token lifecycles: rotate, revoke, and scope them properly
Development security
Conduct code audits and penetration testing before deployment
Validate and sanitize all inputs, especially if invoking shell commands or fetching URLs
Use allowlists for permitted operations and deny everything else
Maintain comprehensive logs of AI and tool interactions
Deployment security
Run MCP servers on hardened, trusted infrastructure
Use network isolation (VPCs, firewalls) to restrict access
Patch and update servers regularly
Develop and test incident response plans for MCP-related breaches
Secure deployment is not optional
Model Context Protocol is a powerful innovation. It opens the door for AI agents to interact meaningfully with enterprise systems. But its power comes with real, present risks. Security incidents are no longer hypothetical. They are already being discovered in the wild.
Most existing MCP servers are experimental projects. They lack even baseline enterprise-grade protections. Even the MCP specification itself requires careful implementation and supplemental security to be production ready.
Organizations looking to adopt MCP must:
Vet servers and tools for security, provenance, and supportability
Avoid unknown or unaudited MCP implementations
Demand enterprise-grade security controls, auditability, and incident response
Continue to enforce existing governance and access control across AI workflows
Talk with your data securely with CData MCP Servers
Understanding the risks is only the first step. In our next article, we explore how to implement secure, enterprise-ready MCP solutions—and how CData MCP Servers can help you safely connect AI to your most sensitive systems.
Try CData MCP Servers Beta
As AI moves toward more contextual intelligence, CData MCP Servers can bridge the gap between your AI and business data.
Try the beta