
Last week, Anthropic donated the Model Context Protocol (MCP) to the Linux Foundation's newly created Agentic AI Foundation. OpenAI, Block, Microsoft, Google, AWS, Cloudflare, and Bloomberg are all backing the move, which protects the standard that connects AI agents to enterprise systems by placing stewardship with the same neutral body that oversees projects like Kubernetes, Node.js, and PyTorch.
Maintaining MCP for the greater good of AI
MCP started as Anthropic's solution to a fundamental challenge: AI assistants and agents are only as useful as the context they can access, yet every integration required a bespoke connector. In just one year, MCP has become the de facto standard for AI data connectivity, adopted by ChatGPT, Gemini, Microsoft Copilot, and others. According to Anthropic, there are now over 10,000 public MCP servers and 97 million SDK downloads per month.
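To make the protocol concrete, here is a minimal sketch of an MCP server built with the official Python SDK's FastMCP helper. The server name and the `lookup_order` tool are illustrative placeholders, not part of any real product:

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# "orders-demo" and lookup_order are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-demo")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order (stub for a real system of record)."""
    # In practice this would query an internal database or API.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # stdio by default; any MCP-capable client can launch this
```

Once a server like this exists, every MCP-speaking assistant or agent can use it; that single, shared connector is what replaced the per-platform integrations described above.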
Moving MCP to the Linux Foundation means the protocol will evolve based on community consensus rather than any single company's roadmap or commercial interests. Enterprises can adopt MCP knowing it will remain open and that their investments in MCP-based integrations will work across AI platforms now and into the future.
What the move signals about where AI is heading
Anthropic's decision to donate MCP, and the industry-wide support behind it, tells us something important about the trajectory of enterprise AI:
The integration layer will be open. Just as HTTP became the universal protocol for web communication and SQL became the standard for database queries, MCP is positioned to become the universal protocol for AI-to-data connectivity. The major AI platforms have effectively agreed that competing on integration protocols doesn't serve anyone.
AI platform choice will remain fluid. Organizations won't commit to a single AI vendor for all use cases. Different teams will use different tools. New platforms will emerge. The best enterprises will architect for this reality rather than against it.
Governance becomes the differentiator. When connectivity is standardized, the competitive advantage shifts to how well organizations can govern, secure, and manage those connections at scale.
Architectural principles for an MCP-native future
As MCP matures under neutral stewardship, we can expect a trajectory similar to other open-source technologies the Linux Foundation stewards. Before Kubernetes became the industry standard for container orchestration, for example, deployments were often locked into a single cloud provider's tool, such as AWS ECS or Azure Container Service. When Kubernetes emerged as the standard, organizations that had planned for a vendor-neutral architecture reaped the rewards: portable workloads and hybrid environments without costly rewrites. We can expect the same dynamic in the AI world with MCP. Organizations planning ahead today should:
Separate your data layer from your AI layer. The systems your AI agents connect to should be independent of which AI platform runs the agent. This separation means you can adopt new AI tools without rebuilding connectivity, sunset old tools without losing integrations, and maintain consistent governance regardless of which AI platform is making the request (see the sketch after this list).
Centralize credential and permission management. MCP's accessibility is a double-edged sword. It's now trivially easy for developers and business users to spin up connections to company data without IT involvement. This creates the same shadow IT risks we've seen with cloud apps and APIs, but with higher stakes. A unified approach to authentication and authorization across all AI connections will be essential.
Build for portability from day one. Any MCP integration you build today should work with Claude, ChatGPT, Copilot, and platforms that don't exist yet. Avoid implementations that tie your data connectivity to a single AI vendor's specific requirements or extensions.
Plan for a multi-agent future. The agentic AI landscape is evolving rapidly. Organizations will likely run multiple specialized agents across different platforms, all needing access to the same enterprise data. Architectures that assume a single AI entry point will struggle to adapt.
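Here is a minimal sketch of the first two principles in practice, again using the Python SDK. The data store, the policy check, and all names are hypothetical stand-ins; the point is that one vendor-neutral MCP server fronts the data layer and delegates permissions to a central check, regardless of which AI platform connects:

```python
# Sketch: one vendor-neutral MCP server in front of enterprise data,
# with authorization delegated to a centralized policy check.
# CUSTOMERS and policy_allows are hypothetical stand-ins.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-data")

CUSTOMERS = {"42": {"name": "Acme Corp", "tier": "enterprise"}}  # stand-in data layer

def policy_allows(principal: str, action: str, resource: str) -> bool:
    """Stand-in for a call to a central permission/policy service."""
    return action == "read"  # illustrative rule only

@mcp.tool()
def get_customer(customer_id: str, principal: str = "agent") -> dict:
    """Fetch a customer record if the central policy check passes."""
    if not policy_allows(principal, "read", f"customer/{customer_id}"):
        raise PermissionError("Denied by central policy")
    return CUSTOMERS.get(customer_id, {})

if __name__ == "__main__":
    mcp.run()  # same server, whichever AI platform is on the other end
```

Because the server speaks plain MCP with no vendor-specific extensions, the same code can serve any MCP-capable client, and moving from a local stdio transport to an HTTP-based one for remote clients is a configuration change in the SDK rather than a rewrite.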
Building the future of AI with open standards
MCP's move to the Linux Foundation is part of a broader pattern: the foundational infrastructure of AI is becoming open and collaborative, even as AI platforms compete fiercely on capabilities. This mirrors the evolution of the internet, cloud computing, and mobile, where proprietary applications were built on open standards.
For enterprise buyers, maximizing flexibility and preventing vendor lock-in will be critical in the coming year. The AI landscape will look dramatically different in two years, and the organizations best positioned to benefit will be those that build on open protocols and maintain independence in their data architecture today. The ability to bring the same governed data connectivity to any AI agent, assistant, or workflow via MCP, without reconnecting each time, will separate organizations that can move quickly with AI from those stuck in integration debt.
Start building with an independent data platform for AI today
Prevent vendor lock-in with a neutral data foundation
Get the trial