Philipp Schmid (AI Developer Experience at Google DeepMind) recently reflected on the challenges and patterns emerging from his experience building with Model Context Protocol (MCP) servers in a thoughtful post titled "One Month in MCP" (reddit.com, x.com). The post sparked a meaningful discussion among developers experimenting with tool-augmented agents. When I read it, I was struck by how CData's approach already seemed to address the challenges and patterns Philipp observed, and I thought his insights (and those of the community) deserved a deeper look.
At CData, we released our first MCP Servers on May 1, 2025, offering both installable MCP Servers (using stdio) and a cloud-hosted MCP Server as part of CData Connect AI (using streamable HTTP). So when we read Philipp's post, we found ourselves nodding along. His observations mirror many of the early tradeoffs we've encountered, and they validate the architectural decisions we've made as our MCP offering has matured.
We’re sharing our perspective to extend the conversation and offer practical insight into how to scale from solo experimentation to team-ready, production-grade tool environments.
stdio is great—until it isn’t
Philipp is right: stdio feels simple and direct. It's a great way to get started with tools, especially when you want to interact with the local environment or debug tool behavior. That's why our installable MCP Servers use stdio by default.
But stdio has limits. If you’re spending more time restarting processes and syncing tool state than building functionality, it’s time to level up. We’ve seen firsthand that for long-running workflows, agent collaboration, or cloud-based access, remote connections provide a smoother and more stable experience.
CData Connect AI offers a remote MCP Server over streamable HTTP. It lets teams skip the manual setup and focus on what matters: invoking tools, not babysitting processes.
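To make the contrast concrete, here is a minimal sketch of the stdio flow using the official MCP Python SDK (the mcp package). The command name and config file are hypothetical placeholders, not CData's actual binary; the point is that the client spawns and owns the server process:

```python
# A minimal sketch of talking to a local stdio MCP Server with the
# official MCP Python SDK. The command and args below are hypothetical
# placeholders, not an actual CData MCP Server invocation.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # The client spawns the server as a child process and speaks
    # JSON-RPC over its stdin/stdout: simple, but tied to this machine
    # and to the lifetime of the client.
    params = StdioServerParameters(command="cdata-mcp", args=["--config", "salesforce.json"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

Every restart of the client means a restart of the server process, which is exactly the babysitting cost that shows up as workflows grow.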
Going beyond local-first with remote MCP
Local setups work well for personal projects. But as soon as you start sharing tools, working across environments, or scaling to more users, local-first starts to crack.
Connect AI is designed for remote-first development. Users can access shared MCP tools instantly—no Git clone, no package installation, no API key leakage. Just secure, authenticated, cloud-hosted tools that are versioned, curated, and ready for collaboration.
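From the client's side, the remote flow looks nearly identical; only the transport changes. In this sketch the endpoint URL and bearer-token scheme are assumptions for illustration, not Connect AI's documented values:

```python
# A sketch of the remote-first flow: connect to a hosted MCP Server over
# streamable HTTP. The URL and auth scheme are hypothetical.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    async with streamablehttp_client(
        "https://example.cdata.com/mcp",                   # hypothetical endpoint
        headers={"Authorization": "Bearer <your-token>"},  # assumed bearer auth
    ) as (read, write, _get_session_id):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```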
A shared set of tools across all MCP Servers
Philipp raises an important issue around tool collisions and naming inconsistencies. At CData, we take a different approach: all of our MCP Servers expose a consistent set of tools, grounded in the same SQL-based abstraction that powers our drivers and Connect AI.
These tools are designed around CData's database-like access layer, which aligns with how most LLMs have been trained to interact with data: SQL against databases. The tools remain consistent across both local (stdio) and remote (HTTP) MCP Servers:
queryData: Execute SQL queries against connected data sources and retrieve results.
execData: Perform actions against a data source based on the stored procedures discovered using getProcedures.
getCatalogs: Retrieve a list of available connections in CData Connect AI. Each connection acts as a catalog in other tools and queries.
getSchemas: Retrieve a list of available APIs or data models within a specific connection (catalog).
getTables: Retrieve a list of available objects, entities, or collections within a given API or model.
getColumns: Retrieve a list of fields, dimensions, or measures for a specific table, object, or collection.
getProcedures: Retrieve a list of available actions (stored procedures) for a given API or model—such as sending an email, changing an issue status, downloading a file, and more.
This standardization makes it easier to predict tool behavior, reduce confusion, and build workflows that scale across models and environments.
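As a rough illustration of how these tools compose, the sketch below walks the discovery chain and then issues a query. It assumes an initialized ClientSession as in the earlier snippets; the argument names and the catalog/schema/table values ("Salesforce1", "Salesforce", "Account") are illustrative guesses rather than CData's documented parameters, so check each tool's input schema before relying on them:

```python
# A sketch of the discovery chain using the tool names above.
# Assumes an already-initialized ClientSession; argument names are
# illustrative guesses, not documented CData parameters.
async def explore(session) -> None:
    catalogs = await session.call_tool("getCatalogs", {})
    schemas = await session.call_tool("getSchemas", {"catalog": "Salesforce1"})
    tables = await session.call_tool("getTables", {"catalog": "Salesforce1", "schema": "Salesforce"})
    columns = await session.call_tool(
        "getColumns",
        {"catalog": "Salesforce1", "schema": "Salesforce", "table": "Account"},
    )

    # Once the shape of the data is known, queryData takes plain SQL.
    result = await session.call_tool(
        "queryData",
        {"query": "SELECT Name, AnnualRevenue FROM [Salesforce1].[Salesforce].[Account] LIMIT 10"},
    )
    for block in result.content:
        print(block)
```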
Scaling access through tool filtering
Too many tools? Too much context? Philipp calls it a hard bottleneck, and he's right.
Connect AI supports per-agent tool filtering to manage tool exposure. This allows users to curate the tools available to each LLM agent, minimizing overload and improving performance. We're also exploring retrieval-based methods and custom tools to further enhance the tool experience.
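Connect AI applies this filtering server-side through configuration, but the underlying idea is easy to sketch client-side: fetch the full tool list, then hand each agent only an allowlist. The allowlist below is just an example:

```python
# An illustrative client-side version of per-agent tool filtering:
# fetch the full tool list, then expose each agent only the subset it
# needs. Connect AI does this server-side via configuration; this
# sketch just shows the idea.
REPORTING_AGENT_TOOLS = {"getCatalogs", "getSchemas", "getTables", "getColumns", "queryData"}

async def tools_for_agent(session, allowlist: set[str]):
    listed = await session.list_tools()
    # Keep only the tools this agent is allowed to see, shrinking the
    # context the model has to reason over.
    return [tool for tool in listed.tools if tool.name in allowlist]
```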
Schemas are driven by data, not the model
Rather than tailoring schemas to individual LLMs, CData MCP leverages the same schema generation logic used in our drivers and our cloud-based connectors for analytics. That means the structure of each tool is informed directly by the underlying data source, not by the behavior of any particular model.
This gives users clarity and consistency, but it also means developers should consider testing their tool usage across models like GPT, Claude, or open-source LLMs to ensure compatibility.
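One practical consequence of a consistent, data-driven tool surface: you can sanity-check it before swapping a local server for a remote one under an agent. A small sketch, assuming two initialized sessions:

```python
# A sketch for checking that two MCP Servers (say, a local stdio
# instance and a remote Connect AI instance) expose identical tool
# schemas before substituting one for the other under an agent.
async def diff_tool_schemas(session_a, session_b) -> None:
    schemas_a = {t.name: t.inputSchema for t in (await session_a.list_tools()).tools}
    schemas_b = {t.name: t.inputSchema for t in (await session_b.list_tools()).tools}
    for name in sorted(set(schemas_a) | set(schemas_b)):
        if schemas_a.get(name) != schemas_b.get(name):
            print(f"schema mismatch for tool: {name}")
```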
Start scaling with CData MCP Servers
Philipp's post is a great checkpoint for where the MCP ecosystem is today. The lessons are clear: avoid over-reliance on stdio, move toward remote-first access, curate your toolset, and understand how schema complexity may impact model behavior.
At CData, we’ve designed our on-premises MCP Servers and Connect AI to help you take those next steps. Whether you're testing tools locally or orchestrating agents across teams, CData's connectivity is built to scale with you.
Try CData Connect AI to explore our remote MCP capabilities or download a local MCP Server to get started fast.