Flipping the Enterprise AI Stack with SQL: The Surprising Key to Useful AI Agents

by Marie Forshaw | June 19, 2025


How CData and MCP help AI speak the language it already knows.

Agentic AI is coming. Unlike chatbots that merely talk, these new agents *act*. They promise to handle complex business tasks independently, working across different software systems. Think of an AI that doesn't just suggest a sales strategy but updates your CRM, checks inventory, and emails the relevant team members.

This sounds great. But there's a hitch. For AI agents to act usefully, they need access to company data. This data lives scattered across dozens of systems – Salesforce, SAP, databases, and cloud apps. Getting AI to talk to all these systems reliably is the main challenge.

Many assume the answer involves complex custom programming for each system's unique API or perhaps moving vast amounts of data into special warehouses. These paths are slow, expensive, and often insecure.

There is a simpler, more effective way. It relies on a language AI already understands well: SQL. This article explains how an approach built on SQL, combined with tools like CData connectors and the Model Context Protocol (MCP), provides a solid foundation for AI agents that truly work.

Turning the software stack upside down

For years, business software meant separate applications. You logged into your CRM, then your ERP, then your email. You clicked through screens, copied information, and did the work yourself. The software dictated how you worked.

Agentic AI flips this model. The AI agent becomes the main interface. You tell the agent your goal in plain language: "Find customers in the Northeast who haven't ordered in six months and draft a follow-up email." The agent identifies the steps and interacts with the necessary systems behind the scenes.
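
To make this concrete, here is the kind of SQL an agent might generate for that request. It's only a sketch: the table and column names (Customers, Orders, Region, OrderDate) and the date functions are illustrative and depend entirely on the schema the connected systems actually expose.

```sql
-- Sketch of an agent-generated query for the request above.
-- Table, column, and function names are illustrative; the real ones come from
-- the schema of the connected systems and the SQL dialect in use.
SELECT c.CustomerId, c.Name, c.Email
FROM Customers c
WHERE c.Region = 'Northeast'
  AND NOT EXISTS (
      SELECT 1
      FROM Orders o
      WHERE o.CustomerId = c.CustomerId
        AND o.OrderDate >= DATEADD(month, -6, GETDATE())  -- orders placed in the last six months
  );
```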

This "flipped stack" puts the user's goal first. But it only works if the agent can easily and securely get the data it needs from all those underlying systems.

Why AI understands SQL

Why SQL? Isn't that an old database language? Yes, but it's a language that the AI models behind these agents, large language models (LLMs), know surprisingly well.

LLMs learn by reading massive amounts of text and code from the internet. SQL appears everywhere in this training data – in technical documents, code examples, and online forums. As a result, LLMs develop a strong grasp of SQL's structure and rules.

Contrast this with APIs. Every software system has its own API, a specific set of commands for interacting with it. There are thousands of different APIs, each with its own quirks. Teaching an LLM to use hundreds of them reliably is complex and error-prone; it's like asking someone to become fluent in hundreds of obscure dialects overnight.

SQL, however, is standardized. It's a logical language for asking questions about structured data. Because LLMs are good at recognizing patterns and structure, and because they've seen so much SQL during training, generating SQL is a task they are naturally suited for. Using SQL plays to the AI's strengths.

Building the right connections: CData and MCP

So, AI understands SQL. But your Salesforce or SAP system understands APIs. How do you bridge this gap?

This requires two key pieces of architecture:

  1. CData Connectors: Think of these as universal translators. CData offers connectors for over 300 business systems. Each connector makes a specific system (like Salesforce) look like a standard SQL database to the outside world. The connector handles the messy details of translating SQL commands into the correct API calls for that specific system.
  2. Model Context Protocol (MCP): This is an open standard that gives the LLM (the AI agent's "brain") a secure, consistent way to use external tools like CData. Through MCP, the agent can send a SQL query to the CData connector and get the results back whenever it needs data.

Together, CData and MCP create a vital layer in the new agentic architecture. The user talks to the agent. The agent's LLM figures out what data is needed and writes a SQL query. MCP securely sends the query to the CData connector. CData translates the SQL to API calls, gets the data from the target system, and sends it back. This setup lets the LLM focus on reasoning, while CData handles the complex job of talking to all the different systems.
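
As a sketch of what this looks like from the agent's side: a CData connector presents an application's API as a set of relational tables, so a question about Salesforce data reduces to ordinary SQL. The table and column names below mirror common Salesforce objects but are only illustrative; the actual schema comes from the connector.

```sql
-- Sketch: reading Salesforce through a connector that exposes its API as SQL tables.
-- The connector turns this into the necessary Salesforce API calls behind the scenes.
-- Object and field names are illustrative and depend on the connector's schema.
SELECT Name, Industry, AnnualRevenue
FROM Account
WHERE BillingState = 'NY'
ORDER BY AnnualRevenue DESC
LIMIT 10;
```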

Getting information (read operations)

This SQL-based architecture makes it much easier for agents to retrieve information:

  • Simpler Queries: LLMs can generate standard SQL, which is more reliable than generating unique API code for each system.
  • Better Guidance: CData provides information about each system's data structure (tables, columns). Giving this information to the LLM helps it write more accurate SQL queries; it's like giving a translator a dictionary for the topic at hand. One way to pull that schema information is sketched just after this list.
  • Less Training Needed: Because the AI already understands SQL, less specialized training (fine-tuning) is needed than teaching it hundreds of APIs.
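
One practical way to provide that guidance is to read the connector's schema metadata first and include it in the agent's context before it writes a query. The sketch below assumes the SQL interface exposes standard INFORMATION_SCHEMA views and Salesforce-style table names; the exact metadata catalog varies by product.

```sql
-- Sketch: discovering which tables and columns a connected system exposes,
-- so the schema can be handed to the LLM as context before it writes queries.
-- Assumes standard INFORMATION_SCHEMA views; table names are illustrative.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME IN ('Account', 'Opportunity')
ORDER BY TABLE_NAME, ORDINAL_POSITION;
```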

This allows agents to answer complex questions by pulling and combining live data from multiple sources, all triggered by a simple user request.
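
Whether two systems can be combined in a single statement depends on the deployment (it takes a federated query layer that exposes each connected source as its own schema), but conceptually a request like "show open invoice totals by customer" might reduce to a sketch like this one. The schema, table, and column names are illustrative.

```sql
-- Sketch: combining CRM and ERP data in a single federated query.
-- Assumes each connected system appears as its own schema ("Salesforce", "NetSuite");
-- all names, including the join key, are illustrative.
SELECT a.Name AS Customer,
       SUM(i.Amount) AS OpenInvoiceTotal
FROM Salesforce.Account a
JOIN NetSuite.Invoices i
  ON i.CustomerName = a.Name
WHERE i.Status = 'Open'
GROUP BY a.Name
ORDER BY OpenInvoiceTotal DESC;
```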

Taking action (write operations)

Agentic AI needs to do more than just read data; it needs to *act* – update records, create entries, delete information. CData's SQL approach provides a secure way to enable these actions:

  • SQL for Actions: CData translates SQL commands like `INSERT`, `UPDATE`, and `DELETE` into the corresponding API calls needed to change data in the source system (see the sketch after this list).
  • Security First: This is critical. When an agent uses CData via MCP, it does so using the *user's* permissions. The CData connector uses the user's credentials when talking to the target system. This means the agent can only do what the user is allowed to do. It inherits the user's security limits automatically.
  • Safer Actions: LLMs can generate SQL for these actions. Because SQL is structured, it's easier to build in checks and confirmations before executing commands that change data, compared to validating unpredictable API calls.
  • Clear Audit Trail: Every SQL command run through CData leaves a clear record, making it easy to track what the agent did.
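
To illustrate the first and third points, here is the kind of write an agent might propose and surface for confirmation before it runs. The object and field names (Account, Rating, Name) are illustrative; the point is that the statement gets translated into the target system's API call and executed only with the requesting user's permissions.

```sql
-- Sketch: a write the agent proposes and shows the user before executing.
-- The connector translates this UPDATE into the target system's API call, run
-- under the requesting user's own credentials, so it cannot exceed their permissions.
-- Object and field names are illustrative.
UPDATE Account
SET Rating = 'Hot'
WHERE Name = 'Acme Corporation';
```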

This avoids the major security risks of letting AI agents run with overly broad permissions or interact directly with APIs without strict user-based controls.

A simpler way to work: Speak the language you already know

This new architecture changes how AI agents work with data. Instead of wrestling with multiple software interfaces, you have a conversation with an AI agent. You state your goal. The agent reasons with its LLM, translates your goal into SQL queries, and uses CData and MCP to interact securely with the necessary systems. The underlying SQL engine does the heavy lifting, invisibly connecting your request to the enterprise data landscape.

Conclusion: Flip the stack – Build on SQL with CData and MCP

Agentic AI is pushing companies to rethink their software architecture. The old way, centered on separate applications, won't support agents that need to act across systems. The most effective path forward is to embrace SQL as the language AI uses to interact with business data.

CData provides the universal SQL interface, translating between AI and the complex world of APIs. MCP provides a secure communication channel. This combination leverages the AI's existing strengths, simplifies development, and builds in security from the start. It provides a practical, robust foundation for the next generation of AI agents that can truly act on your behalf.

Try CData MCP Servers Beta

As AI moves toward more contextual intelligence, CData MCP Servers can bridge the gap between your AI and business data.

Try the beta