What is Context Engineering and Why Does Intelligent Data Connectivity Matter?

by Jerod Johnson | November 4, 2025

As more people adopt large language models (LLMs) and their related tools, one thing has become clear: context matters. We can trace the start to retrieval-augmented generation (RAG), which lets LLMs retrieve information from external knowledge bases (not just their own training datasets) before generating responses. More recently, Anthropic's release of the Model Context Protocol (MCP) has led to a flurry of activity among technophiles and business users alike. Now LLMs have a well-defined way to include live data from myriad external systems in their context.

While these advancements have driven innovation and positioned AI initiatives to change and inform the way we all do our day-to-day work, the cost of that context is becoming more visible. In fact, in some cases it can be more expensive to adopt AI coding agents than to hire human developers. This is where context engineering comes into play.

What is context engineering?

Andrej Karpathy (formerly of OpenAI & Tesla) gives a pretty concise definition in his post on X. He defines context engineering as "the delicate art and science of filling the context window with just the right information for the next step."

Philipp Schmid (AI Developer Experience, Google Deepmind) offers this definition in his blog post: "Context Engineering is the discipline of designing and building dynamic systems that provides the right information and tools, in the right format, at the right time, to give an LLM everything it needs to accomplish a task."

Context engineering differs from prompt engineering (or purposeful prompting, as we've called it), where the user carefully crafts their instructions to the LLM on the front end. Context engineering is about making sure the LLM gets exactly the information it needs, and no more, to answer a question or take action.

To an LLM, context is all the information it sees before it generates a response. That covers a lot, including system instructions, the user prompt, memory, tools, and often (thanks to MCP) raw or curated data from external systems.
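To make those moving parts concrete, here is a minimal sketch in TypeScript of the pieces that compete for a context window. The shape and field names are hypothetical illustrations, not any particular chat API:

```typescript
// Illustrative sketch only: the pieces an LLM "sees" before generating a
// response. Field names are hypothetical, not any particular chat API.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema describing the tool's parameters
}

interface LlmContext {
  systemInstructions: string; // standing rules and persona
  memory: string[];           // summaries of earlier turns or sessions
  tools: ToolDefinition[];    // callable tools (e.g., exposed via MCP)
  retrievedData: string[];    // raw or curated records from external systems
  userPrompt: string;         // the current request
}

// Every field competes for the same finite context window, which is why
// trimming each piece to "just the right information" matters.
const context: LlmContext = {
  systemInstructions: "You are a data analyst. Answer only from provided data.",
  memory: ["User previously asked about Q3 pipeline by region."],
  tools: [{ name: "runQuery", description: "Execute a SQL query", inputSchema: {} }],
  retrievedData: ["opportunity_id,amount,region\n1001,52000,EMEA"],
  userPrompt: "Which region had the largest average deal size last quarter?",
};
```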

Each of those pieces is worthy of its own blog post, but here we're going to talk about how CData's data connectivity implementation provides built-in context engineering for you through its unique toolset, semantic intelligence, and data processing capabilities.

CData's foundation in context engineering

Since CData's inception, our focus has been on optimizing connectivity to external data systems. We've already done the engineering work to make every available piece of data in enterprise systems accessible through standard SQL, and we've optimized performance by pushing complex data requests down to the server. Since CData Connect AI is built on top of this connectivity foundation, we have engineered context optimization into the platform rather than treating it as an afterthought.

Engineering the tool context

Connect AI's SQL-based access model means LLMs can explore your systems' data models, request data, and take action with only 8 tools, no matter how many business system connections you have configured. With only 8 tools to keep in its context, an LLM can reliably choose the right one, saving context and memory for actual data and reasoning.
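For illustration, a fixed, SQL-centric toolset in that spirit might look like the sketch below. These tool names are hypothetical stand-ins, not CData's actual API:

```typescript
// Hypothetical sketch of a fixed, SQL-centric toolset in the spirit of
// Connect AI's approach. Tool names are illustrative, not CData's actual API.
const sqlAccessTools = [
  { name: "listCatalogs",      description: "List connected data sources" },
  { name: "listSchemas",       description: "List schemas within a catalog" },
  { name: "listTables",        description: "List tables and views in a schema" },
  { name: "describeTable",     description: "Return columns, types, and keys for a table" },
  { name: "listProcedures",    description: "List available actions (stored procedures)" },
  { name: "describeProcedure", description: "Return a procedure's parameters" },
  { name: "runQuery",          description: "Execute a read-only SQL SELECT statement" },
  { name: "execProcedure",     description: "Invoke a procedure to take an action" },
];

// The same 8 tools cover every connected system, so the tool count stays
// constant as connections grow; per-object tools would grow multiplicatively.
```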

Most MCP servers define separate tools per object and add action-specific tools on top, so tool counts multiply across connected systems: ten systems exposing twenty objects with three actions each would mean 600 tools, classic "tool bloat." Research shows that LLMs start to hallucinate on tool selection once more than roughly 40 tools are in play – a limitation Connect AI avoids through its SQL-access model.

Engineering the semantic context

Connect AI provides semantic intelligence to LLMs by connecting to every part of your business system, including custom objects and fields. With this complete picture of your data, LLMs don't waste context searching for or guessing at relationships between objects. Instead, they allocate that context to analyze data and take action.

Most MCP server implementations only support standard objects and fields, forcing LLMs to consume context exploring the data model and building custom requests for unsupported elements. This exploration overhead leaves less context for the analysis and action we need from LLMs. With Connect AI, all that context is built into the data model, so LLMs can focus entirely on analysis and action.
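As a hedged illustration, suppose a Salesforce connection exposes a custom field called Region__c (the table and column names here are hypothetical examples). Because custom elements surface in the same relational model as standard ones, the LLM can write one direct query instead of spending tokens on discovery:

```typescript
// Hedged illustration: custom objects and fields (here, the hypothetical
// Region__c) appear in the same relational model as standard ones, so the
// LLM can write one direct query instead of exploring the schema first.
const pipelineByRegion = `
  SELECT a.Name, a.Region__c, SUM(o.Amount) AS TotalPipeline
  FROM Salesforce.Account a
  JOIN Salesforce.Opportunity o ON o.AccountId = a.Id
  WHERE o.StageName <> 'Closed Lost'
  GROUP BY a.Name, a.Region__c
`;
// Without that semantic layer, the model would first burn tokens on discovery
// calls (list objects, describe fields, probe relationships) before querying.
```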

Engineering the data context

CData has long focused on providing efficient access to data, and that continues with Connect AI. LLMs often ask complex questions – joining data across systems, filtering, and aggregating – to get answers. With Connect AI, all of those complexities are either pushed down to the business system or handled by Connect AI itself. The LLM gets pre-processed results, freeing its context for reasoning and action.
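Here is a sketch of what that pushdown means in practice, again with hypothetical table names: the join, filter, and aggregation run in Connect AI or the source systems, and only a handful of summary rows enter the LLM's context:

```typescript
// Illustrative sketch of pushdown, with hypothetical table names: the join,
// filter, and aggregation run in Connect AI or the source systems, not in
// the LLM's context.
const crossSystemSummary = `
  SELECT a.Industry, COUNT(t.Id) AS OpenTickets
  FROM Salesforce.Account a
  JOIN Zendesk.Tickets t ON t.OrganizationName = a.Name
  WHERE t.Status = 'open'
  GROUP BY a.Industry
`;
// The model receives only a few aggregated rows, for example:
//   Industry   | OpenTickets
//   Software   |          42
//   Healthcare |          17
// instead of thousands of raw ticket and account records.
```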

Other MCP server implementations return raw, unprocessed data to the LLM, which must then spend context processing that data before it can analyze and act. At best, this means inefficient token usage; at worst, hallucinations compound across processing, analysis, and action as the LLM navigates multiple decision points with limited context.

Optimize your AI initiatives with Connect AI

Strong context engineering practices separate efficient AI initiatives from expensive ones. With Connect AI's three-layer approach, you can be sure that your LLM has exactly the context it needs. By providing live connectivity to over 350 enterprise data sources and built-in context engineering through minimal tools, semantic intelligence, and optimized responses, Connect AI gives LLMs more context for analysis and action on your live business systems.

Try it for yourself with a free trial of Connect AI.
