CData Foundations Recap with Google and ServiceNow: Building AI on Trusted Data

by Andrew Petersen | October 2, 2025

At this year’s CData Foundations event, the keynote sessions made one point clear: enterprise AI is moving into its operational era and needs a solid data foundation.

AI hype abounds, but flashy demos run on empty promises. Meaningful business AI needs accessible, connected data infrastructure, built on context, governance, and trust.

Speakers from CData were joined by leaders from ServiceNow and Google to explore what it takes to bring AI into production.

Four themes emerged as the foundation:

  1. Trusted data

  2. Semantic consistency

  3. Strong governance

  4. Connected data as core AI architecture

Trusted, governed data is the bedrock of enterprise AI

CData founder and CEO Amit Sharma opened with a truth that set the tone for the event: AI can only scale on top of trusted, governed data.

The accessibility of AI doesn’t matter if the information it draws from is fractured or unreliable. CData connectors, available across its platform, are designed to provide precisely that kind of reliable access.

“That's a big question, especially today in this world where people are talking about, ‘I can probably build some sort of a connector with AI.’ And you can go and ask AI to write some code, and you might get some semblance of a connector. But the way we have built the connectors is very unique. There's deep engineering in the sense that we have broken up the components of a connector into its core foundational pieces.” — Amit Sharma, Founder/CEO, CData 


The semantic layer is the connective tissue across systems and users

Keynote speakers emphasized that without semantic consistency, enterprises are stuck reconciling metrics and definitions across systems. This inconsistency slows analytics and erodes trust.

Sisense, which embeds CData connectivity in its own analytics platform, highlighted the problem as a maturity gap. Daniel Schaefer explained:

“If you have a holistic data model, you can put an AI on top of it and let people chat with their data. But the AI lacks the context. Field names and data types aren't enough anymore. You have to be able to provide robust descriptions of, oh, this is my revenue column. Use it for this. You have to enrich the semantic level. These LLMs are kinda useless without context.” — Daniel Schaefer, Principal Solutions Engineer, Sisense 

That customer perspective underscores why semantics matter. By exposing the relationships and context that already exist in enterprise systems, CData enables a consistent view of data that both analysts and AI agents can trust.


Michael Docteroff of Argano offered a vision of what solving that gap looks like:

“What excites me most is that we’re finally, in my opinion, breaking down the barriers between data, analytics, and AI. Most of our clients have always used their analytics in a kind of a rear-view mirror, looking back to the last quarter or last year. Now data and machine learning and artificial intelligence are allowing clients of ours to look into the future to see what they could be doing from an inventory perspective.” — Michael Docteroff, Senior Director of Delivery for Data and Analytics, Argano


Together, these perspectives show the payoff: breaking down inconsistent definitions saves analysts time and sets the stage for AI to move from hindsight to foresight.

Governance, security, and observability are non-negotiable for AI agents

If the last year was about experimentation, the next is about accountability. Enterprises need agents that are safe, auditable, and compliant, not just clever. That means policies, guardrails, and observability must surround every AI action.

Without this foundation, even the most advanced models introduce risk instead of value. ServiceNow’s session paired that warning with a pragmatic push to get started.

“Don't wait for your data to be perfect. If you do that, your competitors will just go ahead. Start with data which is in a decent shape and have GenAI use cases on top of that. The process of enabling GenAI use cases will improve your data itself. It becomes a virtuous loop of improving your data quality.” — Sarab Narang, VP Generative AI, ServiceNow 

This was a particularly emphatic moment from the keynote stage: governance is not optional if AI is going to be trusted with business-critical processes. But neither should proper governance discourage or delay enterprise AI adoption.


Data connectivity is the infrastructure for scalable AI

AI doesn’t scale on model innovation alone. The infrastructure that determines whether agents succeed or stall is connectivity: the ability to reliably reach enterprise systems in real time. Without it, even the best frameworks are left working with stale or incomplete data.

As Google’s Philip Stephens explained:

“Data is absolutely the lifeblood of agents actually being helpful for your enterprise. And so, having the right connections, the right fidelity, the right security, the right compliance around your data is all critical.” — Philip Stephens, Senior Staff Software Engineer, Google 


This point reframes connectivity as a core pillar of AI infrastructure, not a back-office function. Model Context Protocol (MCP) provides the standard that lets agents access enterprise data reliably, and CData extends that standard with enterprise-grade connectors.
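To make the standardization concrete: MCP frames agent-to-server traffic as JSON-RPC 2.0 messages, with methods such as `tools/call` for invoking a server-side capability. The sketch below builds one such request; the tool name and arguments are invented for illustration and are not a CData API.

```python
import json


def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 'tools/call' request.

    MCP transports (stdio, streamable HTTP) carry messages in this
    general shape; the tool name and arguments here are hypothetical.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool,
            "arguments": arguments,
        },
    })


# Hypothetical example: an agent asking a connector-backed MCP server
# for last quarter's revenue rows.
msg = build_tool_call(1, "query_revenue", {"quarter": "2025-Q3"})
print(msg)
```

Because every tool exposes the same request envelope, an agent framework can discover and call connectors it has never seen before, which is the interoperability the keynote speakers were describing.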

Pragmatism matters here: enterprises don’t need to rebuild entire data estates before they can experiment. By connecting to data in place, they can start small, move quickly, and then scale as results prove out.

CData’s platform: connectors, semantic layer, and Connect AI

The keynote themes — trusted data, semantic consistency, governance, and connectivity — all point to the same need: a product foundation that makes them real. That’s what CData has built.

It started with connectors: purpose-built drivers that gave enterprises governed, high-performance access to SaaS and enterprise applications. Over time, those connectors have become the most reliable library in the market, now embedded by companies like Google, SAP, and Palantir.

From there, CData expanded into a full integration platform with governance, security, and observability built in. At its center is a universal semantic layer that tackles one of the hardest problems enterprises face.

Connect AI extends CData’s enterprise-grade connectivity to offer the first managed MCP platform. It exposes the metadata, relationships, and context of enterprise systems, giving AI frameworks a real understanding of the business without having to recreate it on top of a warehouse.

“We launched our MCP offering earlier this year. Many companies have tried to build MCP servers with code-gen and AI, but those connectors can’t stand the enterprise workload. The fact that we’ve built these over decades and hardened them over multiple use cases gives us the ability to build a product that is unique in the market.” — Amit Sharma, CEO, CData

Connectors, system-level semantics, and Connect AI now come together as the CData platform. This cohesion is a direct response to the challenges raised on the keynote stage and provides the data foundation that enterprises can rely on as they bring AI into production.

Enterprises don’t need to wait for the perfect data architecture before putting AI to work. The keynotes made clear that progress comes from connecting enterprise data now, even if systems are fragmented or still evolving.

CData makes that possible by working with data anywhere, giving teams a fast, practical way to start and the confidence to securely operationalize AI. 

More from CData Foundations 2025

Watch the keynotes and all Foundations 2025 sessions on-demand.

Watch now