Gartner Summit Recap Part 1: What AI Strategies Miss About Data

by Danielle Bingham | June 5, 2025


At this year’s Gartner Data and Analytics Summit in London, one message came through loud and clear: AI cannot deliver value unless and until your data is ready. And most organizations aren’t ready.

Across three standout sessions, Gartner analysts tackled the same underlying challenge from different angles: how to prepare your data and your teams to support real-world artificial intelligence (AI). The common threads were metadata, observability, and the operational glue that holds it all together:

Mark Beyer, Distinguished VP Analyst, explores what AI-ready data really means and why most data isn’t.

Sue Waite, Senior Director Analyst, describes the tools that get data ready for AI: metadata, quality, and observability.

Afraz Jaffri, Senior Director Analyst, explains the critical importance of team alignment, pipelines, and production velocity.

Together, these sessions offer a throughline: Success with AI requires more than the models themselves; it requires metadata, shared context, and getting your house in order before racing to production.

What does “AI-ready” actually mean?

Most organizations assume their data is AI-ready if it’s accurate, complete, and governed. But as Mark Beyer laid out in his session, AI-Ready Data, that assumption rarely holds up. AI readiness isn’t as simple as ticking static boxes in a list; it’s a fast-moving target, shaped by context, model requirements, and use case.

The real starting point is acknowledging that most enterprise data isn't built for AI. Operational systems are intentionally sparse, optimized for straightforward transactions, not insight. Even high-quality data might lack the attributes, structure, or scale needed for AI models to do their job. Clean data alone isn't enough; what matters is whether it fits the specific AI model and outcome you're aiming for.

Beyer frames the solution around three core principles:

Alignment: Data must match the needs of the use case and AI technique. What works for forecasting may not work for generative or classification tasks.

Continuous qualification: Data quality isn’t static. It needs to be tracked over time—using model-specific thresholds—to detect drift, gaps, or unintended shifts.

Contextual governance: Governance isn’t about universal rules; it’s about understanding who’s using the data, how, and in what environment.

He also introduces two frameworks to help teams think more clearly about AI readiness:

  • AI model cards spell out exactly what an AI model expects from the data: structure, distribution, minimum volume, and confidence tolerances.
  • The checksum model is a practical method for building readiness checks directly into pipelines, ensuring the data hitting your model still meets those expectations.
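
To make those two frameworks concrete, here is a minimal sketch of how a model card and an in-pipeline readiness check might look in code. The ModelCard fields and the readiness_check helper are illustrative assumptions, not Gartner's specification; the point is that the model's expectations become an explicit, machine-checkable contract.

```python
from dataclasses import dataclass, field

import pandas as pd


@dataclass
class ModelCard:
    """Hypothetical model card: what the model expects from its input data."""
    required_columns: list[str]    # expected structure
    min_rows: int                  # minimum volume
    max_null_fraction: float       # tolerance for missing values
    numeric_ranges: dict[str, tuple[float, float]] = field(default_factory=dict)  # expected distributions


def readiness_check(df: pd.DataFrame, card: ModelCard) -> list[str]:
    """Return the reasons the data fails the card; an empty list means 'ready enough'."""
    failures = []
    missing = [c for c in card.required_columns if c not in df.columns]
    if missing:
        failures.append(f"missing columns: {missing}")
    if len(df) < card.min_rows:
        failures.append(f"only {len(df)} rows, card requires {card.min_rows}")
    for col in card.required_columns:
        if col in df.columns and df[col].isna().mean() > card.max_null_fraction:
            failures.append(f"{col}: null rate exceeds {card.max_null_fraction:.0%}")
    for col, (lo, hi) in card.numeric_ranges.items():
        if col in df.columns and not df[col].dropna().between(lo, hi).all():
            failures.append(f"{col}: values outside expected range [{lo}, {hi}]")
    return failures
```

In a pipeline, a check like this would run just before training or inference, and a non-empty result would block the run. That gating step is essentially the "checksum" idea: the data hitting the model is verified against the card every time, not once at design time.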

AI builds up from details; it only knows what is available to it. That means even well-governed operational data may be of limited use. These systems are designed to capture inflection points, not the complete picture an AI model may need. Making your data architecture perfect does not mean it's automatically AI-ready.

Taken together, the message is clear: What matters most isn’t perfection. It’s intentionality and context. AI success starts with understanding your data’s context and constraints. You can’t assume it's ready just because it passed a traditional quality check.

Building the foundation: Metadata, quality, and visibility

According to Gartner’s Sue Waite, more than half of AI projects never make it to production, and nearly 40% fail due to data issues. The reasons are predictable but persistent: scattered data, poor quality, limited accessibility, and opaque governance. Waite’s session, Build a Foundation for BI and AI Success, focused on how metadata, quality, and observability tools can close these gaps—and make data truly usable for AI.

Adopting an AI strategy on top of scattered, mismanaged data costs more than dollars. Projects take longer, consume more staff hours, and leave compliance and governance requirements exposed to violations.

The risks aren’t theoretical. She pointed to real-world examples, including Citigroup (fined $136M in 2024 for repeated data governance failures) and Marriott (fined £18.4M for GDPR violations stemming from a years-old breach).

Challenges that undermine business intelligence (BI) and AI initiatives

Availability: Many organizations don’t even know how much data they have or where it lives.

Accessibility and reusability: It’s not enough to find the data—you have to be able to use it, move it, and combine it securely across systems.

Data quality: It’s not always about good versus bad. It’s about detecting changes and trends in your data. Static quality metrics aren’t enough.

Compliance: It's not enough to follow regulatory requirements; organizations must also prove they're doing so, with transparency and auditability.

Preparation and pipelines: Raw data needs extensive preprocessing, especially when dealing with unstructured sources like documents, images, or voice files.

Waite laid out a practical toolset to address these challenges:

  • Active metadata, instead of static catalogs, keeps usage information up to date and machine-actionable.
  • Data quality tools, such as profiling, standardization, anomaly detection, and remediation, are table stakes.
  • Unlike data quality tools, data observability platforms don’t modify data; they monitor it. This includes pipeline health, infrastructure usage, lineage, and near-real-time anomaly alerts.
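
To make that quality-versus-observability distinction concrete, here is a minimal, read-only sketch: the function below only inspects a table and reports anomalies against a baseline, leaving any remediation to separate quality tooling. The observe_table name and the thresholds are assumptions for illustration, not features of any specific platform.

```python
import pandas as pd


def observe_table(df: pd.DataFrame, baseline_rows: int, volume_tolerance: float = 0.3) -> list[str]:
    """Read-only checks: compare the current snapshot to a baseline and report anomalies."""
    alerts = []
    # Volume anomaly: row count moved sharply against the baseline.
    if baseline_rows and abs(len(df) - baseline_rows) / baseline_rows > volume_tolerance:
        alerts.append(f"row count changed by more than {volume_tolerance:.0%} vs. baseline")
    # Completeness drift: columns whose null rate crossed a simple threshold.
    for col in df.columns:
        null_rate = df[col].isna().mean()
        if null_rate > 0.2:
            alerts.append(f"{col}: {null_rate:.0%} nulls")
    return alerts  # the platform reports; it never rewrites the data
```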

These capabilities are reshaping the market. Waite highlighted how vendors now position their offerings as “data and analytics governance platforms” or “unified data management platforms,” reflecting how quality, metadata, and observability are becoming inseparable.

The lesson: Tools alone won’t fix messy data, but modernizing your foundation is the only way to scale AI responsibly.

Formula 1-style data engineering: Coordination, efficiency, and trust

Afraz Jaffri opened his session, Scale AI from Pilot to Production with AI Engineering, with an intriguing analogy drawn from Formula 1 racing. When a car comes in for a pit stop, a dozen or more people are already poised to do their jobs. They know exactly what's expected of them, and they execute their tasks with flawless efficiency. He believes that AI engineering should operate in the same way: coordinated, efficient, and built on trust. But for most organizations, getting from prototype to production is anything but.

Gartner data shows the average time to move an AI model into production is eight months. Only seven percent of organizations can do it in under three months. Jaffri explained that mindset shifts, along with tools, teams, and processes, can shrink that gap.

His definition of AI engineering is broad: it combines DataOps, MLOps, and DevOps, all unified by governance. But he stressed that success isn't defined by methodology; it's about breaking down silos. That, he observed, is largely a cultural issue: instead of collaborating, many teams are accustomed to doing what they need without interacting with other teams.

Insightful takeaways

Team structures matter: Cross-functional teams, supported by centralized platform teams, are becoming the default. But structure alone isn’t enough—you also need shared vocabulary and mutual respect across roles.

Build minimum viable models: Rather than waiting to polish a perfect prototype, teams should aim to move small, testable models through dev, test, and production in tight loops. This reduces handoffs and surfaces issues early.

Use the right agile methods: Kanban works well during exploratory development. Scrum fits the later-stage integration and deployment phase.

Think in pipelines: Pipelines modularize code and make bottlenecks easy to identify (see the sketch after these takeaways). Without them, scaling to hundreds or thousands of models becomes impossible.

Automate aggressively: From testing and integration to model registration and deployment, CI/CD is key. The goal is to detect drift, trigger retraining, and release updates with minimal manual effort.

Measure everything: Jaffri advocates for value stream mapping—breaking down how long each step in the AI lifecycle actually takes. Even small delays (waiting on data access, approvals, or resources) add up quickly.
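
To ground the pipeline and automation points, here is a minimal sketch of a modular pipeline with a drift gate that would hand off to a retraining job. Every name, threshold, and file path is a hypothetical illustration, not something from the session; the point it illustrates is that discrete, swappable steps make bottlenecks visible and give CI/CD something concrete to automate.

```python
from typing import Callable

import pandas as pd


def load(path: str) -> pd.DataFrame:
    """Extract step: read the latest batch (the path is illustrative)."""
    return pd.read_csv(path)


def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Gate step: fail fast on an empty extract instead of letting it flow downstream."""
    if df.empty:
        raise ValueError("empty extract")
    return df


def score(df: pd.DataFrame) -> pd.DataFrame:
    """Inference step: a constant stands in for a real model call."""
    out = df.copy()
    out["score"] = 0.5
    return out


def drift_gate(df: pd.DataFrame, baseline_mean: float, tolerance: float = 0.1) -> bool:
    """Return True when scored output drifts beyond tolerance, i.e. retraining should be triggered."""
    return abs(df["score"].mean() - baseline_mean) > tolerance


def run_pipeline(path: str, steps: list[Callable[[pd.DataFrame], pd.DataFrame]]) -> pd.DataFrame:
    """Each step is a separate, swappable unit, so slow or failing stages are easy to spot."""
    df = load(path)
    for step in steps:
        df = step(df)
    return df


if __name__ == "__main__":
    scored = run_pipeline("events.csv", [validate, score])
    if drift_gate(scored, baseline_mean=0.5):
        print("drift detected: trigger the retraining job")  # in practice, a CI/CD hook
```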

His closing advice? Run internal workshops. Review recent model deployments. Identify bottlenecks, track lead times, and reduce handoffs. Like a Formula 1 team, AI success comes from relentlessly optimizing the whole system—not just the parts.

Final thoughts

The overarching tone of these sessions should give organizations pause: Ignore the shiny solution-of-the-month selling point; focus on the practical realities of making AI work. Understand your data, align your teams, and build the infrastructure to get your data AI-ready. It’s not glamorous work, but it’s the kind that determines whether your AI strategy actually delivers.