Gartner Summit Recap Part 2: Why Data Integration Needs a Mindset Change

by Sue Raiber | June 10, 2025

Last month at the Gartner Data & Analytics Summit in London, I had the chance to immerse myself in several standout sessions. As a product marketer for a data integration solution, I deliberately chose sessions that explored the evolving landscape of data integration and management. These weren’t just high-level trend overviews; they offered concrete guidance and fresh perspectives on how to structure integration teams, modernize integration platforms, and align data product delivery more closely with business outcomes.

From best practices in building integration pipelines to rethinking data products and embedding generative AI into integration workflows, each session provided practical insights for anyone tasked with delivering timely, trusted data in complex environments.

Below are my key takeaways from three of the most impactful sessions I attended:

Michele Launi, Senior Principal Analyst, in "Best Practices and Technology Trends to Improve Your Data Integration Maturity," outlines how fusion teams, pipeline prototyping, and platform automation are essential for aligning integration efforts with business value.

Ehtisham Zaidi, VP Analyst, in "Data Products: How You Should Build, Manage and Sustain Them for D&A Success," redefines what makes a data product valuable — emphasizing that only curated, consumption-ready assets with clear ownership and measurable impact truly qualify.

Ramke Ramakrishnan, VP Analyst, in "Future of Data Management Using GenAI," explores how generative AI is becoming embedded in the data stack — enhancing metadata, observability, and automation while reshaping the very foundation of integration platforms.

Fusion teams over silos: Rethinking data integration

In his session, “Best Practices and Technology Trends to Improve Your Data Integration Maturity,” Michele Launi emphasized that successful data integration doesn’t start with technology — it starts with understanding your current maturity level and what outcomes you're aiming to achieve. His point was clear: without that clarity, architecture and tooling choices are premature.

Organizations need to move beyond isolated initiatives and instead foster fusion teams that bring together data engineers, platform experts, and business stakeholders. These teams don’t just collaborate — they share ownership of both the problems and the solutions, ensuring integration efforts align with measurable business goals.

Launi also stressed a value-first mindset: rather than building fully engineered pipelines from the outset, start small. Prototype, test quickly whether a pipeline delivers real value, and scale only if it does. If it doesn’t, move on. It’s a practical approach that supports faster outcomes and reduces wasted effort.

Why it stuck with me: As with so many topics in data, technology alone isn’t the solution; it’s the enabler. What really moves the needle is how people work together. Data integration only delivers real value when it’s owned collaboratively across roles, built iteratively, and tied directly to business outcomes. What resonated even more is how well data virtualization supports exactly this kind of rapid prototyping, enabling teams to validate value early without heavy lifting or delays. It’s incredibly efficient and a major time-saver, yet many teams still underestimate the power of that flexibility.
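
To make that prototype-first workflow tangible, here’s a minimal sketch in Python. It uses the standard library’s sqlite3 module as a stand-in for a virtualization layer; the source tables, the revenue_by_region view, and the materialization step are all hypothetical, but the pattern is the one Launi described: expose a virtual view first, let consumers test it, and only engineer it fully once it proves its value.

```python
import sqlite3

# Toy stand-ins for two source systems, loaded into one in-memory DB.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE crm_customers (id INTEGER, region TEXT)")
con.execute("CREATE TABLE erp_orders (customer_id INTEGER, amount REAL)")
con.executemany("INSERT INTO crm_customers VALUES (?, ?)",
                [(1, "EMEA"), (2, "APAC")])
con.executemany("INSERT INTO erp_orders VALUES (?, ?)",
                [(1, 120.0), (1, 80.0), (2, 40.0)])

# Step 1: prototype as a virtual view. No data is copied, no job scheduled.
con.execute("""
    CREATE VIEW revenue_by_region AS
    SELECT c.region, SUM(o.amount) AS revenue
    FROM crm_customers c JOIN erp_orders o ON o.customer_id = c.id
    GROUP BY c.region
""")

# Step 2: let consumers query the prototype and judge its business value.
print(con.execute("SELECT * FROM revenue_by_region").fetchall())

# Step 3: only once the view proves valuable, materialize it for production.
con.execute("CREATE TABLE revenue_by_region_mat AS "
            "SELECT * FROM revenue_by_region")
```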

Data products are not just datasets

Ehtisham Zaidi’s session on data products tackled one of the most misused concepts in data today. His core message? Not everything you deliver is a data product — and not every use case deserves one.

A true data product is:

  • Curated, consumption-ready, and reusable
  • Packaged with both technical and business metadata
  • Aligned to specific service level agreements (SLAs) and owned throughout its lifecycle
  • Measurable in terms of business value generated, not just data processed

Zaidi introduced a powerful classification framework: utility, enabler, and driver products. Utility products (like salary slip reports or regulatory compliance feeds) must exist. Enabler products help grow the business (like customer 360 views). Driver products aim to transform it (such as data monetization platforms).
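
One way to picture Zaidi’s criteria is as a concrete contract attached to every product. The sketch below models his checklist and classification as a Python dataclass; the field names and the customer_360 example are illustrative assumptions, not a Gartner or CData schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class ProductClass(Enum):
    UTILITY = "must exist (e.g. regulatory compliance feeds)"
    ENABLER = "grows the business (e.g. customer 360 views)"
    DRIVER = "transforms the business (e.g. data monetization)"

@dataclass
class DataProduct:
    name: str
    owner: str                    # accountable across the lifecycle
    classification: ProductClass
    sla: str                      # e.g. freshness / availability targets
    business_metadata: dict = field(default_factory=dict)
    technical_metadata: dict = field(default_factory=dict)
    value_metric: str = ""        # how generated business value is measured

customer_360 = DataProduct(
    name="customer_360",
    owner="sales-analytics-team",
    classification=ProductClass.ENABLER,
    sla="refreshed hourly, 99.5% availability",
    business_metadata={"domain": "sales", "purpose": "cross-sell analysis"},
    technical_metadata={"format": "parquet", "sources": ["crm", "erp"]},
    value_metric="incremental cross-sell revenue attributed to usage",
)
```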

Why it stuck with me: In a market saturated with buzzwords, this session cut through the noise and brought much-needed clarity to what a real data product is. It reinforced that moving data isn’t the goal — delivering value is. That shift in mindset demands ownership, usage metrics, and a clear return on investment (ROI) narrative. Without it, data integration risks becoming little more than operational overhead. The importance of a data marketplace — where curated, governed data products can be easily discovered and reused — resonates strongly with what we hear in conversations with customers and prospects, and it’s exactly why CData Virtuality invested in this capability.

Embedding generative AI in integration platforms: From buzzword to capability

Naturally, AI had its place. Ramke Ramakrishnan’s session explored how generative AI (GenAI) is transforming the broader data management stack, and in particular how it’s becoming an embedded, intelligent capability within data integration platforms.

One of the strongest insights: AI-readiness goes beyond data availability — metadata depth is also critical. GenAI thrives on context. That means catalogs, knowledge graphs, and lineage metadata matter more than ever. Integration platforms are increasingly offering semantic enrichment and control plane/execution plane separation, enabling sophisticated hybrid deployments.
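
As a loose illustration of why metadata depth matters, the hypothetical sketch below flattens a catalog entry, its lineage, and its SLA into plain-text context that could be handed to a GenAI assistant; the catalog structure and field names are invented for the example.

```python
# Hypothetical catalog entry: the richer the metadata, the more
# context a GenAI assistant has to reason with.
catalog_entry = {
    "table": "sales.orders",
    "description": "One row per confirmed customer order",
    "columns": {"order_id": "primary key", "amount": "net amount in EUR"},
    "lineage": ["erp.raw_orders -> sales.orders (nightly ELT job)"],
    "sla": "loaded by 06:00 UTC daily",
}

def build_context(entry: dict) -> str:
    """Flatten catalog, lineage, and SLA metadata into prompt context."""
    lines = [f"Table {entry['table']}: {entry['description']}"]
    lines += [f"- column {col}: {desc}" for col, desc in entry["columns"].items()]
    lines += [f"- lineage: {edge}" for edge in entry["lineage"]]
    lines.append(f"- SLA: {entry['sla']}")
    return "\n".join(lines)

print(build_context(catalog_entry))
```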

The session also covered financial operations (FinOps) as a rising requirement. As pipelines span cloud, on-premises, and multi-cloud environments, tracking cost versus value is becoming mission-critical. Advanced platforms are beginning to offer automated deployment optimization based on usage and business value, not just load.
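
Here’s a simplified, hypothetical illustration of that FinOps idea: ranking pipelines by cost per unit of business value rather than by load. The value_score field is an invented metric standing in for however an organization quantifies business value.

```python
# Hypothetical pipeline telemetry: monthly run cost and an invented
# business-value score (e.g. from usage and stakeholder ratings).
pipelines = [
    {"name": "crm_to_warehouse", "monthly_cost": 1200.0, "value_score": 90},
    {"name": "legacy_feed",      "monthly_cost":  800.0, "value_score": 10},
    {"name": "customer_360",     "monthly_cost": 1500.0, "value_score": 75},
]

# Rank pipelines by cost per unit of value, not by data volume alone.
for p in sorted(pipelines, key=lambda p: p["monthly_cost"] / p["value_score"]):
    ratio = p["monthly_cost"] / p["value_score"]
    print(f"{p['name']:<18} cost/value = {ratio:6.1f}")

# A FinOps-aware platform could use such a ranking to flag candidates
# for rescheduling, consolidation, or retirement.
```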

Why it stuck with me: It’s interesting to see how different experts interpret GenAI’s impact, and in data integration, it’s already transforming how platforms operate. From auto-suggesting pipeline improvements to enforcing data contracts and enhancing observability, GenAI is accelerating a shift toward intelligent, self-optimizing integration. It’s not just a layer on top — it’s becoming part of the core infrastructure.

Final thoughts

Across all three sessions, one underlying shift was impossible to miss: data integration is evolving from a purely technical concern into a question of mindset. It’s no longer just about building pipelines; it’s about aligning them with business goals, delivering measurable value, and adapting continuously as needs change.

What also stood out is how each perspective, in its own way, touched on the same challenge: how to better involve business users in integration efforts. Whether through fusion teams, product thinking, or intelligent platforms that surface data in more accessible ways, the direction is clear — integration must be collaborative by design, not just in theory.

From mindset to execution, integration strategies must now be outcome-driven, business-inclusive, and intelligent at the core.

CData Virtuality supports this shift

CData Virtuality supports exactly the kind of integration approach these sessions call for. Because it offers multiple integration styles, including data virtualization and ETL/ELT, in a single platform, teams can flexibly adapt to different use cases and business needs without juggling disparate tools. Teams can rapidly prototype pipelines using virtual data models, test business value early, and iterate quickly. Once validated, these pipelines can be automated and optimized for production, accelerating time-to-insight and reducing manual overhead. And with a built-in data marketplace, organizations can share, discover, and reuse governed data products, accelerating delivery, increasing transparency, and driving consistency across teams.

Explore CData Virtuality today

Take a free, interactive product tour to experience how CData Virtuality supports multiple integration styles, automation, data product delivery, and AI capabilities in a single platform.

Tour the product