The enterprise integration landscape is at an inflection point. For years, Integration Platform as a Service (iPaaS) has been the backbone of how companies connect their systems—and for good reason. But as AI agents become a serious part of the enterprise stack, a new question is emerging: Is the architecture we built for human-driven workflows the right foundation for AI-driven ones?
The honest answer is that no one has fully figured this out yet. But there are strong reasons to believe that AI's needs are fundamentally different from the ones iPaaS was designed to serve—and that a concept we call universal connectivity may be better suited to the way agents actually reason.
iPaaS: The right tool for what it was built to do
Let's be clear about what iPaaS does well. It was designed to solve two persistent headaches in enterprise IT: taming the chaos of system integration and simplifying complex programming. It wraps intricate business functions (like "Create a New Customer") in clean, easy-to-use interfaces, handling authentication, data mapping, and resilient workflows behind the scenes. The result is repeatable, consistent, and deterministic. For the problems it was built to solve, iPaaS remains a powerful and important framework.
But AI agents aren't just running pre-defined workflows. They're reasoning, exploring, and synthesizing information across systems in ways that are difficult to anticipate at design time. That's a meaningfully different job, and it's worth asking whether it calls for a meaningfully different architecture.
The tension: Structured tools vs. open exploration
The prevailing assumption is that the path to AI-powered applications runs through more connectors, more workflows, and more iPaaS-exposed tools. Build enough of them, the thinking goes, and your AI will have everything it needs.
There's logic to that view, but there's also a tension worth examining. Every new iPaaS endpoint ("Get Accounts," "Get Opportunities," "Search Problem Tickets") is a curated, human-defined path. Each one reflects a developer's assumptions about what questions will be asked and what data will be needed. For deterministic automation, that's a feature. For a reasoning AI agent trying to synthesize a novel answer, it can become a constraint.
Think of it this way: if you were onboarding a sharp new data analyst to assess the health of a key account, would you hand them a set of pre-built reports? Or would you grant them governed access to the underlying data and allow them to explore? Most leaders would choose the latter, not because the reports are bad, but because the analyst's value comes from asking questions no one anticipated.
The same principle applies to AI. An agent that asks, "What's the health of this account?" first needs a foundation—a business definition of what "healthy" looks like, encoded through semantic context or prompt guidance. But once that foundation is in place, the agent shouldn't be limited to the metrics someone pre-selected. It should be able to discover that Salesforce opportunities are losing momentum, correlate this with a spike in ServiceNow tickets, cross-reference payment patterns in NetSuite, and check product usage in an internal database—all in real time, following its own reasoning. The value isn't that the AI magically knows what matters; it's that, given the right orientation, it can explore far more broadly and rapidly than any pre-built workflow would allow.
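To make that concrete, here is a minimal sketch of the kind of ad-hoc, cross-system query an agent might compose while reasoning about account health. An in-memory SQLite database stands in for a live relational interface over the source systems, and the table and column names (`sf_opportunities`, `sn_tickets`, `ns_payments`) are hypothetical, not any actual product schema:

```python
import sqlite3

# In-memory SQLite stands in for a live relational interface over the
# source systems; in practice the agent would query them through MCP/SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical projections of three source systems
    CREATE TABLE sf_opportunities (account TEXT, stage TEXT, days_stalled INT);
    CREATE TABLE sn_tickets       (account TEXT, opened_last_30d INT);
    CREATE TABLE ns_payments      (account TEXT, days_overdue INT);

    INSERT INTO sf_opportunities VALUES ('Acme', 'Negotiation', 45);
    INSERT INTO sn_tickets       VALUES ('Acme', 12);
    INSERT INTO ns_payments      VALUES ('Acme', 30);
""")

# One ad-hoc query the agent might compose while following its own
# reasoning: stalled deals + a ticket spike + late payments.
row = conn.execute("""
    SELECT o.account, o.days_stalled, t.opened_last_30d, p.days_overdue
    FROM sf_opportunities o
    JOIN sn_tickets  t ON t.account = o.account
    JOIN ns_payments p ON p.account = o.account
    WHERE o.days_stalled > 30 AND t.opened_last_30d > 10
""").fetchone()

print(row)  # ('Acme', 45, 12, 30)
```

The point isn't the specific query; it's that no one had to anticipate this particular combination of systems and thresholds at design time.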
But exploration is only half the requirement. AI agents don't just read data. They update records, trigger workflows, and write results back to source systems. Those write operations need to be deterministic and reliable every time. Effective AI tooling represents a contract between non-deterministic reasoning and deterministic execution. The architecture that serves AI agents must support both universal read access for exploration and analysis and deterministic write-back for action. iPaaS solves the second problem well. The question is whether it can also solve the first without requiring a developer to anticipate every query in advance.
Universal connectivity: A different mental model
This is the idea behind universal connectivity. Rather than channeling AI through a fixed set of prebuilt tools, it gives agents a standardized relational interface, via MCP and SQL, over live data drawn directly from APIs and systems rather than from database copies. This lets agents explore, understand, and act on data directly. Three principles distinguish it from the traditional iPaaS model:
Dynamic schema discovery. Instead of workflows built on static assumptions about a system's structure, universal connectivity enables AI agents to explore the structure of data sources in real time. When source systems change, the agent adapts, rather than breaking.
Consistent semantic context. Business terms are translated once and made consistent across all sources. Rather than requiring humans to manually map semantics for each tool, the AI uses a clean, shared data language from the outset.
Structured execution, open exploration. Instead of dozens of hand-built interfaces that must be mapped and maintained individually, the AI uses a standardized interface to describe what it needs. A deterministic engine interprets that request—executing filters, joins, and transformations at the source, not in the model. It's how a skilled analyst would query systems directly instead of stitching together exports in a spreadsheet. When the agent needs to act on its findings, the same platform supports native, bidirectional write-back. Exploration and execution are unified within a single architecture.
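The first and third principles can be sketched together: the agent discovers structure at query time and builds its request from what it finds, rather than relying on a hard-coded mapping. This is a hedged illustration only; SQLite's catalog and `PRAGMA table_info` stand in for driver-level metadata discovery, and the `tickets` table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, priority TEXT, opened_at TEXT)")

# Step 1: discover structure at query time instead of assuming it.
# SQLite's catalog stands in for driver-level schema discovery.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
columns = [r[1] for r in conn.execute("PRAGMA table_info(tickets)")]

# Step 2: build the query from what was discovered, so an added or
# renamed column changes the result rather than breaking a mapping.
query = f"SELECT {', '.join(columns)} FROM tickets WHERE priority = ?"

print(tables)   # ['tickets']
print(columns)  # ['id', 'priority', 'opened_at']
```

A deterministic engine then executes the composed query (filters, joins, transformations) at the source, while the agent's reasoning stays free to ask something different next time.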
To be clear, joining data across systems into a unified view isn't new—iPaaS platforms do this well. The difference is what happens next. In universal connectivity, agents combine full data exploration with what we call derived views—saved SQL queries that act as semantic guideposts. A derived view might present customer, order, and shipping data as a unified table with clear business terms like "Complete_Order_Details." Rather than being the endpoint—a static report the agent returns—derived views are a starting point. The agent uses them to orient itself, then follows its own reasoning deeper into the underlying sources. It's the difference between handing someone an answer and giving them a map.
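A derived view is, at bottom, a saved SQL query. The sketch below shows the idea using SQLite as a stand-in; the table layout and the `Complete_Order_Details` contents are illustrative assumptions, not an actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INT, name TEXT);
    CREATE TABLE orders    (order_id INT, customer_id INT, total REAL);
    CREATE TABLE shipping  (order_id INT, status TEXT);

    INSERT INTO customers VALUES (1, 'Acme');
    INSERT INTO orders    VALUES (100, 1, 250.0);
    INSERT INTO shipping  VALUES (100, 'delivered');

    -- The derived view: a saved query with business-friendly names.
    CREATE VIEW Complete_Order_Details AS
    SELECT c.name AS customer, o.order_id, o.total, s.status AS shipping_status
    FROM customers c
    JOIN orders   o ON o.customer_id = c.customer_id
    JOIN shipping s ON s.order_id    = o.order_id;
""")

# The agent can orient itself with the view...
summary = conn.execute("SELECT * FROM Complete_Order_Details").fetchall()

# ...and still drop down to the underlying tables when its reasoning
# needs something the view's author did not anticipate.
detail = conn.execute(
    "SELECT status FROM shipping WHERE order_id = 100").fetchone()

print(summary)  # [('Acme', 100, 250.0, 'delivered')]
print(detail)   # ('delivered',)
```

Because the view is a query rather than a materialized report, it orients the agent without fencing it in: the underlying tables remain one query away.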
Flexibility isn't the absence of guardrails
It's worth addressing an obvious concern: giving AI agents broad data access doesn't mean giving them free rein. Universal connectivity isn't a philosophy of "open everything and hope for the best." Agents still need boundaries: governance policies, access controls, rate limits, and clear rules about what they can read versus what they can modify.
The difference is in how those guardrails are applied. In a traditional iPaaS model, constraints are embedded in the design of each workflow. The guardrails and the logic are inseparable, which means every new use case requires building new constrained paths from scratch. In a universal connectivity model, guardrails are applied at the platform level, governing which data sources an agent can access, which operations it's permitted to perform, and the audit trail it leaves. The reasoning within those boundaries remains flexible.
Think of it as the difference between building a separate fenced path for every possible destination versus establishing a well-governed territory where agents can move freely. You're still defining limits. You're just doing it in a way that doesn't require you to anticipate every question the AI might ask.
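A platform-level guardrail can be as simple as a single policy table consulted before every operation, rather than constraints baked into each workflow. This is a minimal, hypothetical sketch (source names and the policy shape are invented for illustration):

```python
# Hypothetical platform-level policy: guardrails live in one place and
# govern every agent query, instead of being baked into each workflow.
POLICY = {
    "salesforce":  {"read": True,  "write": True},
    "servicenow":  {"read": True,  "write": False},  # read-only for agents
    "internal_db": {"read": False, "write": False},  # off limits entirely
}

def check(source: str, operation: str) -> bool:
    """Return True if the agent may perform `operation` on `source`."""
    # Unknown sources and unknown operations default to deny.
    return POLICY.get(source, {}).get(operation, False)

assert check("salesforce", "write") is True
assert check("servicenow", "write") is False
assert check("internal_db", "read") is False
assert check("unknown_source", "read") is False  # default deny
```

Adding a new agent or a new question requires no new code here; only granting access to a new source does.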
A practical comparison
Neither approach is universally "right." The best choice depends on the use case, and the differences are worth understanding.
| Dimension | iPaaS | Universal connectivity |
| --- | --- | --- |
| Core design model | Manually defined workflows with triggers and actions | Dynamic AI interactions with live data access |
| Strongest use cases | Repetitive automation, form-based flows, ETL syncs | Embedded copilots, agents, and broad-context AI reasoning |
| Data access pattern | Pre-wired flows with schema mapping and batch processing | Live, query-time access to any connected source |
| When schemas change | Flows often require manual updates | Drivers handle schema evolution and versioning |
| Adding new agents | Typically requires building new flows | New agents can leverage existing connections without rebuilding |
| Deterministic write-back | Core strength: pre-defined actions are reliable and repeatable | Native bidirectional write-back executed after open-ended exploration |
What this means in practice
For organizations exploring AI agents, universal connectivity offers several tangible advantages—not as a replacement for iPaaS in every scenario, but as a purpose-built layer for AI workloads:
Faster time-to-value. When agents can explore data directly, you're not waiting on manual tool design and workflow construction to ship AI features.
Less accumulated complexity. Fewer hard-coded, static workflows to maintain means more leverage from AI and less tech debt over time. As one CTO put it: "The real value of universal connectivity is that you become future proof. It's native to how AI works, and easier to maintain, audit, and debug."
Auditability. Users and admins get a full view of what the AI did and how it reasoned—a governance requirement that enterprise IT increasingly demands.
Predictable economics. Where iPaaS pricing is often task-based, universal connectivity centers on the connection itself. As one head of development described it: "Universal connectivity pricing puts the focus on the connection, so we have a clear proxy for value. It's predictable, based on how AI works, so I can go to my CFO with a clear structure at the beginning of the year."
The question worth asking
The AI integration layer is still being defined. The industry is early in understanding what agents truly need, and the answer will almost certainly involve a mix of approaches. But the question every organization should be asking isn't just whether its integration platform can build workflows. It's twofold: Does our integration architecture enable AI to perform the kind of open-ended reasoning that makes it valuable in the first place? And does it provide the deterministic write-back we need for execution?
If the answer is no—if your agents are constrained to static paths, unable to explore beyond what a developer anticipated—it's worth considering whether universal connectivity belongs in your stack. Not instead of everything you've built, but alongside it, for the workloads where AI needs room to think.
Learn more about CData Connect AI Embed and its universal connectivity model.