CData Sync 26.2 is our most feature-rich release in recent memory. This version delivers Git-based version control for pipeline configurations, Python scripting in Events, UI-configurable parallel reads for high-volume databases, a fresh visual redesign, a guided application database migration wizard, and a wide range of improvements to Change Data Capture. Here is a full look at what is new and why it matters.
Manage pipeline configurations like code
For teams running CData Sync in production, keeping configuration changes under control has always been a manual exercise: export the config, document the change, and hope you remember what was different. Sync 26.2 changes that with native Git version control built directly into the product.
Once a workspace is connected to a Git repository, every change to jobs, connections, and pipeline settings becomes a versioned artifact. You can commit changes with a message, pull updates from your team, discard local edits, and undo historical commits, all from within the Sync UI, with no command-line knowledge required.
How it works
Connect a workspace to any Git provider (GitHub, GitLab, Azure DevOps, Bitbucket) using your choice of authentication: OAuth, SSH, HTTP, or local Git. From there, the workflow feels familiar. Changed files appear in a commit panel with a visual diff. You write a message, push to your remote branch, and the change is tracked. To pull in updates from a teammate, browse the available commits by author and message, and apply with a single click.
Need to undo something? Select any commit from the history and restore every affected file to its prior state. Discard local edits just as easily. Each workspace gets its own encryption key to protect sensitive values — created automatically for new repositories or imported when connecting to a branch that already contains Sync settings.
Why it matters
Version control turns pipeline management into an engineering discipline. Teams in regulated environments get a full audit trail of who changed what and when. DevOps-oriented teams can implement promotion workflows: develop in a feature branch, review, and merge to production. And anyone who has ever broken a pipeline by editing the wrong field can roll it back in two clicks instead of rebuilding from memory.
Parallel Partitioned Reads
Replicating a table with hundreds of millions of rows from a transactional database is one of the most common performance challenges in data integration. Sync 26.2 introduces Parallel Partitioned Reads, a new capability that lets you split a large source table into partitions that are read simultaneously across multiple threads, dramatically reducing replication time for high-volume workloads.
The setup is straightforward: choose a partition key (any date or integer column), set the partition size and maximum thread count, and Sync handles the rest. Each partition is read independently, and results are written to the destination as they arrive. For a 500M-row table partitioned into five chunks with five threads, you can expect replication time to drop by up to 5x compared to a single-threaded read.
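The mechanics of that setup can be sketched in a few lines of Python. This is an illustration of the partitioning strategy, not Sync's internal implementation: the in-memory table, the partition bounds, and the thread pool are all stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a source table keyed by an integer partition column.
SOURCE = {i: f"row-{i}" for i in range(1, 101)}

def read_partition(lo, hi):
    """Read one partition [lo, hi) independently, as a worker thread would."""
    return [SOURCE[k] for k in range(lo, hi) if k in SOURCE]

def parallel_read(min_key, max_key, partition_size, max_threads):
    # Split the key range into fixed-size partitions...
    bounds = [(lo, min(lo + partition_size, max_key + 1))
              for lo in range(min_key, max_key + 1, partition_size)]
    # ...and read them concurrently; results are handed to the destination
    # writer as each partition arrives.
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        chunks = pool.map(read_partition, *zip(*bounds))
    return [row for chunk in chunks for row in chunk]

rows = parallel_read(min_key=1, max_key=100, partition_size=20, max_threads=5)
print(len(rows))  # 100
```

The same shape applies at any scale: the partition key just needs to divide the table into non-overlapping ranges that each thread can scan independently.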
Supported sources
Parallel Partitioned Reads is available for SQL Server, Oracle, PostgreSQL, DB2, DB2 i, Informix, MySQL, and MariaDB. Configuration is per task, so you can enable it selectively on the tables that need it without affecting the rest of a job.
ClickHouse destination
CData Sync 26.2 adds ClickHouse as a replication destination. ClickHouse is a columnar, open-source OLAP database built for high-speed analytical queries on large datasets, and it's become a go-to choice for teams building real-time analytics, observability pipelines, and event-driven data platforms.
With this addition, you can replicate data from any Sync source directly into ClickHouse using Sync's standard replication modes, including full load, incremental, and Change Data Capture. ClickHouse's columnar storage and compression make it exceptionally fast for aggregation queries, and pairing that with Sync's reliable, automated replication means you can keep your ClickHouse tables continuously fresh from transactional sources.
What this enables
Real-time analytics pipelines: replicate from SQL Server, Oracle, Salesforce, or any other Sync source into ClickHouse for sub-second query performance on operational data.
Observability and event data: stream application events or log data through Sync into ClickHouse for high-throughput ingestion and fast analytical access.
Cost-effective data warehousing: ClickHouse's compression ratios and columnar format significantly reduce storage footprint compared to row-based alternatives.
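The storage-footprint point is easy to see in miniature. The toy sketch below has nothing to do with ClickHouse's actual codecs; it just shows why a columnar layout, which keeps all values of one field contiguous, compresses so well for low-cardinality columns such as a log level.

```python
import zlib

# Toy illustration, not ClickHouse internals: one column of 100k log-level
# values stored contiguously becomes a long repetitive run, which a
# general-purpose compressor shrinks dramatically.
level_column = b",".join([b"INFO"] * 100_000)
compressed = zlib.compress(level_column)

ratio = len(level_column) / len(compressed)
print(f"{len(level_column):,} bytes -> {len(compressed):,} bytes ({ratio:.0f}x)")
```

Row-oriented storage interleaves every field of every record, so the compressor never sees runs this clean; that difference is where much of the footprint reduction comes from.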
Python is now a first-class citizen in Sync Events
Sync Events let you run custom logic before or after a job or task, such as notifications, validations, or downstream triggers. Until now, that logic had to be written in JavaScript or shell script. In Sync 26.2, Python joins as a fully supported option.
For most data engineering teams, this is the natural choice. Python is the dominant language in the data ecosystem, and removing the requirement to context-switch into JavaScript opens event automation to a much broader group of engineers.
What you can do with Python Events
Call a downstream REST API after replication completes to trigger a dbt run, an Airflow DAG, or any other process that depends on fresh data.
Run a data quality check before replication starts, such as validating row counts, checking for nulls, or comparing checksums, and block the job if the source is not ready.
Send a structured notification to Slack or Teams with job metrics: row counts, runtime, and any warnings surfaced during execution.
Write enriched job metadata to an external observability platform like Datadog, Splunk, or OpenTelemetry.
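A pre-job validation and a post-job notification like the ones above might look like the following sketch. How Sync injects job context into Python event scripts is product-specific, so treat the function signatures and variable names here as hypothetical stand-ins rather than real Sync APIs.

```python
# Hypothetical event-script logic. `source_row_count`, `expected_min_rows`,
# and the summary fields are illustrative assumptions, not Sync variables.

def pre_job_check(source_row_count: int, expected_min_rows: int) -> None:
    """Block the job by raising if the source looks incomplete."""
    if source_row_count < expected_min_rows:
        raise RuntimeError(
            f"Source not ready: {source_row_count} rows, "
            f"expected at least {expected_min_rows}"
        )

def job_summary(job: str, rows: int, runtime_s: float, warnings: list) -> dict:
    """Structured payload for a Slack/Teams webhook after the job finishes."""
    return {
        "text": f"Job '{job}' replicated {rows:,} rows in {runtime_s:.1f}s",
        "warnings": warnings,
    }

pre_job_check(source_row_count=250_000, expected_min_rows=1)  # passes silently
print(job_summary("nightly-orders", 250_000, 42.5, [])["text"])
```

The raised exception is what stops the job in the validation case; the returned dict is what you would post to a webhook in the notification case.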
A refreshed UI and smarter workflows
Every major page in CData Sync has been updated with our new branding — Sign In, Dashboard, Pipelines, Jobs, Connections, Transformations, Logs, Settings, and User Details. The refresh is cleaner and more consistent, and it sets the foundation for continued UI investment in future releases.
Alongside the visual update, several workflows that customers found confusing or time-consuming have been rethought.
Re-sync tasks
You can now re-sync one or more tasks directly from the job view without touching job settings. Choose where to start (from the beginning, from a specific date or value, or from now) and choose what to do with existing destination data: merge, truncate, or drop.
Smarter table search
When filtering tables in the Add Tasks modal, schemas that contain no matching results are now automatically hidden. For sources with dozens or hundreds of schemas, common in enterprise Oracle and SQL Server environments, this makes finding the right table dramatically faster.
Simplified proxy configuration
Connections now inherit the global Sync proxy settings by default. A simple toggle on each connection lets you override if needed, but for the vast majority of deployments, proxy configuration is now a one-time global setting rather than something you have to repeat on every connection.
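The resolution logic amounts to a simple inherit-with-override pattern. The field names below are illustrative, not Sync's actual configuration keys.

```python
# Inherit-with-override sketch; "override_proxy" and "proxy" are hypothetical
# names standing in for Sync's real per-connection settings.

GLOBAL_PROXY = {"host": "proxy.internal", "port": 8080}

def effective_proxy(connection: dict):
    """A connection uses the global proxy unless it explicitly opts out."""
    if connection.get("override_proxy"):
        return connection.get("proxy")  # per-connection setting wins
    return GLOBAL_PROXY                 # inherited one-time global default

print(effective_proxy({"name": "salesforce"}))  # inherits the global setting
```

The benefit is operational: changing the global proxy once updates every connection that has not opted out.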
Migrate your application database without the manual steps
CData Sync stores its own application data (jobs, connections, history, settings) in a backend database. The default for new installations is H2, which is suitable for getting started but is not recommended for production at scale. Migrating to SQL Server, MySQL, or PostgreSQL used to require documentation, manual file editing, and a certain amount of nerve.
Sync 26.2 introduces a guided migration wizard that handles the entire process through the UI. Choose your target database, enter connection details, and run a connection test. The wizard validates that no jobs are running and no CDC engines are active, then migrates your data table by table with a real-time progress display. On completion, the new connection string is encrypted for you, displayed with a one-click copy button, and step-by-step switchover instructions are provided inline.
The original Derby/H2 database is preserved throughout, the wizard is fully non-destructive, and if anything goes wrong, an automatic rollback restores the pre-migration state. You can also run it multiple times, which is useful for testing different target databases before committing.
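The all-or-nothing shape of that process can be sketched with SQLite standing in for both databases. This is a simplified illustration of a table-by-table copy with automatic rollback, not the wizard's actual code.

```python
import sqlite3

def migrate(source, target, tables):
    """Copy each table into the target; any failure rolls the target back."""
    target.execute("BEGIN")
    try:
        for name in tables:
            cur = source.execute(f'SELECT * FROM "{name}"')
            cols = [d[0] for d in cur.description]
            target.execute(f'CREATE TABLE "{name}" ({", ".join(cols)})')
            target.executemany(
                f'INSERT INTO "{name}" VALUES ({", ".join("?" * len(cols))})',
                cur.fetchall(),
            )
        target.execute("COMMIT")    # switch over only once every table copied
    except sqlite3.Error:
        target.execute("ROLLBACK")  # restore the pre-migration state
        raise

# Demo: the source database stays untouched throughout.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE jobs (id, name)")
src.executemany("INSERT INTO jobs VALUES (?, ?)", [(1, "nightly"), (2, "cdc")])

dst = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
migrate(src, dst, ["jobs"])
print(dst.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])  # 2
```

Because the copy runs inside one transaction, a failure partway through leaves the target exactly as it was, which is the same property that makes re-running the wizard against different targets safe.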
Alongside the wizard, connection pooling is now enabled by default for CData AppDB connections. This is particularly important for teams using PostgreSQL as their AppDB, where the absence of pooling was causing meaningful throughput degradation.
Change Data Capture: deeper Oracle support and better tooling
Change Data Capture received focused investment in 26.2, particularly for Oracle and DB2 i, along with several improvements to the CDC engine management experience.
Oracle — ROWID, NUMBER types, and Temp Tables
Oracle CDC jobs can now replicate the ROWID column, and ROWID can serve as a primary key substitute for tables that have no defined primary key. This unblocks a common scenario in legacy Oracle schemas where tables were built without primary keys and replication was not previously possible without schema changes.
Oracle Native CDC also gains improved handling for NUMBER datatypes, with better precision and scale defaults that reduce type-mismatch errors when replicating numeric data to non-Oracle destinations.
On the destination side, Oracle replication jobs now use Private Temporary Tables (on Oracle 18c and above) or Global Temporary Tables on older versions when a merge-based replication strategy is in use. Temporary tables do not write to Oracle redo logs, which means significantly less write amplification for customers whose Oracle infrastructure was being stressed by replication activity.
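The merge-via-temporary-table pattern described above looks roughly like the generated SQL below. The exact statements Sync emits are internal to the product, so this Python sketch only shows the shape of the technique; one detail that is Oracle-documented is that private temporary table names must use the ORA$PTT_ prefix by default.

```python
# Illustrative only: stage incoming rows in a private temporary table (which
# skips redo logging), then MERGE them into the target. Not Sync's real SQL.

def build_merge_statements(table, key, cols):
    temp = f"ORA$PTT_{table}_STG"  # default required prefix for private temp tables
    updates = ", ".join(f"t.{c} = s.{c}" for c in cols if c != key)
    return [
        f"CREATE PRIVATE TEMPORARY TABLE {temp} AS "
        f"SELECT * FROM {table} WHERE 1 = 0",
        f"MERGE INTO {table} t USING {temp} s ON (t.{key} = s.{key}) "
        f"WHEN MATCHED THEN UPDATE SET {updates} "
        f"WHEN NOT MATCHED THEN INSERT ({', '.join(cols)}) "
        f"VALUES ({', '.join('s.' + c for c in cols)})",
    ]

for stmt in build_merge_statements("ORDERS", "ID", ["ID", "STATUS", "TOTAL"]):
    print(stmt)
```

On versions before 18c, the same flow applies with a Global Temporary Table in place of the private one.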
DB2 i — Journal selection at job creation
Creating a DB2 i CDC job now requires selecting a Journal and Journal Receiver before you can proceed to table selection. This is a deliberate guardrail: the table list is filtered to only show tables associated with the selected journal, preventing the class of configuration errors where jobs were set up with incompatible tables and only failed at runtime. Existing DB2 i CDC jobs are fully unaffected.
CDC Engine improvements
CDC engine settings now have their own dedicated panel, clearly separated from base job settings. Certain properties that should not change while the engine is running are rendered read-only, reducing the risk of inadvertent changes.
The CDC Engine can now be reset directly from the UI. Resetting clears the stage folder, stage tables, and the offset file, giving the engine a clean restart state when it has stalled or fallen out of sync.
CData Sync 26.2 is available now. For upgrade instructions and full documentation, visit the CData Sync documentation portal or speak with your account representative.
Replicate faster. Integrate smarter.
Whether you're syncing to a data warehouse, a cloud app, or a local database, CData Sync keeps your data flowing in real time — with the reliability your business depends on.
Get the trial