Automated Continuous Databricks Replication to Apache Cassandra
Use CData Sync for automated, continuous, customizable Databricks replication to Apache Cassandra.
Always-on applications rely on automatic failover capabilities and real-time data access. CData Sync integrates live Databricks data into your Apache Cassandra instance, allowing you to consolidate all of your data into a single location for archiving, reporting, analytics, machine learning, artificial intelligence, and more.
About Databricks Data Integration
Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:
- Access all versions of Databricks, from Runtime Versions 9.1 - 13.X to both the Pro and Classic versions of Databricks SQL.
- Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
- Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
- Upload data to Databricks using the Databricks File System (DBFS), Azure Blob Storage, and AWS S3.
While many customers use CData's solutions to migrate data from different systems into their Databricks data lakehouse, several use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers use SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMSs.
Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.
Getting Started
Configure Cassandra as a Replication Destination
Using CData Sync, you can replicate Databricks data to Apache Cassandra. To add a replication destination, navigate to the Connections tab.
- Click Add Connection.
- Select Apache Cassandra as a destination.
Enter the necessary connection properties. CData Sync supports Basic authentication with login credentials and the additional authentication features of DataStax Enterprise (DSE) Cassandra. The following sections detail connection properties your authentication method may require.
Set AuthScheme to the value corresponding to the authenticator configured for your system. The authenticator is specified in the authenticator property of the cassandra.yaml file, which is typically found in /etc/dse/cassandra; on DSE Cassandra, authentication can also be managed through the DSE Unified Authenticator. The following sections detail connection properties your authentication method may require.
Basic Authentication
Basic authentication is supported through Cassandra's built-in default PasswordAuthenticator.
- Set the AuthScheme property to 'BASIC' and set the User and Password properties.
- In the cassandra.yaml file, set the authenticator property to 'PasswordAuthenticator'.
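For reference, the relevant cassandra.yaml entry looks like the following (a minimal sketch; the file path and surrounding settings vary by installation):

```yaml
# cassandra.yaml (typically /etc/dse/cassandra/cassandra.yaml on DSE installs)
# Enable Cassandra's built-in username/password authentication:
authenticator: PasswordAuthenticator
```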
Kerberos Authentication
Kerberos authentication is supported through DataStax Enterprise Unified Authentication.
- Set the AuthScheme property to 'KERBEROS' and set the User and Password properties.
- Set the KerberosKDC, KerberosRealm, and KerberosSPN properties.
- In the cassandra.yaml file, set the authenticator property to 'com.datastax.bdp.cassandra.auth.DseAuthenticator'.
- Modify the authentication_options section in the dse.yaml file, specifying the default_scheme and other_schemes properties as 'kerberos'.
- Modify the kerberos_options section in the dse.yaml file, specifying the keytab, service_principal, http_principal, and qop properties.
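Putting those steps together, the two configuration files end up along these lines (a sketch only; the keytab path, realm, and principals below are placeholders, not defaults):

```yaml
# cassandra.yaml
authenticator: com.datastax.bdp.cassandra.auth.DseAuthenticator

# dse.yaml
authentication_options:
  enabled: true
  default_scheme: kerberos
  other_schemes:
    - kerberos
kerberos_options:
  keytab: /etc/dse/dse.keytab              # example path
  service_principal: dse/_HOST@EXAMPLE.COM # example principal
  http_principal: HTTP/_HOST@EXAMPLE.COM   # example principal
  qop: auth
```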
LDAP Authentication
LDAP authentication is supported through DataStax Enterprise Unified Authentication.
- Set the AuthScheme property to 'LDAP' and set the User and Password properties.
- In the cassandra.yaml file, set the authenticator property to 'com.datastax.bdp.cassandra.auth.DseAuthenticator'.
- Modify the authentication_options section in the dse.yaml file, specifying the default_scheme and other_schemes properties as 'ldap'.
- Modify the ldap_options section in the dse.yaml file, specifying the server_host, server_port, search_dn, search_password, user_search_base, and user_search_filter properties.
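The corresponding dse.yaml sections might look like this (a sketch; the hostnames, DNs, password, and search filter are placeholders for your directory's values):

```yaml
# cassandra.yaml
authenticator: com.datastax.bdp.cassandra.auth.DseAuthenticator

# dse.yaml
authentication_options:
  enabled: true
  default_scheme: ldap
ldap_options:
  server_host: ldap.example.com                 # placeholder host
  server_port: 389
  search_dn: cn=admin,dc=example,dc=com         # placeholder DN
  search_password: example-password             # placeholder
  user_search_base: ou=users,dc=example,dc=com  # placeholder base
  user_search_filter: (uid={0})
```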
Using PKI
You can specify a client certificate to authenticate CData Sync using the SSLClientCert, SSLClientCertType, SSLClientCertSubject, and SSLClientCertPassword properties, as sketched below.
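A minimal sketch, assuming a PFX-format certificate (all values are placeholders, and PFXFILE is just one of the certificate types the property accepts):

```
SSLClientCert=/path/to/client-cert.pfx
SSLClientCertType=PFXFILE
SSLClientCertSubject=CN=sync-client
SSLClientCertPassword=example-password
```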
- Click Test Connection to ensure that the connection is configured properly.
- Click Save Changes.
Configure the Databricks Connection
To add a connection to your Databricks account, navigate to the Connections tab.
- Click Add Connection.
- Select a source (Databricks).
- Configure the connection properties.
To connect to a Databricks cluster, set the properties as described below.
Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options. A sketch of example values appears after these steps.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
- Click Connect to ensure that the connection is configured properly.
- Click Save Changes.
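For reference, a completed set of Databricks connection properties might look like the following (all values are placeholders in the shape of what the JDBC/ODBC tab reports for an Azure-hosted workspace):

```
Server=adb-1234567890123456.7.azuredatabricks.net
HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh
Token=dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```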
Configure Replication Queries
CData Sync enables you to control replication with a point-and-click interface and with SQL queries. For each replication you wish to configure, navigate to the Jobs tab and click Add Job. Select the Source and Destination for your replication.
Replicate Entire Tables
To replicate an entire table, click Add Tables in the Tables section, choose the table(s) you wish to replicate, and click Add Selected Tables.
Customize Your Replication
You can use the Columns and Query tabs of a task to customize your replication. The Columns tab allows you to specify which columns to replicate, rename the columns at the destination, and even perform operations on the source data before replicating. The Query tab allows you to add filters, grouping, and sorting to the replication.
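If you prefer to express the same customizations in SQL, a replication query along these lines applies a column projection and a filter (the table and column names here are hypothetical):

```sql
REPLICATE [Customers] SELECT [Id], [Name], [City] FROM [Customers] WHERE [City] = 'Raleigh'
```

Schedule Your Replication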
In the Schedule section, you can schedule a job to run automatically, configuring the job to run after specified intervals ranging from once every 10 minutes to once every month.
Once you have configured the replication job, click Save Changes. You can configure any number of jobs to manage the replication of your Databricks data to Apache Cassandra.