Automated Continuous Databricks Replication to Amazon S3

Use CData Sync for automated, continuous, customizable Databricks replication to Amazon S3.

Always-on applications rely on automatic failover capabilities and real-time data access. CData Sync integrates live Databricks data into your Amazon S3 instance, allowing you to consolidate all of your data into a single location for archiving, reporting, analytics, machine learning, artificial intelligence, and more.

Configure Amazon S3 as a Replication Destination

Using CData Sync, you can replicate Databricks data to Amazon S3. To add a replication destination, navigate to the Connections tab.

  1. Click Add Connection.
  2. Select Amazon S3 as a destination.
  3. Enter the necessary connection properties. To connect to Amazon S3, provide the credentials for an administrator account or for an IAM user with custom permissions: set AccessKey to the access key ID and set SecretKey to the secret access key.

    Note: Though you can connect as the AWS account administrator, it is recommended to use IAM user credentials to access AWS services.

    To obtain the credentials for an IAM user, follow the steps below:

    1. Sign into the IAM console.
    2. In the navigation pane, select Users.
    3. To create or manage the access keys for a user, select the user and then select the Security Credentials tab.

    To obtain the credentials for your AWS root account, follow the steps below:

    1. Sign into the AWS Management console with the credentials for your root account.
    2. Select your account name or number and select My Security Credentials in the menu that is displayed.
    3. Click Continue to Security Credentials and expand the Access Keys section to manage or create root account access keys.
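
    A minimal sketch of the finished destination settings, using AWS's documented example credentials as placeholders (substitute your own keys):

      AccessKey: AKIAIOSFODNN7EXAMPLE
      SecretKey: wJalrXUtnFEMI/K7MDENG/bPxRcYEXAMPLEKEY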

  4. Click Test Connection to ensure that the connection is configured properly.
  5. Click Save Changes.

Configure the Databricks Connection

To add a connection to your Databricks account, navigate to the Connections tab.

  1. Click Add Connection.
  2. Select a source (Databricks).
  3. Configure the connection properties.

    To connect to a Databricks cluster, first configure authentication using one of the methods below.

    Personal Access Token

    To authenticate using a Personal Access Token, set the following:

    • AuthScheme: Set this to PersonalAccessToken.
    • Token: The token used to access the Databricks server. It can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab.
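
    For example (the token below is a fabricated placeholder; Databricks personal access tokens typically begin with "dapi"):

      AuthScheme: PersonalAccessToken
      Token: dapi0123456789abcdef0123456789abcdef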

    Azure Active Directory

    To authenticate to Databricks using an Azure Service Principal, set the following:

    • AuthScheme: Set this to AzureServicePrincipal.
    • AzureTenantId: Set this to the tenant ID of your Microsoft Azure Active Directory.
    • AzureClientId: Set to the application (client) ID of your Microsoft Azure Active Directory application.
    • AzureClientSecret: Set to the application (client) secret of your Microsoft Azure Active Directory application.
    • AzureSubscriptionId: Set this to the Subscription Id of your Microsoft Azure Databricks Service Workspace.
    • AzureResourceGroup: Set this to the Resource Group name of your Microsoft Azure Databricks Service Workspace.
    • AzureWorkspace: Set this to the name of your Microsoft Azure Databricks Service Workspace.
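
    A sketch with placeholder identifiers (the GUIDs and names below are illustrative, not real values):

      AuthScheme: AzureServicePrincipal
      AzureTenantId: 00000000-0000-0000-0000-000000000000
      AzureClientId: 11111111-1111-1111-1111-111111111111
      AzureClientSecret: <application-client-secret>
      AzureSubscriptionId: 22222222-2222-2222-2222-222222222222
      AzureResourceGroup: my-databricks-resource-group
      AzureWorkspace: my-databricks-workspace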

    Connecting to Databricks

    Next, set the following properties to identify your Databricks cluster.

    Note: You can find the required values in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.

    • Database: Set to the name of the Databricks database.
    • Server: Set to the Server Hostname of your Databricks cluster.
    • HTTPPath: Set to the HTTP Path of your Databricks cluster.
    • Token: Set to your personal access token (you can obtain this value by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
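
    Putting these together, a cluster configuration might look like the following (all values are fabricated placeholders in the shapes shown on the JDBC/ODBC tab):

      Server: adb-1234567890123456.7.azuredatabricks.net
      HTTPPath: sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh
      Database: default
      Token: dapi0123456789abcdef0123456789abcdef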

    Cloud Storage Configuration

    The provider supports DBFS, Azure Blob Storage, and AWS S3 for uploading CSV files.

    DBFS Cloud Storage

    To use DBFS for cloud storage, set the following:

    • CloudStorageType: Set this to DBFS.

    Azure Blob Storage

    Set the following to use Azure Blob Storage for cloud storage:

    • CloudStorageType: Set this to Azure Blob storage.
    • StoreTableInCloud: Set this to True to store tables in cloud storage when creating a new table.
    • AzureStorageAccount: Set this to the name of your Azure storage account.
    • AzureAccessKey: Set to the storage key associated with your Databricks account. Find this in the Azure portal (using the root account): select your storage account and click Access Keys.
    • AzureBlobContainer: Set to the name of your Azure Blob storage container.
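
    A sketch with placeholder values (substitute your own storage account, key, and container):

      CloudStorageType: Azure Blob storage
      StoreTableInCloud: True
      AzureStorageAccount: mystorageaccount
      AzureAccessKey: <storage-account-access-key>
      AzureBlobContainer: my-blob-container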

    AWS S3

    Set the following to use AWS S3 for cloud storage:

    • CloudStorageType: Set this to AWS S3.
    • StoreTableInCloud: Set this to True to store tables in cloud storage when creating a new table.
    • AWSAccessKey: Set to your AWS account access key. This value is accessible from your AWS security credentials page.
    • AWSSecretKey: Set to your AWS account secret key. This value is also accessible from your AWS security credentials page.
    • AWSS3Bucket: Set to the name of your AWS S3 bucket.
    • AWSRegion: Set to the AWS region that hosts your S3 bucket, for example, us-east-1. You can find this value on the buckets list page of the Amazon S3 console.
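
    A sketch using AWS's documented example credentials as placeholders (substitute your own keys, bucket, and region):

      CloudStorageType: AWS S3
      StoreTableInCloud: True
      AWSAccessKey: AKIAIOSFODNN7EXAMPLE
      AWSSecretKey: wJalrXUtnFEMI/K7MDENG/bPxRcYEXAMPLEKEY
      AWSS3Bucket: my-databricks-staging-bucket
      AWSRegion: us-east-1
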
  4. Click Connect to ensure that the connection is configured properly.
  5. Click Save Changes.

Configure Replication Queries

CData Sync enables you to control replication with a point-and-click interface and with SQL queries. For each replication you wish to configure, navigate to the Jobs tab and click Add Job. Select the Source and Destination for your replication.

Replicate Entire Tables

To replicate an entire table, click Add Tables in the Tables section, choose the table(s) you wish to replicate, and click Add Selected Tables.

Customize Your Replication

You can use the Columns and Query tabs of a task to customize your replication. The Columns tab allows you to specify which columns to replicate, rename the columns at the destination, and even perform operations on the source data before replicating. The Query tab allows you to add filters, grouping, and sorting to the replication.
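
For instance, assuming CData Sync's REPLICATE query syntax, a customized task that selects specific columns and filters the source data might look like the following (the table and column names are illustrative; use the names from your own Databricks schema):

  REPLICATE [Customers] SELECT [Id], [Name], [Region] FROM [Customers] WHERE [Region] = 'West'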

Schedule Your Replication

In the Schedule section, you can schedule a job to run automatically, configuring it to run at intervals ranging from once every 10 minutes to once every month.

Once you have configured the replication job, click Save Changes. You can configure any number of jobs to manage the replication of your Databricks data to Amazon S3.