
Import Databricks Data Using Azure Data Factory



Use CData Connect Cloud to connect to Databricks from Azure Data Factory and import live Databricks data.

Microsoft Azure Data Factory (ADF) is a fully managed, serverless data integration service. When combined with CData Connect Cloud, ADF enables immediate cloud-to-cloud access to Databricks data within data flows. This article outlines the process of connecting to Databricks through Connect Cloud and accessing Databricks data within ADF.

CData Connect Cloud offers a cloud-to-cloud interface tailored for Databricks, giving you access to live Databricks data from Azure Data Factory without replicating the data to a natively supported database. With optimized data processing enabled by default, CData Connect Cloud pushes all supported SQL operations, including filters and JOINs, directly to Databricks, harnessing server-side processing to speed retrieval of the Databricks data you want.
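
To make the push-down concrete, here is a minimal sketch of what such a query looks like from any SQL Server (TDS) client. It is an illustration, not CData's implementation: it assumes pyodbc and the Microsoft ODBC Driver 18 for SQL Server are installed, and it uses the Virtual SQL Server endpoint, connection name, username, and Personal Access Token that are configured later in this article. The Orders and Customers tables and the CDATA_PAT environment variable are hypothetical.

    import os
    import pyodbc

    # Connect to the CData Connect Cloud Virtual SQL Server endpoint.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=tds.cdata.com,14333;"
        "DATABASE=Databricks1;"    # Connect Cloud connection name
        "UID=test@cdata.com;"      # Connect Cloud username
        f"PWD={os.environ['CDATA_PAT']};"  # Personal Access Token (see below)
        "Encrypt=yes;"
    )

    # The filter and JOIN are executed server-side by Databricks;
    # only the matching rows travel back to the client.
    cursor = conn.cursor()
    cursor.execute(
        "SELECT o.OrderId, c.CustomerName, o.Amount "
        "FROM Orders o "
        "JOIN Customers c ON c.CustomerId = o.CustomerId "
        "WHERE o.Amount > 1000"
    )
    for row in cursor.fetchall():
        print(row.OrderId, row.CustomerName, row.Amount)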

Configure Databricks Connectivity for ADF

Connectivity to Databricks from Azure Data Factory is made possible through CData Connect Cloud. To work with Databricks data from Azure Data Factory, we start by creating and configuring a Databricks connection.

CData Connect Cloud uses a straightforward, point-and-click interface to connect to data sources.

  1. Log into Connect Cloud, click Connections, and click Add Connection.
  2. Select "Databricks" from the Add Connection panel.
  3. Enter the necessary authentication properties to connect to Databricks.

    To connect to a Databricks cluster, set the properties as described below.

    Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and opening the JDBC/ODBC tab under Advanced Options. (A quick way to validate these values is sketched after this list.)

    • Server: Set to the Server Hostname of your Databricks cluster.
    • HTTPPath: Set to the HTTP Path of your Databricks cluster.
    • Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
  4. Click Create & Test.
  5. Navigate to the Permissions tab in the Add Databricks Connection page and update the User-based permissions.
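
Optionally, you can sanity-check these three values from Python before saving the connection, using the databricks-sql-connector package. This is a minimal sketch; the hostname, HTTP path, and token below are placeholders for the values copied from the JDBC/ODBC tab and the User Settings page.

    from databricks import sql  # pip install databricks-sql-connector

    # Placeholder values; substitute the ones from your Databricks instance.
    with sql.connect(
        server_hostname="adb-1234567890123456.7.azuredatabricks.net",
        http_path="sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh",
        access_token="<your-personal-access-token>",
    ) as connection:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
            print(cursor.fetchone())  # (1,) confirms the values work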

Add a Personal Access Token

If you are connecting from a service, application, platform, or framework that does not support OAuth authentication, you can create a Personal Access Token (PAT) to use for authentication. As a best practice, create a separate PAT for each service to maintain granular control over access.

  1. Click on your username at the top right of the Connect Cloud app and click User Profile.
  2. On the User Profile page, scroll down to the Personal Access Tokens section and click Create PAT.
  3. Give your PAT a name and click Create.
  4. The personal access token is only visible at creation, so be sure to copy it and store it securely for future use.
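
If you plan to use the PAT from scripts (as in the sketches in this article), a common pattern is to export it as an environment variable rather than hard-coding it. CDATA_PAT is a hypothetical variable name, matching the earlier sketch:

    import os

    # Read the Connect Cloud PAT from the environment; fail fast if unset.
    pat = os.environ.get("CDATA_PAT")
    if pat is None:
        raise RuntimeError("Set the CDATA_PAT environment variable first.")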

With the connection configured, you are ready to connect to Databricks data from Azure Data Factory.

Access Live Databricks Data in Azure Data Factory

To establish a connection from Azure Data Factory to the CData Connect Cloud Virtual SQL Server API, follow these steps.

  1. Log in to Azure Data Factory.
  2. In your Data Factory, click New -> Dataset.
  3. In the search bar, enter SQL Server and select it when it appears. On the following screen, enter a name for the server. In the Linked service field, select New.
  4. Enter the connection settings.
    • Name - enter a name of your choice.
    • Server name - enter the Virtual SQL Server endpoint and port separated by a comma: tds.cdata.com,14333
    • Database name - enter the Connection Name of the CData Connect Cloud data source you want to connect to (for example, Databricks1).
    • User Name - enter your CData Connect Cloud username. This is displayed in the top-right corner of the CData Connect Cloud interface. For example, test@cdata.com.
    • Password - select Password (not Azure Key Vault) and enter the PAT you generated earlier on the User Profile page.
    • Click Create.
  5. In Set properties, set the Name, choose the Linked service you just created, select a Table name from those available, and set Import schema to From connection/store. Click OK.
  6. After creating the linked service, the dataset's connection properties are displayed.
  7. Click Preview data to see the imported Databricks table.
  8. You can now use this dataset when creating data flows in Azure Data Factory.
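
The UI steps above can also be scripted. The following sketch creates the same SQL Server linked service with the Azure SDK for Python; it assumes the azure-identity and azure-mgmt-datafactory packages are installed, and the subscription, resource group, factory, and linked service names are placeholders for your own.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        LinkedServiceResource,
        SecureString,
        SqlServerLinkedService,
    )

    client = DataFactoryManagementClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
    )

    # Point the linked service at the Connect Cloud Virtual SQL Server endpoint.
    linked_service = LinkedServiceResource(
        properties=SqlServerLinkedService(
            connection_string=(
                "Server=tds.cdata.com,14333;"
                "Database=Databricks1;"    # Connect Cloud connection name
                "User ID=test@cdata.com;"  # Connect Cloud username
            ),
            password=SecureString(value="<your-PAT>"),
        )
    )
    client.linked_services.create_or_update(
        resource_group_name="<resource-group>",
        factory_name="<factory-name>",
        linked_service_name="CDataConnectCloud",
        linked_service=linked_service,
    )

A SqlServerTableDataset that references this linked service completes the scripted equivalent of the dataset created in the steps above.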

Get CData Connect Cloud

To get live data access to 100+ SaaS, Big Data, and NoSQL sources directly from your cloud applications, try CData Connect Cloud today!