Load Databricks Data to a Database Using Embulk



Use CData JDBC drivers with the open source ETL/ELT tool Embulk to load Databricks data to a database.

Embulk is an open source bulk data loader. When paired with the CData JDBC Driver for Databricks, Embulk easily loads data from Databricks to any supported destination. In this article, we explain how to use the CData JDBC Driver for Databricks in Embulk to load Databricks data to a MySQL database.

With built-in optimized data processing, the CData JDBC Driver offers unmatched performance for interacting with live Databricks data. When you issue complex SQL queries to Databricks, the driver pushes supported SQL operations, like filters and aggregations, directly to Databricks and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations).

About Databricks Data Integration

Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:

  • Access all versions of Databricks from Runtime Versions 9.1 - 13.X to both the Pro and Classic Databricks SQL versions.
  • Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
  • Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
  • Upload data to Databricks using Databricks File System, Azure Blob Storage, and AWS S3 Storage.

While many customers are using CData's solutions to migrate data from different systems into their Databricks data lakehouse, several customers use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers are using SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMS.

Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.




Configure a JDBC Connection to Databricks Data

Before creating a bulk load job in Embulk, note the installation location for the JAR file for the JDBC Driver (typically C:\Program Files\CData\CData JDBC Driver for Databricks\lib).

Embulk supports JDBC connectivity, so you can easily connect to Databricks and execute SQL queries. Before creating a bulk load job, create a JDBC URL for authenticating with Databricks.

To connect to a Databricks cluster, set the properties described below; a sample token-based JDBC URL built from these properties follows the list.

Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.

  • Server: Set to the Server Hostname of your Databricks cluster.
  • HTTPPath: Set to the HTTP Path of your Databricks cluster.
  • Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
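
For illustration, a token-based JDBC URL assembled from these three properties might look like the following; the hostname, HTTP path, and token values are placeholders, and any additional properties your cluster requires would be appended in the same semicolon-delimited form:

jdbc:databricks:Server=MyServerHostname;HTTPPath=MyHTTPPath;Token=MyPersonalAccessToken;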

Built-in Connection String Designer

For assistance in constructing the JDBC URL, use the connection string designer built into the Databricks JDBC Driver. Either double-click the JAR file or execute it from the command line.

java -jar cdata.jdbc.databricks.jar

Fill in the connection properties and copy the connection string to the clipboard.

Below is a typical JDBC connection string for Databricks:

jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;

Load Databricks Data in Embulk

After installing the CData JDBC Driver and creating a JDBC connection string, install the required Embulk plugins.

Install Embulk Input & Output Plugins

  1. Install the JDBC Input Plugin in Embulk (https://github.com/embulk/embulk-input-jdbc/tree/master/embulk-input-jdbc):

    embulk gem install embulk-input-jdbc

  2. Install an output plugin for your destination database. In this article, we use MySQL as the destination, but you can also choose SQL Server, PostgreSQL, or Google BigQuery by installing the corresponding output plugin. For MySQL (https://github.com/embulk/embulk-output-jdbc/tree/master/embulk-output-mysql):

    embulk gem install embulk-output-mysql

With the input and output plugins installed, we are ready to load Databricks data into MySQL using Embulk.

Create a Job to Load Databricks Data

Start by creating a config file in Embulk, using a name like databricks-mysql.yml.

  1. For the input plugin options, use the CData JDBC Driver for Databricks, including the path to the driver JAR file, the driver class (e.g., cdata.jdbc.databricks.DatabricksDriver), and the JDBC URL from above.
  2. For the output plugin options, use the values and credentials for the MySQL database.

Sample Config File (databricks-mysql.yml)

in:
  type: jdbc
  driver_path: C:\Program Files\CData\CData JDBC Driver for Databricks\lib\cdata.jdbc.databricks.jar
  driver_class: cdata.jdbc.databricks.DatabricksDriver
  url: jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;
  table: "Customers"
out:
  type: mysql
  host: localhost
  database: DatabaseName
  user: UserId
  password: UserPassword
  table: "Customers"
  mode: insert

After creating the file, run the Embulk job.

embulk run databricks-mysql.yml
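
Optionally, you can sanity-check the configuration first with Embulk's preview command, which prints a sample of the input data without writing anything to MySQL:

embulk preview databricks-mysql.yml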

After running the Embulk job, find the Databricks data in the MySQL table.

Load Filtered Databricks Data

In addition to loading data directly from a table, you can use a custom SQL query for more granular control of the data loaded. You can also perform incremental loads by setting a last-updated column in a SQL WHERE clause in the query field (see the sketch after the example below).

in:
  type: jdbc
  driver_path: C:\Program Files\CData\CData JDBC Driver for Databricks\lib\cdata.jdbc.databricks.jar
  driver_class: cdata.jdbc.databricks.DatabricksDriver
  url: jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;
  query: "SELECT City, CompanyName FROM Customers WHERE [RecordId] = 1"
out:
  type: mysql
  host: localhost
  database: DatabaseName
  user: UserId
  password: UserPassword
  table: "Customers"
  mode: insert
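
As a minimal sketch of the incremental pattern described above, the query can filter on a last-updated column, with the cutoff value advanced between runs. The ModifiedDate column and the date literal here are illustrative placeholders rather than fields defined in this article; the out section is unchanged from the previous example.

in:
  type: jdbc
  driver_path: C:\Program Files\CData\CData JDBC Driver for Databricks\lib\cdata.jdbc.databricks.jar
  driver_class: cdata.jdbc.databricks.DatabricksDriver
  url: jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;
  # ModifiedDate is a hypothetical last-updated column; advance the cutoff date on each run
  query: "SELECT City, CompanyName FROM Customers WHERE [ModifiedDate] >= '2023-01-01'"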

More Information & Free Trial

By using the CData JDBC Driver for Databricks as a connector, Embulk can integrate Databricks data into your data load jobs. And with drivers for more than 200 other enterprise sources, you can integrate any enterprise SaaS, big data, or NoSQL source as well. Download a 30-day free trial and get started today.
