Integrate Databricks Data in Pentaho Data Integration



Build ETL pipelines based on Databricks data in the Pentaho Data Integration tool.

The CData JDBC Driver for Databricks enables access to live Databricks data from within data pipelines. Pentaho Data Integration is an Extraction, Transformation, and Loading (ETL) engine that extracts data, cleanses it, and stores it in a uniform, accessible format. This article shows how to connect to Databricks as a JDBC data source and build jobs and transformations based on Databricks data in Pentaho Data Integration.

About Databricks Data Integration

Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:

  • Access all versions of Databricks, from Runtime Versions 9.1 - 13.X to both the Pro and Classic versions of Databricks SQL.
  • Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
  • Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
  • Upload data to Databricks using Databricks File System, Azure Blob Storage, and AWS S3 Storage.

While many customers are using CData's solutions to migrate data from different systems into their Databricks data lakehouse, several customers use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers are using SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMSs.

Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.


Getting Started


Configure Databricks Connectivity

To connect to a Databricks cluster, set the properties as described below.

Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.

  • Server: Set to the Server Hostname of your Databricks cluster.
  • HTTPPath: Set to the HTTP Path of your Databricks cluster.
  • Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
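
Before configuring Pentaho, you can confirm these properties work by opening a connection directly through the driver. The following Java snippet is a minimal sketch: it assumes cdata.jdbc.databricks.jar is on the classpath, and the Server, HTTPPath, and Token values are placeholders to replace with your own.

import java.sql.Connection;
import java.sql.DriverManager;

public class DatabricksConnectionTest {
    public static void main(String[] args) throws Exception {
        // Placeholder values -- use the Server Hostname, HTTP Path, and personal access token from your cluster
        String url = "jdbc:databricks:Server=MyServerHostname;HTTPPath=MyHTTPPath;Token=MyPersonalAccessToken;";
        try (Connection conn = DriverManager.getConnection(url)) {
            // isValid runs a lightweight check against the open connection
            System.out.println("Connected to Databricks: " + conn.isValid(5));
        }
    }
}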

Built-in Connection String Designer

For assistance in constructing the JDBC URL, use the connection string designer built into the Databricks JDBC Driver. Either double-click the JAR file or execute it from the command line.

java -jar cdata.jdbc.databricks.jar

Fill in the connection properties and copy the connection string to the clipboard.

When you configure the JDBC URL, you may also want to set the Max Rows connection property. This will limit the number of rows returned, which is especially helpful for improving performance when designing reports and visualizations.

Below is a typical JDBC URL:

jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;
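
If you decide to cap result sizes during design, the property can be appended to the same URL. The line below is a sketch that assumes the property is written as MaxRows in the connection string and limits results to 100 rows:

jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;MaxRows=100;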

Save your connection string for use in Pentaho Data Integration.

Connect to Databricks from Pentaho DI

Open Pentaho Data Integration and select "Database Connection" to configure a connection to the CData JDBC Driver for Databricks.

  1. Click "General"
  2. Set Connection name (e.g. Databricks Connection)
  3. Set Connection type to "Generic database"
  4. Set Access to "Native (JDBC)"
  5. Set Custom connection URL to your Databricks connection string, e.g.:
    jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;
  6. Set Custom driver class name to "cdata.jdbc.databricks.DatabricksDriver"
  7. Test the connection and click "OK" to save.
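
If the connection test fails with a driver class error, you can verify that the JAR is visible to the Java runtime by loading the class by name. This small sketch assumes cdata.jdbc.databricks.jar has been copied to a directory on Pentaho's classpath (for example, its lib folder):

public class DriverCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        // Throws ClassNotFoundException if the CData Databricks driver JAR is not on the classpath
        Class.forName("cdata.jdbc.databricks.DatabricksDriver");
        System.out.println("Driver class loaded successfully");
    }
}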

Create a Data Pipeline for Databricks

Once the connection to Databricks is configured using the CData JDBC Driver, you are ready to create a new transformation or job.

  1. Click "File" >> "New" >> "Transformation/job"
  2. Drag a "Table input" object into the workflow panel and select your Databricks connection.
  3. Click "Get SQL select statement" and use the Database Explorer to view the available tables and views.
  4. Select a table and optionally preview the data for verification.

At this point, you can continue your transformation or job by selecting a suitable destination and adding any transformations to modify, filter, or otherwise alter the data during replication.
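
If you want to check outside of Pentaho what a Table input step will return, the same SELECT can be issued through the driver and a few rows printed. This is a sketch only: the Customers table name is hypothetical, and the connection URL placeholders should match the string you configured earlier.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class PreviewDatabricksData {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:databricks:Server=MyServerHostname;HTTPPath=MyHTTPPath;Token=MyPersonalAccessToken;";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // Hypothetical table -- substitute one listed in Pentaho's Database Explorer
             ResultSet rs = stmt.executeQuery("SELECT * FROM Customers")) {
            ResultSetMetaData md = rs.getMetaData();
            int shown = 0;
            while (rs.next() && shown++ < 10) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.append(md.getColumnName(i)).append("=").append(rs.getString(i)).append("  ");
                }
                System.out.println(row.toString().trim());
            }
        }
    }
}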

Free Trial & More Information

Download a free, 30-day trial of the CData JDBC Driver for Databricks and start working with your live Databricks data in Pentaho Data Integration today.

Learn more:

Databricks JDBC Driver

Rapidly create and deploy powerful Java applications that integrate with Databricks.