ETL Databricks in Oracle Data Integrator



This article shows how to transfer Databricks data into a data warehouse using Oracle Data Integrator.

Leverage existing skills by using the JDBC standard to read and write to Databricks: through drop-in integration with ETL tools like Oracle Data Integrator (ODI), the CData JDBC Driver for Databricks connects real-time Databricks data to your data warehouse, business intelligence, and Big Data technologies.

JDBC connectivity enables you to work with Databricks just as you would any other database in ODI. As with an RDBMS, you can use the driver to connect directly to the Databricks APIs in real time instead of working with flat files.
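
The same driver that ODI loads can be used from plain JDBC code. Below is a minimal sketch of reading Databricks data this way; it assumes the driver JAR is on the classpath, uses placeholder connection values, and assumes a Customers table with CompanyName and City columns (the names used in the examples later in this article).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DatabricksJdbcRead {
        public static void main(String[] args) throws Exception {
            // Placeholder connection values -- substitute your cluster's details
            String url = "jdbc:databricks:Server=127.0.0.1;HTTPPath=MyHTTPPath;Token=MyToken;";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 // Customers, CompanyName, and City are assumed names from this article's examples
                 ResultSet rs = stmt.executeQuery("SELECT CompanyName, City FROM Customers")) {
                while (rs.next()) {
                    System.out.println(rs.getString("CompanyName") + ", " + rs.getString("City"));
                }
            }
        }
    }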

This article walks through a JDBC-based ETL -- Databricks to Oracle. After reverse engineering a data model of Databricks entities, you will create a mapping and select a data loading strategy -- since the driver supports SQL-92, this last step can easily be accomplished by selecting the built-in SQL to SQL Loading Knowledge Module.

Install the Driver

To install the driver, copy the driver JAR and .lic file from the installation folder into the appropriate ODI directory:

  • UNIX/Linux without Agent: ~/.odi/oracledi/userlib
  • UNIX/Linux with Agent: $ODI_HOME/odi/agent/lib
  • Windows without Agent: %APPDATA%\odi\oracledi\userlib
  • Windows with Agent: %APPDATA%\odi\agent\lib

Restart ODI to complete the installation.

Reverse Engineer a Model

Reverse engineering the model retrieves metadata about the driver's relational view of Databricks data. After reverse engineering, you can query real-time Databricks data and create mappings based on Databricks tables.

  1. In ODI, connect to your repository and click New -> Model and Topology Objects.
  2. On the Model screen of the resulting dialog, enter the following information:
    • Name: Enter Databricks.
    • Technology: Select Generic SQL (for ODI Version 12.2+, select Microsoft SQL Server).
    • Logical Schema: Enter Databricks.
    • Context: Select Global.
  3. On the Data Server screen of the resulting dialog, enter the following information:
    • Name: Enter Databricks.
    • Driver List: Select Oracle JDBC Driver.
    • Driver: Enter cdata.jdbc.databricks.DatabricksDriver.
    • URL: Enter the JDBC URL containing the connection string.

      To connect to a Databricks cluster, set the properties as described below.

      Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.

      • Server: Set to the Server Hostname of your Databricks cluster.
      • HTTPPath: Set to the HTTP Path of your Databricks cluster.
      • Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).

      Built-in Connection String Designer

      For assistance in constructing the JDBC URL, use the connection string designer built into the Databricks JDBC Driver. Either double-click the JAR file or execute it from the command line:

      java -jar cdata.jdbc.databricks.jar

      Fill in the connection properties and copy the connection string to the clipboard.

      Below is a typical connection string:

      jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;Token=MyToken;
  4. On the Physical Schema screen, enter the following information:
    • Name: Select from the drop-down menu.
    • Database (Catalog): Enter CData.
    • Owner (Schema): If you selected a schema for Databricks, enter it; otherwise, enter Databricks.
    • Database (Work Catalog): Enter CData.
    • Owner (Work Schema): If you selected a schema for Databricks, enter it; otherwise, enter Databricks.
  5. In the opened model, click Reverse Engineer to retrieve the metadata for the Databricks tables.
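
Outside of ODI, you can preview what the reverse-engineering step will retrieve by asking the driver for the same metadata through the standard JDBC DatabaseMetaData interface. A minimal sketch, assuming the driver JAR is on the classpath and using placeholder connection values:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class ListDatabricksTables {
        public static void main(String[] args) throws Exception {
            // Placeholder connection values -- substitute your cluster's details
            String url = "jdbc:databricks:Server=127.0.0.1;HTTPPath=MyHTTPPath;Token=MyToken;";
            try (Connection conn = DriverManager.getConnection(url)) {
                DatabaseMetaData md = conn.getMetaData();
                // List the tables in the driver's relational view of Databricks
                try (ResultSet rs = md.getTables(null, null, "%", new String[] { "TABLE" })) {
                    while (rs.next()) {
                        System.out.println(rs.getString("TABLE_NAME"));
                    }
                }
            }
        }
    }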

Edit and Save Databricks Data

After reverse engineering, you can work with Databricks data in ODI. To edit and save Databricks data, expand the Models accordion in the Designer navigator, right-click a table, and click Data. Click Refresh to pick up any changes to the data, and click Save Changes when you are finished.
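
Because the driver supports SQL-92, the same edits can also be made programmatically with standard DML over JDBC. A minimal sketch, assuming the Customers table and column names used elsewhere in this article (all values are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UpdateDatabricksRow {
        public static void main(String[] args) throws Exception {
            // Placeholder connection values -- substitute your cluster's details
            String url = "jdbc:databricks:Server=127.0.0.1;HTTPPath=MyHTTPPath;Token=MyToken;";
            // Customers, City, and CompanyName are assumed names; adjust to your schema
            String sql = "UPDATE Customers SET City = ? WHERE CompanyName = ?";
            try (Connection conn = DriverManager.getConnection(url);
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "San Francisco");
                ps.setString(2, "CData");
                System.out.println(ps.executeUpdate() + " row(s) updated");
            }
        }
    }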

Create an ETL Project

Follow the steps below to create an ETL from Databricks. You will load Customers entities into the sample data warehouse included in the ODI Getting Started VM.

  1. Open SQL Developer and connect to your Oracle database. Right-click the node for your database in the Connections pane and click New SQL Worksheet.

    Alternatively, you can use SQL*Plus. From a command prompt, enter the following:

    sqlplus / as sysdba
  2. Enter the following statement to create a new target table in the sample data warehouse, which is in the ODI_DEMO schema. The statement defines a few columns that match the Customers table in Databricks:

    CREATE TABLE ODI_DEMO.TRG_CUSTOMERS (COMPANYNAME VARCHAR2(255), CITY VARCHAR2(255));
  3. In ODI expand the Models accordion in the Designer navigator and double-click the Sales Administration node in the ODI_DEMO folder. The model is opened in the Model Editor.
  4. Click Reverse Engineer. The TRG_CUSTOMERS table is added to the model.
  5. Right-click the Mappings node in your project and click New Mapping. Enter a name for the mapping and clear the Create Empty Dataset option. The Mapping Editor is displayed.
  6. Drag the TRG_CUSTOMERS table from the Sales Administration model onto the mapping.
  7. Drag the Customers table from the Databricks model onto the mapping.
  8. Click the source connector point and drag to the target connector point. The Attribute Matching dialog is displayed. For this example, use the default options. The target expressions are then displayed in the properties for the target columns.
  9. Open the Physical tab of the Mapping Editor and click CUSTOMERS_AP in TARGET_GROUP.
  10. In the CUSTOMERS_AP properties, select LKM SQL to SQL (Built-In) on the Loading Knowledge Module tab.

You can then run the mapping to load Databricks data into Oracle.
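
To spot-check the results of the load, query the target table from your Oracle connection. A minimal sketch over JDBC; the Oracle URL, user, and password below are placeholders for your own environment, and the Oracle JDBC driver must be on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class VerifyLoad {
        public static void main(String[] args) throws Exception {
            // Placeholder Oracle connection details -- substitute your own
            String url = "jdbc:oracle:thin:@localhost:1521/ORCL";
            try (Connection conn = DriverManager.getConnection(url, "ODI_DEMO", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM ODI_DEMO.TRG_CUSTOMERS")) {
                if (rs.next()) {
                    System.out.println("Rows loaded: " + rs.getInt(1));
                }
            }
        }
    }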