Access Databricks Data in Mule Applications Using the CData JDBC Driver
Create a simple Mule Application that uses HTTP and SQL with CData JDBC drivers to create a JSON endpoint for Databricks data.
The CData JDBC Driver for Databricks connects Databricks data to Mule applications, enabling read, write, update, and delete functionality with familiar SQL queries. The JDBC Driver allows users to easily create Mule applications to back up, transform, report on, and analyze Databricks data.
This article demonstrates how to use the CData JDBC Driver for Databricks inside a Mule project to create a Web interface for Databricks data. The application allows you to request Databricks data with an HTTP request and have the results returned as JSON. The same procedure outlined below can be used with any CData JDBC Driver to create a Web interface for any of the 200+ available data sources.
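Although the rest of this article focuses on Mule, the same driver works from plain Java or any JDBC-aware tool. The following is a minimal, illustrative sketch; the Server, HTTPPath, and Token values are placeholders (see the connection properties later in this article), and it assumes cdata.jdbc.databricks.jar is on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DatabricksQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder connection values: substitute your cluster's
        // Server Hostname, HTTP Path, and personal access token.
        String url = "jdbc:databricks:Server=myserver.cloud.databricks.com;"
                + "HTTPPath=sql/protocolv1/o/0/0000-000000-example;"
                + "Token=dapi0123456789;";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // The same sample query used in the Mule flow below
             ResultSet rs = stmt.executeQuery(
                     "SELECT City, CompanyName FROM Customers WHERE Country = 'US'")) {
            while (rs.next()) {
                System.out.println(rs.getString("City") + ", " + rs.getString("CompanyName"));
            }
        }
    }
}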
About Databricks Data Integration
Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:
- Access all versions of Databricks, from Runtime versions 9.1 - 13.X to both the Pro and Classic Databricks SQL versions.
- Keep Databricks in their preferred environment thanks to compatibility with any hosting solution.
- Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
- Upload data to Databricks using the Databricks File System, Azure Blob Storage, or AWS S3.
While many customers use CData's solutions to migrate data from different systems into their Databricks data lakehouse, several use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers use SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMS.
Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.
Getting Started
- Create a new Mule Project in Anypoint Studio.
- Add an HTTP Connector to the Message Flow.
- Configure the address for the HTTP Connector.
- Add a Database Select Connector to the same flow, after the HTTP Connector.
- Create a new Connection (or edit an existing one) and configure the properties.
- Set Connection to "Generic Connection"
- Select the CData JDBC Driver JAR file in the Required Libraries section (e.g. cdata.jdbc.databricks.jar).
- Set the URL to the connection string for Databricks.
To connect to a Databricks cluster, set the properties described below; a sample connection string follows the list.
Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
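Putting these properties together, a typical connection string looks like the following (the Server, HTTPPath, and Token values shown are illustrative placeholders):

jdbc:databricks:Server=myserver.cloud.databricks.com;HTTPPath=sql/protocolv1/o/0/0000-000000-example;Token=dapi0123456789;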
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Databricks JDBC Driver. Either double-click the JAR file or execute it from the command line:
java -jar cdata.jdbc.databricks.jar
Fill in the connection properties and copy the connection string to the clipboard.
- Set the Driver class name to cdata.jdbc.databricks.DatabricksDriver.
- Click Test Connection.
- Set the SQL Query Text to a SQL query to request Databricks data. For example:
SELECT City, CompanyName FROM Customers WHERE Country = 'US'
- Add a Transform Message Component to the flow.
- Set the Output script to the following to convert the payload to JSON:
%dw 2.0
output application/json
---
payload
- To view your Databricks data, navigate to the address you configured for the HTTP Connector (http://localhost:8081 by default). The Databricks data is available as JSON in your Web browser and in any other tool capable of consuming JSON endpoints.
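As a quick check, you can also request the endpoint from the command line; the rows shown here are illustrative of the JSON the flow returns:

curl http://localhost:8081

[
  {"City": "New York", "CompanyName": "Example Corp"},
  {"City": "Chicago", "CompanyName": "Sample Inc"}
]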
At this point, you have a simple Web interface for working with Databricks data (as JSON) in custom apps and a wide variety of BI, reporting, and ETL tools. Download a free, 30-day trial of the CData JDBC Driver for Databricks and see the CData difference in your Mule applications today.