Access Databricks Data in Anypoint Using SQL

Create a simple Mule Application that uses HTTP and SQL with the CData Mule Connector for Databricks to create a JSON endpoint for Databricks data.

The CData Mule Connector for Databricks connects Databricks data to Mule applications, enabling read, write, update, and delete functionality with familiar SQL queries. The Connector allows users to easily create Mule Applications to back up, transform, report on, and analyze Databricks data.

This article demonstrates how to use the CData Mule Connector for Databricks inside of a Mule project to create a Web interface for Databricks data. The application you create lets you request Databricks data with an HTTP request and returns the results as JSON. The same procedure outlined below can be used with any CData Mule Connector to create a Web interface for any of the 200+ available data sources.

  1. Create a new Mule Project in Anypoint Studio.
  2. Add an HTTP Connector to the Message Flow.
  3. Configure the address for the HTTP Connector.
  4. Add a CData Databricks Connector to the same flow, after the HTTP Connector.
  5. Create a new Connection (or edit an existing one) and configure the properties to connect to Databricks (see below). Once the connection is configured, click Test Connection to verify connectivity to Databricks.

    To connect to a Databricks cluster, set the properties described below; a sketch of these settings written as a single connection string follows this procedure.

    Note: You can find the needed values in your Databricks instance by navigating to Clusters, selecting the desired cluster, and opening the JDBC/ODBC tab under Advanced Options.

    • Server: Set to the Server Hostname of your Databricks cluster.
    • HTTPPath: Set to the HTTP Path of your Databricks cluster.
    • Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
  6. Configure the CData Databricks Connector.
    1. Set the Operation to 'Select with Streaming'.
    2. Set the Query type to Dynamic.
    3. Set the SQL query to SELECT * FROM #[message.inboundProperties.'http.query.params'.get('table')] to parse the URL parameter table and use it as the target of the SELECT query. You can customize the query further by referencing other potential URL parameters (see the query example after this procedure).
  7. Add a Transform Message Component to the flow.
    1. Map the Payload from the input to the Map in the output.
    2. Set the Output script to the following to convert the payload to JSON:
      %dw 1.0
      %output application/json
      ---
      payload
              
  8. To view your Databricks data, navigate to the address you configured for the HTTP Connector (localhost:8081 by default) and pass a table name as the table URL parameter, for example: http://localhost:8081?table=Customers
    The Customers data is then available as JSON in your Web browser and in any other tool capable of consuming JSON endpoints (a sample request and response appear after this procedure).
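
As a sketch of the step 5 settings, the same three properties can be written as a single semicolon-separated connection string, a common way to express CData connection properties; the hostname, HTTP path, and token below are placeholders only:

    Server=dbc-a1b2c3d4-e5f6.cloud.databricks.com;HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-example;Token=dapiXXXXXXXXXXXXXXXXXXXXXXXX;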
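
To make step 6 concrete, the first query below shows how the dynamic expression resolves for a request to http://localhost:8081?table=Customers; the second is a purely hypothetical extension that also reads an id URL parameter (the Id column name is an assumption for illustration):

    -- #[message.inboundProperties.'http.query.params'.get('table')] evaluates to 'Customers',
    -- so the connector executes:
    SELECT * FROM Customers

    -- Hypothetical extension that also filters on an 'id' query parameter:
    SELECT * FROM #[message.inboundProperties.'http.query.params'.get('table')]
    WHERE Id = #[message.inboundProperties.'http.query.params'.get('id')]

Because URL parameters are interpolated directly into the SQL text, this style of dynamic query is best reserved for trusted, internal endpoints.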
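
Assuming the Customers table contains columns such as CompanyName and City (hypothetical names used only for illustration), the request from step 8 and the JSON produced by the Transform Message component would look roughly like this:

    curl "http://localhost:8081?table=Customers"

    [
      {
        "CompanyName": "Alfreds Futterkiste",
        "City": "Berlin"
      },
      {
        "CompanyName": "Around the Horn",
        "City": "London"
      }
    ]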

At this point, you have a simple Web interface for working with Databricks data (as JSON) in custom apps and a wide variety of BI, reporting, and ETL tools. Download a free, 30-day trial of the Mule Connector for Databricks and see the CData difference in your Mule Applications today.