Build Apps with Live Databricks Data Using Mendix's Low-Code Development Platform



Connect Databricks data with Mendix to build apps using the CData JDBC Driver for Databricks.

Mendix, developed by Siemens, is a low-code platform used to rapidly develop, test, and deploy web and mobile applications, facilitating digital transformation and enhancing business agility. Paired with the CData JDBC Driver for Databricks, Mendix Studio Pro lets you build a variety of applications on top of your live Databricks data.

With built-in optimized data processing, the CData JDBC driver offers unmatched performance for interacting with live Databricks data. When you issue complex SQL queries to Databricks, the driver pushes supported SQL operations, like filters and aggregations, directly to Databricks and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations). Its built-in dynamic metadata querying allows you to work with and analyze Databricks data using native data types.
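
To illustrate, here is a minimal Java sketch of issuing such a query through the driver. The Orders table, its columns, and all connection values are hypothetical placeholders; the sketch assumes the driver registers itself with JDBC's DriverManager, as JDBC 4 drivers do.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PushdownExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string: substitute your own Server, HTTPPath, and Token.
            String url = "jdbc:databricks:Server=myserver.cloud.databricks.com;"
                       + "HTTPPath=MyHTTPPath;Token=MyToken;";

            // The filter and aggregation below can be pushed down to Databricks by the driver.
            String sql = "SELECT City, COUNT(*) AS OrderCount "
                       + "FROM Orders WHERE Country = 'USA' GROUP BY City";

            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString("City") + ": " + rs.getLong("OrderCount"));
                }
            }
        }
    }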

This article shows how you can easily create an application that utilizes Databricks data in Mendix by combining the JDBC interface provided by Mendix with the CData JDBC Driver for Databricks.

About Databricks Data Integration

Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:

  • Access all versions of Databricks from Runtime Versions 9.1 - 13.X to both the Pro and Classic Databricks SQL versions.
  • Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
  • Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
  • Upload data to Databricks using Databricks File System, Azure Blob Storage, and AWS S3 Storage.

While many customers use CData's solutions to migrate data from different systems into their Databricks data lakehouse, several use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers use SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMS.

Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.


Getting Started


Preparing the Mendix environment

In this section, we will explore how to develop an app with Databricks data using Mendix Studio Pro, introduced above. Be sure to install Mendix Studio Pro beforehand.

Install the CData JDBC Driver for Databricks

First, install the CData JDBC Driver for Databricks on the same machine as Mendix. The JDBC Driver will be installed in the following path:

C:\Program Files\CData\CData JDBC Driver for Databricks 20xx\lib\cdata.jdbc.databricks.jar

Create an application

Now let's start building. First, create a new app in which the Database Connector will be available.

  1. Launch Mendix Studio Pro and click 'Create New App.'
  2. Select the 'Blank Web App' option.
  3. Click 'Use this starting point' to proceed.
  4. Create an app with a name of your choice. Also, note down the "Disk location" information for future reference.
  5. You have now created a brand-new app.

Add the Database Connector to your application

Next, add the Database Connector module to the app you just created.

  1. On the top right, click on the Marketplace button.
  2. Search for Database Connector in the Marketplace search section and select it.
  3. Click on Download to download the latest Database Connector.
  4. In the Import Module window, select the Action as Add as a new module.
  5. If the Database Connector appears on the app screen, you are good to move on to the next steps.

Adding the JDBC Driver to Mendix Studio Pro

To use the CData JDBC driver with this Database Connector, you must add the JDBC Driver JAR file to your project.

  1. In the Mendix project folder you noted earlier, there is a folder named 'userlib.' Place the two files, 'cdata.jdbc.databricks.jar' and 'cdata.jdbc.databricks.lic,' into that folder.
  2. You can now use the CData JDBC Driver with the Database Connector.

Create a Data Model

Next, let's define a data model to hold the data loaded through the Database Connector and display it on the list screen. We need this data model in place before loading any data.

  1. Add an Entity to the 'Domain model' of MyFirstModule.
  2. Enter the entity name and field definitions.
  3. You can easily determine the field definitions by inspecting the table metadata exposed through the CData JDBC driver with a tool such as DBeaver (see the sketch following this list).
  4. Define the entities.
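
As a rough guide for step 3, the sketch below uses the standard JDBC DatabaseMetaData API to list the columns and types of a table, which you can then mirror in your entity. The 'Orders' table name and all connection values are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class DescribeTable {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string: substitute your own Server, HTTPPath, and Token.
            String url = "jdbc:databricks:Server=myserver.cloud.databricks.com;"
                       + "HTTPPath=MyHTTPPath;Token=MyToken;";

            try (Connection conn = DriverManager.getConnection(url)) {
                DatabaseMetaData meta = conn.getMetaData();
                // List each column's name and type for a hypothetical 'Orders' table.
                try (ResultSet cols = meta.getColumns(null, null, "Orders", null)) {
                    while (cols.next()) {
                        System.out.println(cols.getString("COLUMN_NAME")
                                + " : " + cols.getString("TYPE_NAME"));
                    }
                }
            }
        }
    }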

Create a constant for the JDBC URL

Next, create a JDBC URL constant to use with the Database Connector.

  1. Add 'Constant' to MyFirstModule.
  2. Add a name to the Constant in the Add Constant window.
  3. Generate a JDBC URL for connecting to Databricks, beginning with jdbc:databricks: followed by a series of semicolon-separated connection string properties.

    To connect to a Databricks cluster, set the properties as described below.

    Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and opening the JDBC/ODBC tab under Advanced Options.

    • Server: Set to the Server Hostname of your Databricks cluster.
    • HTTPPath: Set to the HTTP Path of your Databricks cluster.
    • Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
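
    For example, a token-based JDBC URL built from these three properties would look like the following (all values are placeholders to replace with your own):

    jdbc:databricks:Server=myserver.cloud.databricks.com;HTTPPath=MyHTTPPath;Token=MyToken;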

    Built-in Connection String Designer

    For assistance in constructing the JDBC URL, use the connection string designer built into the Databricks JDBC Driver. Either double-click the JAR file or execute it from the command line:

    java -jar cdata.jdbc.databricks.jar

    Fill in the connection properties and copy the connection string to the clipboard.

    A typical JDBC URL is below:

    jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;
  4. Specify the connection string copied from the previous step in the Default value section and click on OK.

Create a microflow to retrieve Databricks data

Let's create a microflow that retrieves data from the Database Connector based on the entity we created.

  1. Click 'Add microflow' from MyFirstModule.
  2. Create a microflow with any name.
  3. First, add a 'Create Object' action to the microflow; this creates an object for the entity you defined earlier.
  4. Click on the 'Select' button for Entity in the Create Object window.
  5. Select a previously defined Entity.
  6. Enter an arbitrary Object name and click OK.
  7. Next, add an Execute Query action to the microflow to retrieve data from the Database Connector.
  8. Define each input in the Execute Query window.
  9. In "jdbc url", specify the constant you defined beforehand.
  10. In SQL, write a query to retrieve data from Databricks (a conceptual sketch of this step appears after this list).
  11. You don't need a Username or Password this time, so set them to 'empty' and assign the object created in the previous step as the Result object. Then, simply specify any name you prefer for the List in the List Name section.
  12. Finally, define the output of the microflow.
  13. Double-click the End Event to open it, select 'List' from the Type dropdown, and link it to the Entity you defined earlier. Then, set the output result of Execute Query as the Return value.
  14. This completes the microflow that retrieves data from Databricks.
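
For reference, the Execute Query action in steps 7-11 is conceptually equivalent to the plain-JDBC sketch below, which runs the SQL and maps each row to an object, much like the microflow's Result list. The Customers table, its columns, and all connection values are hypothetical placeholders, not the connector's actual implementation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class ExecuteQuerySketch {
        // Simplified stand-in for the entity defined in the domain model.
        record Customer(String companyName, String city) {}

        public static void main(String[] args) throws Exception {
            // Placeholder connection string: substitute your own Server, HTTPPath, and Token.
            String url = "jdbc:databricks:Server=myserver.cloud.databricks.com;"
                       + "HTTPPath=MyHTTPPath;Token=MyToken;";
            String sql = "SELECT CompanyName, City FROM Customers";  // hypothetical table

            List<Customer> results = new ArrayList<>();
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    // Each row becomes one object, like the Result object in the microflow.
                    results.add(new Customer(rs.getString("CompanyName"), rs.getString("City")));
                }
            }
            results.forEach(c -> System.out.println(c.companyName() + " (" + c.city() + ")"));
        }
    }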

Create a list screen and link it to a microflow

Finally, let's create a screen that displays the results obtained from the microflow.

  1. Double-click 'Home_web' in the App Explorer to open it.
  2. Drag and drop a Data grid template from the Data containers section into the list screen.
  3. Once you have placed the Data grid, double-click on it to display the Edit Data Grid settings screen.
  4. Navigate to the Data source tab and link the data source type with the Microflow.
  5. Select the microflow you just created.
  6. Now click OK.
  7. When you click OK, you'll be prompted to auto-detect columns. Simply click 'Yes' to proceed.
  8. Next, you'll be prompted to generate controllers for various Data grids. Since we won't be configuring the logic for each one this time, click 'No.'
  9. This will create a simple data grid screen as shown below.

Try it out

Now let's check if it works properly.

  1. Click the 'Publish' button to deploy the app you created. Once that's done, click 'View App' to open it.
  2. If you see a list of Databricks data like the one below, you're all set! You've successfully created a Databricks-linked app with low code, without needing to worry about the Databricks API.

Get Started Today

Download a free 30-day trial of the CData JDBC Driver for Databricks with Mendix, and effortlessly create an app that connects to Databricks data.

Reach out to our Support Team if you have any questions.
