
Access Spark Data in Mule Applications Using the CData JDBC Driver



Create a simple Mule Application that uses HTTP and SQL with the CData JDBC Driver for Apache Spark to create a JSON endpoint for Spark data.

The CData JDBC Driver for Apache Spark connects Spark data to Mule applications, enabling read, write, update, and delete functionality with familiar SQL queries. The JDBC Driver allows users to easily create Mule applications that back up, transform, report on, and analyze Spark data.
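
Before building the Mule flow, you can verify connectivity with a small standalone Java program that uses the driver directly. The sketch below is illustrative only: the jdbc:sparksql: URL and its property values are placeholders following the Server, Database, User, and Password properties described in the steps below, and the Customers table simply mirrors the sample query used later in the flow.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SparkJdbcSmokeTest {
        public static void main(String[] args) throws Exception {
            // Optional with JDBC 4+, but makes the driver class used in this article explicit.
            Class.forName("cdata.jdbc.sparksql.SparkSQLDriver");

            // Illustrative connection string; replace the property values with your own.
            String url = "jdbc:sparksql:Server=127.0.0.1;Database=default;User=admin;Password=admin;";

            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 // The same sample query used in the Mule flow below.
                 ResultSet rs = stmt.executeQuery("SELECT City, Balance FROM Customers")) {
                while (rs.next()) {
                    System.out.println(rs.getString("City") + "\t" + rs.getString("Balance"));
                }
            }
        }
    }

Compile and run the program with cdata.jdbc.sparksql.jar on the classpath; if rows print, the same URL can be reused in the Mule connection configuration below.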

This article demonstrates how to use the CData JDBC Driver for Apache Spark inside a Mule project to create a Web interface for Spark data. The resulting application lets you request Spark data with an HTTP request and receive the results as JSON. The same procedure outlined below can be used with any CData JDBC Driver to create a Web interface for the 200+ available data sources.

  1. Create a new Mule Project in Anypoint Studio.
  2. Add an HTTP Connector to the Message Flow.
  3. Configure the address for the HTTP Connector.
  4. Add a Database Select Connector to the same flow, after the HTTP Connector.
  5. Create a new Connection (or edit an existing one) and configure the properties.
    • Set Connection to "Generic Connection".
    • Select the CData JDBC Driver JAR file in the Required Libraries section (e.g. cdata.jdbc.sparksql.jar).
    • Set the URL to the connection string for Spark:

      Set the Server, Database, User, and Password connection properties to connect to SparkSQL.

      Built-in Connection String Designer

      For assistance in constructing the JDBC URL, use the connection string designer built into the Spark JDBC Driver. Either double-click the JAR file or execute it from the command line:

      java -jar cdata.jdbc.sparksql.jar

      Fill in the connection properties and copy the connection string to the clipboard.

    • Set the Driver class name to cdata.jdbc.sparksql.SparkSQLDriver.
    • Click Test Connection.
  6. Set the SQL Query Text to a SQL query that requests Spark data. For example: SELECT City, Balance FROM Customers
  7. Add a Transform Message Component to the flow.
  8. Set the Output script to the following to convert the payload to JSON:
    %dw 2.0
    output application/json
    ---
    payload
            
  9. To view your Spark data, navigate to the address you configured for the HTTP Connector (http://localhost:8081 by default). The Spark data is available as JSON in your Web browser and in any other tool capable of consuming JSON endpoints.
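
Because the flow returns plain JSON over HTTP, any client can consume the endpoint. As a minimal sketch (assuming Java 11+ for the java.net.http package and the default localhost:8081 address configured above), the endpoint can be read like this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SparkEndpointClient {
        public static void main(String[] args) throws Exception {
            // Default address configured for the HTTP Connector in this article.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8081"))
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The body is the JSON produced by the Transform Message component.
            System.out.println(response.body());
        }
    }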

At this point, you have a simple Web interface for working with Spark data (as JSON) in custom apps and a wide variety of BI, reporting, and ETL tools. Download a free, 30-day trial of the JDBC Driver for Apache Spark and see the CData difference in your Mule applications today.