
Manage Spark Data in DBArtisan as a JDBC Source

Use wizards in DBArtisan to create a JDBC data source for Spark.

The CData JDBC Driver for Spark seamlessly integrates Spark data into database management tools like DBArtisan by enabling you to access Spark data as a database. This article shows how to create a JDBC source for Spark in DBArtisan. You can then edit data visually and execute standard SQL.

Integrate Spark Data into DBArtisan Projects

Follow the steps below to register Spark data as a database instance in your project:

  1. In DBArtisan, click Data Source -> Register Datasource.
  2. Select Generic JDBC.
  3. Click Manage.
  4. In the resulting dialog, click New. Enter a name for the driver, click Add, and navigate to the driver JAR, located in the lib subfolder of the installation directory.
  5. In the Connection URL box, enter a JDBC URL containing the credentials and other required connection properties.

    Set the Server, Database, User, and Password connection properties to connect to SparkSQL.

    Built-in Connection String Designer

    For assistance in constructing the JDBC URL, use the connection string designer built into the Spark JDBC Driver. Either double-click the JAR file or execute it from the command line:

    java -jar cdata.jdbc.sparksql.jar

    Fill in the connection properties and copy the connection string to the clipboard.

    Below is a typical connection string:

    jdbc:sparksql:Server=127.0.0.1;
  6. Finish the wizard to connect to Spark data. Spark entities are displayed in the Datasource Explorer.
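Outside of DBArtisan, the same JDBC URL can be used from any Java application. The sketch below builds a connection string from the Server, Database, User, and Password properties described above and runs a query through the driver. The host, credentials, and the "Customers" table name are illustrative placeholders, and the cdata.jdbc.sparksql.jar must be on the classpath; adjust all of these for your own Spark instance.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SparkJdbcExample {

    // Assemble a Spark SQL JDBC URL from the connection properties
    // described in step 5 of the wizard.
    static String buildUrl(String server, String database,
                           String user, String password) {
        return "jdbc:sparksql:Server=" + server
             + ";Database=" + database
             + ";User=" + user
             + ";Password=" + password + ";";
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- replace with your own.
        String url = buildUrl("127.0.0.1", "default", "admin", "admin");

        // "Customers" is a hypothetical table; query a table that
        // exists in your Spark instance.
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM Customers")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

The connection string produced by buildUrl matches the format shown by the built-in connection string designer, so a string copied from the designer can be passed to DriverManager.getConnection directly.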

You can now work with Spark data as you work with any other database. See the driver help documentation for more information on the queries supported by the Spark API.