Load Spark Data to a Database Using Embulk
Use CData JDBC drivers with the open source ETL/ELT tool Embulk to load Spark data to a database.
Embulk is an open source bulk data loader. When paired with the CData JDBC Driver for Apache Spark, Embulk easily loads data from Spark to any supported destination. In this article, we explain how to use the CData JDBC Driver for Apache Spark in Embulk to load Spark data to a MySQL database.
With built-in optimized data processing, the CData JDBC Driver offers unmatched performance for interacting with live Spark data. When you issue complex SQL queries to Spark, the driver pushes supported SQL operations, like filters and aggregations, directly to Spark and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations).
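As a hypothetical illustration (the Customers table and column names are placeholders), a query like the following lets the driver push the WHERE filter and GROUP BY aggregation down to Spark where supported, while any operations Spark cannot handle are processed client-side by the embedded SQL engine:

```sql
-- Filter and aggregation can be pushed down to Spark for server-side processing
SELECT City, COUNT(*) AS CustomerCount
FROM Customers
WHERE Balance > 1000
GROUP BY City;
```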
Configure a JDBC Connection to Spark Data
Before creating a bulk load job in Embulk, note the installation location for the JAR file for the JDBC Driver (typically C:\Program Files\CData\CData JDBC Driver for Apache Spark\lib).
Embulk supports JDBC connectivity, so you can easily connect to Spark and execute SQL queries. Before creating a bulk load job, create a JDBC URL for authenticating with Spark.
Set the Server, Database, User, and Password connection properties to connect to SparkSQL.
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Spark JDBC Driver. Either double-click the JAR file or run it from the command line:
java -jar cdata.jdbc.sparksql.jar
Fill in the connection properties and copy the connection string to the clipboard.
Below is a typical JDBC connection string for Spark:
jdbc:sparksql:Server=127.0.0.1;
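If your Spark endpoint requires authentication, the connection properties listed above can be appended to the URL. The values below are illustrative placeholders, not defaults:

```
jdbc:sparksql:Server=127.0.0.1;Database=default;User=admin;Password=mypassword;
```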
Load Spark Data in Embulk
After installing the CData JDBC Driver and creating a JDBC connection string, install the required Embulk plugins.
Install Embulk Input & Output Plugins
- Install the JDBC Input Plugin in Embulk (https://github.com/embulk/embulk-input-jdbc/tree/master/embulk-input-jdbc):

embulk gem install embulk-input-jdbc

- In this article, we use MySQL as the destination database. You can also choose SQL Server, PostgreSQL, or Google BigQuery as the destination using the corresponding output plugins. Install the MySQL Output Plugin (https://github.com/embulk/embulk-output-jdbc/tree/master/embulk-output-mysql):

embulk gem install embulk-output-mysql
With the input and output plugins installed, we are ready to load Spark data into MySQL using Embulk.
Create a Job to Load Spark Data
Start by creating a config file in Embulk, using a name like sparksql-mysql.yml.
- For the input plugin options, use the CData JDBC Driver for Apache Spark, including the path to the driver JAR file, the driver class (e.g. cdata.jdbc.sparksql.SparkSQLDriver), and the JDBC URL from above
- For the output plugin options, use the values and credentials for the MySQL database
Sample Config File (sparksql-mysql.yml)
in:
  type: jdbc
  driver_path: C:\Program Files\CData[product_name] 20xx\lib\cdata.jdbc.sparksql.jar
  driver_class: cdata.jdbc.sparksql.SparkSQLDriver
  url: jdbc:sparksql:Server=127.0.0.1;
  table: "Customers"
out:
  type: mysql
  host: localhost
  database: DatabaseName
  user: UserId
  password: UserPassword
  table: "Customers"
  mode: insert
After creating the file, run the Embulk job.
embulk run sparksql-mysql.yml
After running the Embulk job, find the Spark data in the MySQL table.
Load Filtered Spark Data
In addition to loading data directly from a table, you can use a custom SQL query for more granular control of the data loaded. You can also perform incremental loads by filtering on a last-updated column in a SQL WHERE clause in the query field.
in:
  type: jdbc
  driver_path: C:\Program Files\CData[product_name] 20xx\lib\cdata.jdbc.sparksql.jar
  driver_class: cdata.jdbc.sparksql.SparkSQLDriver
  url: jdbc:sparksql:Server=127.0.0.1;
  query: "SELECT City, Balance FROM Customers WHERE [RecordId] = 1"
out:
  type: mysql
  host: localhost
  database: DatabaseName
  user: UserId
  password: UserPassword
  table: "Customers"
  mode: insert
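For an incremental load, the query can filter on a last-updated column. A sketch of such an input section follows; the ModifiedDate column and the timestamp literal are hypothetical, so substitute the tracking column and high-water mark appropriate for your data:

```yaml
in:
  type: jdbc
  driver_path: C:\Program Files\CData[product_name] 20xx\lib\cdata.jdbc.sparksql.jar
  driver_class: cdata.jdbc.sparksql.SparkSQLDriver
  url: jdbc:sparksql:Server=127.0.0.1;
  # ModifiedDate is a hypothetical last-updated column; advance the literal after each run
  query: "SELECT * FROM Customers WHERE ModifiedDate > '2024-01-01 00:00:00'"
```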
More Information & Free Trial
By using the CData JDBC Driver for Apache Spark as a connector, Embulk can integrate Spark data into your data load jobs. And with drivers for 200+ other enterprise sources, you can integrate any enterprise SaaS, big data, or NoSQL source as well. Download a 30-day free trial and get started today.