Create Informatica Mappings From/To a JDBC Data Source for Spark
Create Spark data objects in Informatica using the standard JDBC connection process: Copy the JAR and then connect.
Informatica provides a powerful, elegant means of transporting and transforming your data. With the CData JDBC Driver for Spark, you gain access to a driver based on industry-proven standards that integrates seamlessly with Informatica's data transportation and manipulation features. This tutorial shows how to transfer and browse Spark data in Informatica PowerCenter.
Deploy the Driver
To deploy the driver to the Informatica PowerCenter server, copy the CData JAR and .lic file, located in the lib subfolder of the installation directory, to the following folder: Informatica-installation-directory\services\shared\jars\thirdparty.
To work with Spark data in the Developer tool, copy the same CData JAR and .lic file into the following folders:
- Informatica-installation-directory\client\externaljdbcjars
- Informatica-installation-directory\externaljdbcjars
Create the JDBC Connection
Follow the steps below to connect from Informatica Developer:
- In the Connection Explorer pane, right-click your domain and click Create a Connection.
- In the New Database Connection wizard that is displayed, enter a name and Id for the connection, and select JDBC in the Type menu.
- In the JDBC Driver Class Name property, enter:
cdata.jdbc.sparksql.SparkSQLDriver
- In the Connection String property, enter the JDBC URL, using the connection properties for Spark.
Set the Server, Database, User, and Password connection properties to connect to SparkSQL.
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Spark JDBC Driver. Either double-click the JAR file or execute it from the command line.
java -jar cdata.jdbc.sparksql.jar
Fill in the connection properties and copy the connection string to the clipboard.
A typical connection string is below:
jdbc:sparksql:Server=127.0.0.1;
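As a sanity check outside of Informatica, the same driver class and URL can be exercised with plain JDBC. The following is a minimal sketch; the Server, Database, User, and Password values are illustrative placeholders, so substitute your own connection properties.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SparkSQLConnectionTest {
    public static void main(String[] args) throws Exception {
        // Illustrative URL: replace Server, Database, User, and Password with your own values.
        String url = "jdbc:sparksql:Server=127.0.0.1;Database=default;User=admin;Password=admin;";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connection succeeded: " + rs.getInt(1));
            }
        }
    }
}
If this test prints a result, the same driver class name and connection string should work when pasted into the Informatica connection properties above.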
Browse Spark Tables
After you have added the driver JAR to the classpath and created a JDBC connection, you can access Spark entities in Informatica. Follow the steps below to connect to Spark and browse Spark tables:
- Connect to your repository.
- In the Connection Explorer, right-click the connection and click Connect.
- Clear the Show Default Schema Only option.
You can now browse Spark tables in the Data Viewer: right-click the node for the table and then click Open. In the Data Viewer view, click Run.
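For reference, the table browsing that the Connection Explorer performs can be approximated with standard JDBC metadata calls against the same driver. This is a minimal sketch; the connection string is illustrative.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class BrowseSparkTables {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sparksql:Server=127.0.0.1;";  // illustrative connection string
        try (Connection conn = DriverManager.getConnection(url)) {
            // List the tables exposed by the driver, much as the Connection Explorer does.
            DatabaseMetaData meta = conn.getMetaData();
            try (ResultSet tables = meta.getTables(null, null, "%", new String[] { "TABLE" })) {
                while (tables.next()) {
                    System.out.println(tables.getString("TABLE_NAME"));
                }
            }
        }
    }
}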
Create Spark Data Objects
Follow the steps below to add Spark tables to your project:
- Select one or more tables in Spark, right-click a selected table, and click Add to Project.
- In the resulting dialog, select the option to create a data object for each resource.
- In the Select Location dialog, select your project.
Create a Mapping
Follow the steps below to add the Spark source to a mapping:
- In the Object Explorer, right-click your project and then click New -> Mapping.
- Expand the node for the Spark connection and then drag the data object for the table onto the editor.
- In the dialog that appears, select the Read option.
Follow the steps below to map Spark columns to a flat file:
- In the Object Explorer, right-click your project and then click New -> Data Object.
- Select Flat File Data Object -> Create as Empty -> Fixed Width.
- In the properties for the Spark object, select the rows you want, right-click, and then click Copy. Paste the rows into the flat file properties.
- Drag the flat file data object onto the mapping. In the dialog that appears, select the Write option.
- Click and drag to connect columns.
To transfer Spark data, right-click in the workspace and then click Run Mapping.
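Conceptually, the mapping reads rows from Spark over JDBC and writes them to a fixed-width flat file. The sketch below reproduces that flow in plain JDBC outside of Informatica; the Customers table, its City and CompanyName columns, the column widths, and the output file name are hypothetical placeholders.
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SparkToFlatFile {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sparksql:Server=127.0.0.1;";           // illustrative connection string
        String query = "SELECT City, CompanyName FROM Customers"; // hypothetical table and columns
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query);
             PrintWriter out = new PrintWriter(Files.newBufferedWriter(Paths.get("customers.txt")))) {
            while (rs.next()) {
                // Fixed-width layout: pad each column to an arbitrary width.
                out.printf("%-30s%-40s%n", rs.getString("City"), rs.getString("CompanyName"));
            }
        }
    }
}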