Integrate Azure Data Lake Storage Data in Pentaho Data Integration



Build ETL pipelines based on Azure Data Lake Storage data in the Pentaho Data Integration tool.

The CData JDBC Driver for Azure Data Lake Storage enables access to live Azure Data Lake Storage data from data pipelines. Pentaho Data Integration is an Extraction, Transformation, and Loading (ETL) engine that extracts data, cleanses it, and stores it in a uniform, accessible format. This article shows how to connect to Azure Data Lake Storage data as a JDBC data source and build jobs and transformations based on Azure Data Lake Storage data in Pentaho Data Integration.

Configure Connectivity to Azure Data Lake Storage

Authenticating to a Gen 1 DataLakeStore Account

Gen 1 uses OAuth 2.0 in Azure AD for authentication.

For this, an Active Directory web application is required. You can create one as follows:

  1. Sign in to your Azure Account through the Azure portal.
  2. Select "Azure Active Directory".
  3. Select "App registrations".
  4. Select "New application registration".
  5. Provide a name and URL for the application. Select Web app for the type of application you want to create.
  6. Select "Required permissions" and change the required permissions for this app. At a minimum, "Azure Data Lake" and "Windows Azure Service Management API" are required.
  7. Select "Key" and generate a new key. Add a description, a duration, and take note of the generated key. You won't be able to see it again.

To authenticate against a Gen 1 DataLakeStore account, the following properties are required (a sample URL built from them follows the list):

  • Schema: Set this to ADLSGen1.
  • Account: Set this to the name of the account.
  • OAuthClientId: Set this to the application Id of the app you created.
  • OAuthClientSecret: Set this to the key generated for the app you created.
  • TenantId: Set this to the tenant Id. See the driver's help documentation for more information on how to acquire this.
  • Directory: Set this to the path which will be used to store the replicated file. If not specified, the root directory will be used.
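Combining these properties yields a Gen 1 JDBC URL along the following lines (every value shown is a placeholder to replace with your own):

jdbc:adls:Schema=ADLSGen1;Account=myAccount;OAuthClientId=myClientId;OAuthClientSecret=myClientSecret;TenantId=myTenantId;InitiateOAuth=GETANDREFRESH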

Authenticating to a Gen 2 DataLakeStore Account

To authenticate against a Gen 2 DataLakeStore account, the following properties are required:

  • Schema: Set this to ADLSGen2.
  • Account: Set this to the name of the account.
  • FileSystem: Set this to the file system which will be used for this account.
  • AccessKey: Set this to the access key which will be used to authenticate the calls to the API. See the driver's help documentation for more information on how to acquire this.
  • Directory: Set this to the path which will be used to store the replicated file. If not specified, the root directory will be used.

Built-in Connection String Designer

For assistance in constructing the JDBC URL, use the connection string designer built into the Azure Data Lake Storage JDBC Driver. Either double-click the JAR file or execute it from the command line.

java -jar cdata.jdbc.adls.jar

Fill in the connection properties and copy the connection string to the clipboard.

When you configure the JDBC URL, you may also want to set the Max Rows connection property. This will limit the number of rows returned, which is especially helpful for improving performance when designing reports and visualizations.
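For example, here is a Gen 2 URL with the limit applied (the property is written MaxRows in the URL; the value of 100 below is purely illustrative):

jdbc:adls:Schema=ADLSGen2;Account=myAccount;FileSystem=myFileSystem;AccessKey=myAccessKey;MaxRows=100;InitiateOAuth=GETANDREFRESH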

Below is a typical JDBC URL:

jdbc:adls:Schema=ADLSGen2;Account=myAccount;FileSystem=myFileSystem;AccessKey=myAccessKey;InitiateOAuth=GETANDREFRESH

Save your connection string for use in Pentaho Data Integration.
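The same URL also works from plain Java, which can be a quick way to verify connectivity before configuring Pentaho. The sketch below assumes the driver JAR is on the classpath and queries a table named Resources, which is used here for illustration only; the tables actually exposed for your account are listed in Pentaho's Database Explorer and the driver's help documentation.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AdlsConnectionTest {
    public static void main(String[] args) throws Exception {
        // The driver class name is the same one entered in Pentaho's connection dialog.
        Class.forName("cdata.jdbc.adls.ADLSDriver");

        // Connection string copied from the built-in designer; values are placeholders.
        String url = "jdbc:adls:Schema=ADLSGen2;Account=myAccount;"
                + "FileSystem=myFileSystem;AccessKey=myAccessKey;"
                + "InitiateOAuth=GETANDREFRESH";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // "Resources" is an assumed table name, used for illustration only.
             ResultSet rs = stmt.executeQuery("SELECT * FROM Resources")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // print the first column of each row
            }
        }
    }
}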

Connect to Azure Data Lake Storage from Pentaho DI

Open Pentaho Data Integration and select "Database Connection" to configure a connection to the CData JDBC Driver for Azure Data Lake Storage.

  1. Click "General"
  2. Set Connection name (e.g. Azure Data Lake Storage Connection)
  3. Set Connection type to "Generic database"
  4. Set Access to "Native (JDBC)"
  5. Set Custom connection URL to your Azure Data Lake Storage connection string (e.g.
    jdbc:adls:Schema=ADLSGen2;Account=myAccount;FileSystem=myFileSystem;AccessKey=myAccessKey;InitiateOAuth=GETANDREFRESH)
  6. Set Custom driver class name to "cdata.jdbc.adls.ADLSDriver"
  7. Test the connection and click "OK" to save.

Create a Data Pipeline for Azure Data Lake Storage

Once the connection to Azure Data Lake Storage is configured using the CData JDBC Driver, you are ready to create a new transformation or job.

  1. Click "File" >> "New" >> "Transformation/job"
  2. Drag a "Table input" object into the workflow panel and select your Azure Data Lake Storage connection.
  3. Click "Get SQL select statement" and use the Database Explorer to view the available tables and views.
  4. Select a table and optionally preview the data for verification.
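The generated statement is ordinary SQL against the driver's schema. As a minimal sketch, assuming a table named Resources (substitute a table that actually appears in your Database Explorer):

SELECT * FROM Resources

You can edit the statement in the Table input step to project specific columns or add a WHERE clause, so only the rows you need flow into the transformation.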

At this point, you can continue your transformation or job by selecting a suitable destination and adding any transformations to modify, filter, or otherwise alter the data during replication.

Free Trial & More Information

Download a free, 30-day trial of the CData JDBC Driver for Azure Data Lake Storage and start working with your live Azure Data Lake Storage data in Pentaho Data Integration today.
