Connect to Azure Data Lake Storage Data in HULFT Integrate

Connect to Azure Data Lake Storage as a JDBC data source in HULFT Integrate

HULFT Integrate is a modern data integration platform that provides a drag-and-drop user interface for creating cooperation flows, data conversions, and processing, making complex data connections easier than ever to execute. When paired with the CData JDBC Driver for Azure Data Lake Storage, HULFT Integrate can work with live Azure Data Lake Storage data. This article walks through connecting to Azure Data Lake Storage and moving the data into a CSV file.

With built-in optimized data processing, the CData JDBC Driver offers unmatched performance for interacting with live Azure Data Lake Storage data. When you issue complex SQL queries to Azure Data Lake Storage, the driver pushes supported SQL operations, like filters and aggregations, directly to Azure Data Lake Storage and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations). Its built-in dynamic metadata querying allows you to work with and analyze Azure Data Lake Storage data using native data types.
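
As a point of reference, the same driver can be used from plain Java outside of HULFT Integrate. The sketch below is illustrative only: it assumes the driver JAR is on the classpath and uses placeholder Gen 2 connection values, the driver class name from the HULFT connection settings below, and the sample query from this walkthrough.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class AdlsQuerySketch {
      public static void main(String[] args) throws Exception {
          // Driver class name from the HULFT connection settings below
          Class.forName("cdata.jdbc.adls.ADLSDriver");

          // Placeholder Gen 2 connection string; see the connection properties later in this article
          String url = "jdbc:adls:Schema=ADLSGen2;Account=myAccount;"
                     + "FileSystem=myFileSystem;AccessKey=myAccessKey;"
                     + "InitiateOAuth=GETANDREFRESH";

          // Supported operations in the query (such as filters) are pushed to Azure Data Lake Storage
          try (Connection conn = DriverManager.getConnection(url);
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SELECT FullPath, Permission FROM Resources")) {
              while (rs.next()) {
                  System.out.println(rs.getString("FullPath") + "\t" + rs.getString("Permission"));
              }
          }
      }
  }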

Enable Access to Azure Data Lake Storage

To enable access to Azure Data Lake Storage data from HULFT Integrate projects:

  1. Copy the CData JDBC Driver JAR file, cdata.jdbc.adls.jar (and the license file, cdata.jdbc.adls.lic, if it exists), to the jdbc_adapter subfolder of the Integrate Server
  2. Restart the HULFT Integrate Server and launch HULFT Integrate Studio

Build a Project with Access to Azure Data Lake Storage Data

Once you copy the JAR file, you can create a project with access to Azure Data Lake Storage data. Start by opening Integrate Studio and creating a new project.

  1. Name the project
  2. Ensure the "Create script" checkbox is checked
  3. Click Next
  4. Name the script (e.g., ADLStoCSV)

Once you create the project, add components to the script to copy Azure Data Lake Storage data to a CSV file.

Configure an Execute Select SQL Component

Drag an "Execute Select SQL" component from the Tool Palette (Database -> JDBC) into the Script workspace.

  1. In the "Required settings" tab for the Destination, click "Add" to create a new connection for Azure Data Lake Storage. Set the following properties:
    • Name: Azure Data Lake Storage Connection Settings
    • Driver class name: cdata.jdbc.adls.ADLSDriver
    • URL: jdbc:adls:Schema=ADLSGen2;Account=myAccount;FileSystem=myFileSystem;AccessKey=myAccessKey;InitiateOAuth=GETANDREFRESH

      Built-in Connection String Designer

      For assistance constructing the JDBC URL, use the connection string designer built into the Azure Data Lake Storage JDBC Driver. Either double-click the JAR file or execute it from the command line.

      java -jar cdata.jdbc.adls.jar

      Fill in the connection properties and copy the connection string to the clipboard.

      Authenticating to a Gen 1 DataLakeStore Account

      Gen 1 uses OAuth 2.0 in Azure AD for authentication.

      For this, an Active Directory web application is required. You can create one as follows:

      1. Sign in to your Azure Account through the Azure Portal.
      2. Select "Azure Active Directory".
      3. Select "App registrations".
      4. Select "New application registration".
      5. Provide a name and URL for the application. Select Web app for the type of application you want to create.
      6. Select "Required permissions" and change the required permissions for this app. At a minimum, "Azure Data Lake" and "Windows Azure Service Management API" are required.
      7. Select "Key" and generate a new key. Add a description, a duration, and take note of the generated key. You won't be able to see it again.

      To authenticate against a Gen 1 DataLakeStore account, the following properties are required:

      • Schema: Set this to ADLSGen1.
      • Account: Set this to the name of the account.
      • OAuthClientId: Set this to the application Id of the app you created.
      • OAuthClientSecret: Set this to the key generated for the app you created.
      • TenantId: Set this to the tenant Id. See the driver's help documentation for the TenantId property for more information on how to acquire this.
      • Directory: Set this to the path which will be used to store the replicated file. If not specified, the root directory will be used.
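
      As an illustration, a Gen 1 JDBC URL assembled from these properties might look like the following (all values are placeholders):

      jdbc:adls:Schema=ADLSGen1;Account=myAccount;OAuthClientId=myClientId;OAuthClientSecret=myClientSecret;TenantId=myTenantId;InitiateOAuth=GETANDREFRESH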

      Authenticating to a Gen 2 DataLakeStore Account

      To authenticate against a Gen 2 DataLakeStore account, the following properties are required:

      • Schema: Set this to ADLSGen2.
      • Account: Set this to the name of the account.
      • FileSystem: Set this to the file system which will be used for this account.
      • AccessKey: Set this to the access key which will be used to authenticate the calls to the API. See the driver's help documentation for the AccessKey property for more information on how to acquire this.
      • Directory: Set this to the path which will be used to store the replicated file. If not specified, the root directory will be used.
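
      As an illustration, a Gen 2 JDBC URL assembled from these properties (this time including the optional Directory) might look like the following (all values are placeholders):

      jdbc:adls:Schema=ADLSGen2;Account=myAccount;FileSystem=myFileSystem;AccessKey=myAccessKey;Directory=myDirectory;InitiateOAuth=GETANDREFRESH
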
  2. Write your SQL statement. For example:
    SELECT FullPath, Permission FROM Resources
  3. Click "Extraction test" to ensure the connection and query are configured properly
  4. Click "Execute SQL statement and set output schema"
  5. Click "Finish"

Configure a Write CSV File Component

Drag a "Write CSV File" component from the Tool Palette (File -> CSV) onto the workspace.

  1. Set a file to write the query results to (e.g. Resources.csv)
  2. Set "Input data" to the "Select SQL" component
  3. Add columns for each field selected in the SQL query
  4. In the "Write settings" tab, check the checkbox to "Insert column names into first row"
  5. Click "Finish"

Map Azure Data Lake Storage Fields to the CSV Columns

Map each column from the "Select" component to the corresponding column for the "CSV" component.

Finish the Script

Drag the "Start" component onto the "Select" component and the "CSV" component onto the "End" component. Build the script and run the script to move Azure Data Lake Storage data into a CSV file.

Download a free, 30-day trial of the CData JDBC Driver for Azure Data Lake Storage and start working with your live Azure Data Lake Storage data in HULFT Integrate. Reach out to our Support Team if you have any questions.