How to connect and process Azure Data Lake Storage Data from Azure Databricks
Use CData, Azure, and Databricks to perform data engineering and data science on live Azure Data Lake Storage Data
Databricks is a cloud-based service that provides data processing capabilities through Apache Spark. When paired with the CData JDBC Driver, customers can use Databricks to perform data engineering and data science on live Azure Data Lake Storage data. This article walks through hosting the CData JDBC Driver in Azure, as well as connecting to and processing live Azure Data Lake Storage data in Databricks.
With built-in optimized data processing, the CData JDBC Driver offers unmatched performance for interacting with live Azure Data Lake Storage data. When you issue complex SQL queries to Azure Data Lake Storage, the driver pushes supported SQL operations, like filters and aggregations, directly to Azure Data Lake Storage and uses the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations). Its built-in dynamic metadata querying allows you to work with and analyze Azure Data Lake Storage data using native data types.
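For example, a query like the following against the driver's Resources table (used later in this article; the query itself is illustrative) lets the driver push the filter and aggregation down to Azure Data Lake Storage:

SELECT Permission, COUNT(*) FROM Resources WHERE Type = 'FILE' GROUP BY Permission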
Install the CData JDBC Driver in Azure
To work with live Azure Data Lake Storage data in Databricks, install the driver on your Azure cluster.
- Navigate to your Databricks administration screen and select the target cluster.
- On the Libraries tab, click "Install New."
- Select "DBFS" as the Library Source and "JAR" as the Library Type.
- Upload the JDBC JAR file (cdata.jdbc.adls.jar) from the installation location (typically C:\Program Files\CData\CData JDBC Driver for Azure Data Lake Storage\lib).
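If you prefer to script this step, you can stage the JAR in DBFS with the Databricks CLI before attaching it as a library. A minimal sketch, assuming the Databricks CLI is installed and configured, and using an example destination path:

databricks fs cp "cdata.jdbc.adls.jar" dbfs:/FileStore/jars/cdata.jdbc.adls.jar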
Connect to Azure Data Lake Storage from Databricks
With the JAR file installed, we are ready to work with live Azure Data Lake Storage data in Databricks. Start by creating a new notebook in your workspace: name the notebook, make sure Python is selected as the language (it should be by default), click Connect, and under General Compute select the cluster where you installed the JDBC driver (it should be selected by default).

Configure the Connection to Azure Data Lake Storage
Connect to Azure Data Lake Storage by referencing the class for the JDBC Driver and constructing a connection string to use in the JDBC URL. Additionally, you will need to set the RTK property in the JDBC URL (unless you are using a Beta driver). You can view the licensing file included in the installation for information on how to set this property.
driver = "cdata.jdbc.adls.ADLSDriver" url = "jdbc:adls:RTK=5246...;Schema=ADLSGen2;Account=myAccount;FileSystem=myFileSystem;AccessKey=myAccessKey;InitiateOAuth=GETANDREFRESH"
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Azure Data Lake Storage JDBC Driver. Either double-click the JAR file or execute it from the command line.
java -jar cdata.jdbc.adls.jar
Fill in the connection properties and copy the connection string to the clipboard.
Authenticating to a Gen 1 DataLakeStore Account
Gen 1 uses OAuth 2.0 in Azure AD for authentication, which requires an Active Directory web application. If you do not already have one, you can create one in the Azure portal.
To authenticate against a Gen 1 DataLakeStore account, the following properties are required:
- Schema: Set this to ADLSGen1.
- Account: Set this to the name of the account.
- OAuthClientId: Set this to the application Id of the app you created.
- OAuthClientSecret: Set this to the key generated for the app you created.
- TenantId: Set this to the tenant Id. See the driver's help documentation for more information on how to acquire this.
- Directory: Set this to the path which will be used to store the replicated file. If not specified, the root directory will be used.
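For example, a Gen 1 JDBC URL assembled from these properties might look like the following (all values are placeholders):

url = "jdbc:adls:Schema=ADLSGen1;Account=myAccount;OAuthClientId=myClientId;OAuthClientSecret=myClientSecret;TenantId=myTenantId;InitiateOAuth=GETANDREFRESH"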
Authenticating to a Gen 2 DataLakeStore Account
To authenticate against a Gen 2 DataLakeStore account, the following properties are required:
- Schema: Set this to ADLSGen2.
- Account: Set this to the name of the account.
- FileSystem: Set this to the file system which will be used for this account.
- AccessKey: Set this to the access key which will be used to authenticate the calls to the API. See the driver's help documentation for more information on how to acquire this.
- Directory: Set this to the path which will be used to store the replicated file. If not specified, the root directory will be used.

Load Azure Data Lake Storage Data
Once the connection is configured, you can load Azure Data Lake Storage data as a dataframe using the CData JDBC Driver and the connection information.
remote_table = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", "Resources") \
    .load()
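As a quick sanity check, you can print the dataframe's schema with a standard Spark call to confirm the driver's columns were discovered:

remote_table.printSchema()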
Display Azure Data Lake Storage Data
Check the loaded Azure Data Lake Storage data by calling the display function.
display(remote_table.select("FullPath"))

Analyze Azure Data Lake Storage Data in Azure Databricks
If you want to process data with Databricks SparkSQL, register the loaded data as a Temp View.
remote_table.createOrReplaceTempView("SAMPLE_VIEW")
The SparkSQL below retrieves the Azure Data Lake Storage data for analysis.
result = spark.sql("SELECT FullPath, Permission FROM SAMPLE_VIEW WHERE Type = 'FILE'")
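You can inspect the query result with the same display function used earlier:

display(result)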
The data from Azure Data Lake Storage is only available in the target notebook. If you want to share it with other users, save it as a table.
remote_table.write.format("parquet").saveAsTable("SAMPLE_TABLE")
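Once saved, the table can be queried from other notebooks attached to the workspace, for example:

display(spark.table("SAMPLE_TABLE").limit(10))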

Download a free, 30-day trial of the CData JDBC Driver for Azure Data Lake Storage and start working with your live Azure Data Lake Storage data in Azure Databricks. Reach out to our Support Team if you have any questions.