How to work with Azure Data Lake Storage Data in Apache Spark using SQL
Access and process Azure Data Lake Storage Data in Apache Spark using the CData JDBC Driver.
Apache Spark is a fast and general engine for large-scale data processing. When paired with the CData JDBC Driver for Azure Data Lake Storage, Spark can work with live Azure Data Lake Storage data. This article describes how to connect to and query Azure Data Lake Storage data from a Spark shell.
The CData JDBC Driver offers unmatched performance for interacting with live Azure Data Lake Storage data due to optimized data processing built into the driver. When you issue complex SQL queries to Azure Data Lake Storage, the driver pushes supported SQL operations, like filters and aggregations, directly to Azure Data Lake Storage and utilizes the embedded SQL engine to process unsupported operations (often SQL functions and JOIN operations) client-side. With built-in dynamic metadata querying, you can work with and analyze Azure Data Lake Storage data using native data types.
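For example, because Spark's JDBC source accepts a derived table in the dbtable option, a filter can be expressed in SQL and pushed through the driver to the source. The snippet below is a sketch only: it borrows the Gen 2 connection URL constructed later in this article, and the account values are placeholders.
scala> // Sketch: the WHERE filter below is handled by the driver at the source, not in Spark
scala> val files_df = spark.sqlContext.read.format("jdbc").option("url", "jdbc:adls:Schema=ADLSGen2;Account=myAccount;FileSystem=myFileSystem;AccessKey=myAccessKey;").option("dbtable", "(SELECT FullPath, Permission FROM Resources WHERE Type = 'FILE') files").option("driver", "cdata.jdbc.adls.ADLSDriver").load()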
Install the CData JDBC Driver for Azure Data Lake Storage
Download the CData JDBC Driver for Azure Data Lake Storage installer, unzip the package, and run the JAR file to install the driver.
Start a Spark Shell and Connect to Azure Data Lake Storage Data
- Open a terminal and start the Spark shell with the CData JDBC Driver for Azure Data Lake Storage JAR file passed via the --jars parameter (quote the path, since it contains spaces):
$ spark-shell --jars "/CData/CData JDBC Driver for Azure Data Lake Storage/lib/cdata.jdbc.adls.jar"
- With the shell running, you can connect to Azure Data Lake Storage with a JDBC URL and use the SQL Context load() function to read a table.
Authenticating to a Gen 1 DataLakeStore Account
Gen 1 uses OAuth 2.0 in Azure AD for authentication. This requires an Active Directory web application, which you can create in the Azure portal under Azure Active Directory > App registrations.
To authenticate against a Gen 1 DataLakeStore account, the following properties are required (a sample JDBC URL follows the list):
- Schema: Set this to ADLSGen1.
- Account: Set this to the name of the account.
- OAuthClientId: Set this to the application Id of the app you created.
- OAuthClientSecret: Set this to the key generated for the app you created.
- TenantId: Set this to the Id of the Azure AD tenant in which the application was created. You can find it in the Azure portal under Azure Active Directory > Properties.
- Directory: Set this to the path which will be used to store the replicated file. If not specified, the root directory will be used.
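For reference, a Gen 1 JDBC URL assembled from these properties looks like the following (every value below is a placeholder for your own account and app registration):
jdbc:adls:Schema=ADLSGen1;Account=myAccount;OAuthClientId=myAppId;OAuthClientSecret=myAppKey;TenantId=myTenantId;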
Authenticating to a Gen 2 DataLakeStore Account
To authenticate against a Gen 2 DataLakeStore account, the following properties are required:
- Schema: Set this to ADLSGen2.
- Account: Set this to the name of the account.
- FileSystem: Set this to the file system which will be used for this account.
- AccessKey: Set this to the access key used to authenticate calls to the API. You can find it in the Azure portal under the storage account's Access keys section.
- Directory: Set this to the path which will be used to store the replicated file. If not specified, the root directory will be used.
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Azure Data Lake Storage JDBC Driver. Either double-click the JAR file or execute it from the command line:
java -jar cdata.jdbc.adls.jar
Fill in the connection properties and copy the connection string to the clipboard.
Configure the connection to Azure Data Lake Storage using the connection string generated above.
scala> val adls_df = spark.sqlContext.read.format("jdbc").option("url", "jdbc:adls:Schema=ADLSGen2;Account=myAccount;FileSystem=myFileSystem;AccessKey=myAccessKey;").option("dbtable","Resources").option("driver","cdata.jdbc.adls.ADLSDriver").load()
- Once you connect and the data is loaded, you will see the table schema displayed; you can also print it explicitly, as shown below.
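printSchema is a standard Spark DataFrame method that prints the column names and inferred types:
scala> adls_df.printSchema()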
- Register the Azure Data Lake Storage data as a temporary table:
scala> adls_df.registerTempTable("resources")
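Note that registerTempTable is deprecated as of Spark 2.0; on newer versions, the equivalent call is:
scala> adls_df.createOrReplaceTempView("resources")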
- Perform custom SQL queries against the data using commands like the one below:
scala> adls_df.sqlContext.sql("SELECT FullPath, Permission FROM Resources WHERE Type = 'FILE'").collect.foreach(println)
You will see the results displayed in the console.
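With the temporary table registered, you can also run queries through the Spark session itself. The aggregation below is a sketch against the same Resources table; show() prints a tabular preview of the results:
scala> spark.sql("SELECT Type, COUNT(*) AS FileCount FROM resources GROUP BY Type").show()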
Using the CData JDBC Driver for Azure Data Lake Storage in Apache Spark, you can perform fast and complex analytics on Azure Data Lake Storage data, combining the power and utility of Spark with your data. Download a free, 30-day trial of any of the 200+ CData JDBC Drivers and get started today.