Pipe Databricks Data to CSV in PowerShell
Use standard PowerShell cmdlets to access Databricks tables.
The CData Cmdlets Module for Databricks is a standard PowerShell module offering straightforward integration with Databricks. Below, you will find examples of using our Databricks Cmdlets with native PowerShell cmdlets.
Creating a Connection to Your Databricks Data
To connect to a Databricks cluster, set the authentication and connection properties described below.
Personal Access Token
To authenticate using a Personal Access Token, set the following:
- AuthScheme: Set this to PersonalAccessToken.
- Token: The token used to access the Databricks server. It can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab.
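For example, a personal access token connection might look like the following sketch (placeholder values throughout; the Server, HTTPPath, and Database properties are described under "Connecting to Databricks" below):
# Hypothetical placeholder values -- substitute your own token and cluster details
$conn = Connect-Databricks -AuthScheme "PersonalAccessToken" -Token "dapi1234567890abcdef" -Server "your-workspace.cloud.databricks.com" -HTTPPath "sql/protocolv1/o/0/0000-000000-example1" -Database "default"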
Azure Active Directory
To authenticate to Databricks using Azure Service Principal:
- AuthScheme: Set this to AzureServicePrincipal.
- AzureTenantId: Set this to the tenant ID of your Microsoft Azure Active Directory.
- AzureClientId: Set to the application (client) ID of your Microsoft Azure Active Directory application.
- AzureClientSecret: Set to the application (client) secret of your Microsoft Azure Active Directory application.
- AzureSubscriptionId: Set this to the Subscription Id of your Microsoft Azure Databricks Service Workspace.
- AzureResourceGroup: Set this to the Resource Group name of your Microsoft Azure Databricks Service Workspace.
- AzureWorkspace: Set this to the name of your Microsoft Azure Databricks Service Workspace.
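A service principal connection might look like the following sketch, assuming each property above is exposed as a Connect-Databricks parameter of the same name:
# Each $variable holds the corresponding value from your Azure portal
$conn = Connect-Databricks -AuthScheme "AzureServicePrincipal" -AzureTenantId "$AzureTenantId" -AzureClientId "$AzureClientId" -AzureClientSecret "$AzureClientSecret" -AzureSubscriptionId "$AzureSubscriptionId" -AzureResourceGroup "$AzureResourceGroup" -AzureWorkspace "$AzureWorkspace" -Server "$Server" -HTTPPath "$HTTPPath" -Database "$Database"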
Connecting to Databricks
To connect to a Databricks cluster, set the properties as described below.
Note: You can find the required values in your Databricks instance by navigating to Clusters, selecting the desired cluster, and opening the JDBC/ODBC tab under Advanced Options.
- Database: Set to the name of the Databricks database.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (you can obtain this value by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
Cloud Storage Configuration
The provider supports DBFS, Azure Blob Storage, and AWS S3 for uploading CSV files.
DBFS Cloud Storage
To use DBFS for cloud storage, set the following:
- CloudStorageType: Set this to DBFS.
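For example (a sketch, assuming CloudStorageType is exposed as a Connect-Databricks parameter like the other connection properties):
$conn = Connect-Databricks -AuthScheme "$AuthScheme" -Server "$Server" -HTTPPath "$HTTPPath" -Token "$Token" -Database "$Database" -CloudStorageType "DBFS"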
Azure Blob Storage
Set the following to use Azure Blob Storage for cloud storage:
- CloudStorageType: Set this to Azure Blob storage.
- StoreTableInCloud: Set this to True to store tables in cloud storage when creating a new table.
- AzureStorageAccount: Set this to the name of your Azure storage account.
- AzureAccessKey: Set to the storage key associated with your Databricks account. To find this value, sign in to the Azure portal with the root account, select your storage account, and click Access Keys.
- AzureBlobContainer: Set to the name of your Azure Blob storage container.
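For example (a sketch, assuming these properties are exposed as Connect-Databricks parameters; the storage account and container names are placeholders):
$conn = Connect-Databricks -AuthScheme "$AuthScheme" -Server "$Server" -HTTPPath "$HTTPPath" -Token "$Token" -Database "$Database" -CloudStorageType "Azure Blob storage" -StoreTableInCloud "True" -AzureStorageAccount "mystorageaccount" -AzureAccessKey "$AzureAccessKey" -AzureBlobContainer "mycontainer"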
AWS S3
Set the following to use AWS S3 for cloud storage:
- CloudStorageType: Set this to AWS S3.
- StoreTableInCloud: Set this to True to store tables in cloud storage when creating a new table.
- AWSAccessKey: The AWS account access key. This value is accessible from your AWS security credentials page.
- AWSSecretKey: Your AWS account secret key. This value is accessible from your AWS security credentials page.
- AWSS3Bucket: Set to the name of your AWS S3 bucket.
- AWSRegion: The hosting region for your Amazon Web Services. You can obtain the AWS Region value by navigating to the Buckets List page of your Amazon S3 service, for example, us-east-1.
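For example (a sketch, assuming these properties are exposed as Connect-Databricks parameters; the bucket name is a placeholder):
$conn = Connect-Databricks -AuthScheme "$AuthScheme" -Server "$Server" -HTTPPath "$HTTPPath" -Token "$Token" -Database "$Database" -CloudStorageType "AWS S3" -StoreTableInCloud "True" -AWSAccessKey "$AWSAccessKey" -AWSSecretKey "$AWSSecretKey" -AWSS3Bucket "mybucket" -AWSRegion "us-east-1"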
In general, once the connection properties for your scenario are set, establish a connection:
$conn = Connect-Databricks -AuthScheme "$AuthScheme" -Server "$Server" -HTTPPath "$HTTPPath" -Token "$Token" -Database "$Database"
Selecting Data
Follow the steps below to retrieve data from the Customers table and pipe the result into a CSV file:
Select-Databricks -Connection $conn -Table Customers | Select -Property * -ExcludeProperty Connection,Table,Columns | Export-Csv -Path c:\myCustomersData.csv -NoTypeInformation
You will notice that we piped the results from Select-Databricks into a Select-Object cmdlet and excluded some properties before piping them into an Export-Csv cmdlet. We do this because the CData Cmdlets append Connection, Table, and Columns information onto each "row" in the result set, and we do not necessarily want that information in our CSV file.
The Connection, Table, and Columns are appended to the results in order to facilitate piping results from one of the CData Cmdlets directly into another one.
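You can also filter the rows before exporting them. For example, the following sketch reuses the -Where parameter (shown in the deletion and update examples below) to export only US customers:
Select-Databricks -Connection $conn -Table Customers -Where "Country = 'US'" | Select -Property * -ExcludeProperty Connection,Table,Columns | Export-Csv -Path c:\myUSCustomersData.csv -NoTypeInformation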
Deleting Data
The following line deletes any records that match the criteria:
Select-Databricks -Connection $conn -Table Customers -Where "Country = 'US'" | Remove-Databricks
Inserting and Updating Data
The cmdlets make data transformation and data cleansing easy. The following example loads data from a CSV file into Databricks, checking first whether a record already exists and needs to be updated instead of inserted.
Import-Csv -Path C:\MyCustomersUpdates.csv | %{
  $record = Select-Databricks -Connection $conn -Table Customers -Where ("Id = `'"+$_.Id+"`'")
  if($record){
    Update-Databricks -Connection $conn -Table Customers -Columns ("City","CompanyName") -Values ($_.City, $_.CompanyName) -Where ("Id = `'"+$_.Id+"`'")
  }else{
    Add-Databricks -Connection $conn -Table Customers -Columns ("City","CompanyName") -Values ($_.City, $_.CompanyName)
  }
}
As always, our goal is to simplify the way you connect to data. With cmdlets, users can install a data module, set the connection properties, and start building. Download Cmdlets and start working with your data in PowerShell today!