Migrating data from Bitbucket to Databricks using CData SSIS Components
Easily push Bitbucket data to Databricks using the CData SSIS Tasks for Bitbucket and Databricks.
Databricks is a unified data analytics platform that allows organizations to easily process, analyze, and visualize large amounts of data. It combines data engineering, data science, and machine learning capabilities in a single platform, making it easier for teams to collaborate and derive insights from their data.
The CData SSIS Components enhance SQL Server Integration Services by enabling users to easily import and export data across a wide variety of sources and destinations.
In this article, we explore the data type mapping considerations when exporting to Databricks and walk through how to migrate Bitbucket data to Databricks using the CData SSIS Components for Bitbucket and Databricks.
Data Type Mapping
| Databricks Schema | CData Schema |
| --- | --- |
| int, integer, int32 | int |
| smallint, short, int16 | smallint |
| double, float, real | float |
| date | date |
| datetime, timestamp | datetime |
| time, timespan | time |
| string, varchar | If length > 4000: nvarchar(max); otherwise: nvarchar(length) |
| long, int64, bigint | bigint |
| boolean, bool | tinyint |
| decimal, numeric | decimal |
| uuid | nvarchar(length) |
| binary, varbinary, longvarbinary | binary(1000), or varbinary(max) after SQL Server 2000 |
Special Considerations
- String/VARCHAR: String columns from Databricks map to different SQL data types depending on the column length. If the length exceeds 4000, the column is mapped to nvarchar(max); otherwise, it is mapped to nvarchar(length).
- DECIMAL/NUMERIC: Databricks supports DECIMAL types with up to 38 digits of precision; source columns with greater precision can cause load errors.
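To make the mapping concrete, here is a small sketch of a hypothetical Databricks table (the table and column names are illustrative only), with comments noting the CData schema type each column surfaces as per the table above:

```sql
-- Hypothetical Databricks table; comments show the CData schema type each
-- column surfaces as according to the mapping table above.
CREATE TABLE sample_commits (
  id          BIGINT,         -- bigint
  message     STRING,         -- nvarchar(max) if length > 4000, else nvarchar(length)
  authored_at TIMESTAMP,      -- datetime
  is_merge    BOOLEAN,        -- tinyint
  size_score  DECIMAL(38, 2)  -- decimal (38 is the maximum precision Databricks supports)
);
```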
Prerequisites
- Visual Studio 2022
- SQL Server Integration Services Projects extension for Visual Studio 2022
- CData SSIS Components for Databricks
- CData SSIS Components for Bitbucket
Create the project and add components
- Open Visual Studio and create a new Integration Services Project.
- Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
- Add a CData Bitbucket Source control and a CData Databricks Destination control to the data flow task.
Configure the Bitbucket source
Follow the steps below to specify properties required to connect to Bitbucket.
- Double-click the CData Bitbucket Source to open the source component editor and add a new connection.
- In the CData Bitbucket Connection Manager, configure the connection properties, then test and save the connection.
For most queries, you must set the Workspace. The only exception is the Workspaces table, which does not require this property; querying it returns a list of workspace slugs that can be used to set Workspace. To query this table, set Schema to 'Information' and execute SELECT * FROM Workspaces.
To connect to Bitbucket, set these parameters:
- Schema: To show general information about a workspace, such as its users, repositories, and projects, set this to Information. Otherwise, set this to the schema of the repository or project you are querying. To get a full set of available schemas, query the sys_schemas table.
- Workspace: Required for all queries except those against the Workspaces table, which simply returns the list of workspace slugs that can be used to set this property.
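For reference, the two discovery queries mentioned above can be run as written once the connection is configured with Schema set to 'Information':

```sql
-- Lists the workspace slugs available to the authenticated account;
-- use one of the returned slugs as the Workspace connection property.
SELECT * FROM Workspaces;

-- Lists the schemas exposed for the configured workspace.
SELECT * FROM sys_schemas;
```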
Authenticating to Bitbucket
Bitbucket supports OAuth authentication only. To use this authentication from any OAuth flow, you must create a custom OAuth application and set AuthScheme to OAuth.
Be sure to review the Help documentation for the required connection properties for your specific authentication needs (desktop applications, web applications, and headless machines).
Creating a custom OAuth application
From your Bitbucket account:
- Go to Settings (the gear icon) and select Workspace Settings.
- In the Apps and Features section, select OAuth Consumers.
- Click Add Consumer.
- Enter a name and description for your custom application.
- Set the callback URL:
- For desktop applications and headless machines, use http://localhost:33333 or another port number of your choice. The URI you set here becomes the CallbackURL property.
- For web applications, set the callback URL to a trusted redirect URL. This URL is the web location the user returns to with the token that verifies that your application has been granted access.
- If you plan to use client credentials to authenticate, you must select This is a private consumer. In the driver, you must set AuthScheme to client.
- Select which permissions to give your OAuth application. These determine what data you can read and write with it.
- To save the new custom application, click Save.
- After the application has been saved, you can select it to view its settings. The application's Key and Secret are displayed. Record these for future use. You will use the Key to set the OAuthClientId and the Secret to set the OAuthClientSecret.
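As a rough illustration (placeholder values, desktop OAuth flow assumed), the properties discussed above come together in the connection manager along these lines; consult the Help documentation for the exact set your flow requires:

```
AuthScheme=OAuth;
OAuthClientId=your_consumer_key;
OAuthClientSecret=your_consumer_secret;
CallbackURL=http://localhost:33333;
Workspace=your_workspace_slug;
Schema=Information;
```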
- After saving the connection, select "Table or view" and choose the table or view to export into Databricks, then close the CData Bitbucket Source Editor.
Configure the Databricks destination
With the Bitbucket Source configured, we can configure the Databricks connection and map the columns.
- Double-click the CData Databricks Destination to open the destination component editor and add a new connection.
- In the CData Databricks Connection Manager, configure the connection properties, then test and save the connection. To connect to a Databricks cluster, set the properties as described below.
Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
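Put together, the three required properties look something like this (placeholder values shown):

```
Server=your-workspace.cloud.databricks.com;
HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh;
Token=dapiXXXXXXXXXXXXXXXXXXXXXXXX;
```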
Other helpful connection properties
- QueryPassthrough: When this is set to True, queries are passed through directly to Databricks.
- ConvertDateTimetoGMT: When this is set to True, the components will convert date-time values to GMT, instead of the local time of the machine.
- UseUploadApi: Setting this property to True can improve performance when a bulk INSERT operation involves a large amount of data.
- UseCloudFetch: This option specifies whether to use CloudFetch to improve query efficiency when the table contains over one million entries.
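If needed, these optional properties can simply be appended to the same set of connection properties, for example:

```
UseUploadApi=True;
UseCloudFetch=True;
ConvertDateTimetoGMT=True;
```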
- After saving the connection, select a table in the Use a Table menu and, in the Action menu, select Insert.
- On the Column Mappings tab, configure the mappings from the input columns to the destination columns.
Run the project
You can now run the project. After the SSIS task finishes executing, data from the selected Bitbucket table or view is exported to the chosen Databricks table.