Migrating data from AlloyDB to Databricks using CData SSIS Components
Easily push AlloyDB data to Databricks using the CData SSIS Tasks for AlloyDB and Databricks.
Databricks is a unified data analytics platform that allows organizations to easily process, analyze, and visualize large amounts of data. It combines data engineering, data science, and machine learning capabilities in a single platform, making it easier for teams to collaborate and derive insights from their data.
The CData SSIS Components enhance SQL Server Integration Services by enabling users to easily import and export data from various sources and destinations.
In this article, we explore the data type mapping considerations when exporting to Databricks and walk through how to migrate AlloyDB data to Databricks using the CData SSIS Components for AlloyDB and Databricks.
Data Type Mapping
Databricks Schema | CData Schema
---|---
int, integer, int32 | int
smallint, short, int16 | smallint
double, float, real | float
date | date
datetime, timestamp | datetime
time, timespan | time
string, varchar | If length > 4000: nvarchar(max); otherwise: nvarchar(length)
long, int64, bigint | bigint
boolean, bool | tinyint
decimal, numeric | decimal
uuid | nvarchar(length)
binary, varbinary, longvarbinary | binary(1000), or varbinary(max) after SQL Server 2000
Special Considerations
- String/VARCHAR: String columns from Databricks map to different data types depending on the declared column length. If the length exceeds 4000, the column is mapped to nvarchar(max); otherwise, it is mapped to nvarchar(length).
- DECIMAL: Databricks supports DECIMAL types up to 38 digits of precision; source columns that exceed this precision can cause load errors.
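To make these rules concrete, here is an illustrative mapping for a few hypothetical Databricks columns (the column names are invented, and the assumption that an unbounded STRING is treated as exceeding the 4000-character threshold is ours):

```
-- Hypothetical Databricks columns        Resulting SQL Server types
name   VARCHAR(255)    ->  nvarchar(255)    -- declared length <= 4000
notes  STRING          ->  nvarchar(max)    -- no declared length (assumed > 4000)
price  DECIMAL(38,10)  ->  decimal(38,10)   -- 38 digits is the Databricks maximum
```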
Prerequisites
- Visual Studio 2022
- SQL Server Integration Services Projects extension for Visual Studio 2022
- CData SSIS Components for Databricks
- CData SSIS Components for AlloyDB
Create the project and add components
- Open Visual Studio and create a new Integration Services Project.
- Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
- Add a CData AlloyDB Source control and a CData Databricks Destination control to the data flow task.
Configure the AlloyDB source
Follow the steps below to specify properties required to connect to AlloyDB.
- Double-click the CData AlloyDB Source to open the source component editor and add a new connection.
- In the CData AlloyDB Connection Manager, configure the connection properties, then test and save the connection.
The following connection properties are usually required to connect to AlloyDB:
- Server: The host name or IP address of the server hosting the AlloyDB database.
- User: The user account used to authenticate with the AlloyDB server.
- Password: The password used to authenticate with the AlloyDB server.
You can also optionally set the following:
- Database: The database to connect to. If this is not set, the user's default database is used.
- Port: The port of the server hosting the AlloyDB database. This defaults to 5432.
Authenticating with Standard Authentication
Standard authentication (using the user/password combination supplied earlier) is the default form of authentication.
No further action is required to leverage Standard Authentication to connect.
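For reference, a connection string combining these properties with standard authentication might look like the following (the server address, database, and credentials are placeholders):

```
Server=127.0.0.1;Port=5432;Database=alloydb;User=alloydb_user;Password=admin
```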
Authenticating with pg_hba.conf Auth Schemes
Additional authentication methods are available, but they must be enabled in the pg_hba.conf file on the AlloyDB server.
Instructions for configuring authentication on the AlloyDB server are available in the AlloyDB documentation.
Authenticating with MD5 Authentication
This authentication method must be enabled by setting the auth-method in the pg_hba.conf file to md5.
Authenticating with SASL Authentication
This authentication method must be enabled by setting the auth-method in the pg_hba.conf file to scram-sha-256.
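As an illustration, pg_hba.conf entries enabling these two auth methods might look like the following (the database, user, and address fields are placeholders; choose values appropriate to your deployment):

```
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    md5
host    all       all   0.0.0.0/0    scram-sha-256
```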
Authenticating with Kerberos
Kerberos authentication is initiated by the AlloyDB server when the client attempts to connect. You must set up Kerberos on the AlloyDB server to activate this authentication method. Once Kerberos authentication is configured on the server, see the Kerberos section of the help documentation for details on how to authenticate with it.
- After saving the connection, select "Table or view" and choose the table or view to export into Databricks, then close the CData AlloyDB Source Editor.
Configure the Databricks destination
With the AlloyDB Source configured, we can configure the Databricks connection and map the columns.
- Double-click the CData Databricks Destination to open the destination component editor and add a new connection.
- In the CData Databricks Connection Manager, configure the connection properties, then test and save the connection. To connect to a Databricks cluster, set the properties as described below.
Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
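Put together, a Databricks connection string might look like the following (the hostname, HTTP path, and token are placeholder values):

```
Server=dbc-a1b2c3d4-e5f6.cloud.databricks.com;HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcd123;Token=dapi0123456789abcdef
```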
Other helpful connection properties
- QueryPassthrough: When set to True, queries are passed through directly to Databricks.
- ConvertDateTimetoGMT: When set to True, the components convert date-time values to GMT instead of the local time of the machine.
- UseUploadApi: Setting this property to True improves performance when a Bulk INSERT operation involves a large amount of data.
- UseCloudFetch: Specifies whether to use CloudFetch to improve query efficiency when the table contains more than one million entries.
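Like the required properties, these options can be appended to the connection string; for example (illustrative values only, with the required properties elided):

```
Server=...;HTTPPath=...;Token=...;ConvertDateTimetoGMT=True;UseUploadApi=True;UseCloudFetch=True
```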
- After saving the connection, select a table in the Use a Table menu and, in the Action menu, select Insert.
- On the Column Mappings tab, configure the mappings from the input columns to the destination columns.
Run the project
You can now run the project. After the SSIS task finishes executing, the data from your AlloyDB table is exported to the chosen table in Databricks.
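If you later deploy the package and want to run it outside Visual Studio, the standard dtexec utility can execute it from the command line (the package path below is hypothetical):

```
dtexec /F "C:\SSIS\AlloyDBToDatabricks.dtsx"
```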