How to connect and process Printify Data from Azure Databricks
Use CData, Azure, and Databricks to perform data engineering and data science on live Printify Data
Databricks is a cloud-based service that provides data processing capabilities through Apache Spark. When paired with the CData JDBC Driver, customers can use Databricks to perform data engineering and data science on live Printify data. This article walks through hosting the CData JDBC Driver in Azure, as well as connecting to and processing live Printify data in Databricks.
With built-in optimized data processing, the CData JDBC driver offers unmatched performance for interacting with live Printify data. When you issue complex SQL queries to Printify, the driver pushes supported SQL operations, like filters and aggregations, directly to Printify and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations). Its built-in dynamic metadata querying allows you to work with and analyze Printify data using native data types.
Install the CData JDBC Driver in Azure
To work with live Printify data in Databricks, install the driver on your Azure cluster.
- Navigate to your Databricks administration screen and select the target cluster.
- On the Libraries tab, click "Install New."
- Select "DBFS" as the Library Source and "JAR" as the Library Type.
- Upload the JDBC JAR file (cdata.jdbc.api.jar) from the installation location (typically C:\Program Files\CData\CData API Driver for JDBC\lib).
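Once the library is installed, you can optionally confirm from a notebook that the JAR landed in DBFS. This is a minimal sanity check, assuming the default upload location (dbfs:/FileStore/jars); your workspace may use a different path.

# Optional check: list uploaded library JARs in DBFS.
# dbfs:/FileStore/jars is the typical default upload location; adjust if your workspace differs.
for f in dbutils.fs.ls("dbfs:/FileStore/jars"):
    if "cdata.jdbc.api" in f.name:
        print(f.path)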
Connect to Printify from Databricks
With the JAR file installed, we are ready to work with live Printify data in Databricks. Start by creating a new notebook in your workspace. Name the notebook, make sure Python is selected as the language (it should be by default), click Connect, and under General Compute select the cluster where you installed the JDBC driver (it should be selected by default).

Configure the Connection to Printify
Connect to Printify by referencing the class for the JDBC Driver and constructing a connection string to use in the JDBC URL. Additionally, you will need to set the RTK property in the JDBC URL (unless you are using a Beta driver). You can view the licensing file included in the installation for information on how to set this property.
driver = "cdata.jdbc.api.APIDriver" url = "jdbc:api:RTK=5246...;Profile=C:\profiles\Printify.apip;ProfileSettings='APIKey=your_personal_token';"
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Printify JDBC Driver. Either double-click the JAR file or execute it from the command line.
java -jar cdata.jdbc.api.jar
Fill in the connection properties and copy the connection string to the clipboard.
Start by setting the Profile connection property to the location of the Printify Profile on disk (e.g. C:\profiles\Profile.apip). Next, set the ProfileSettings connection property to the connection string for Printify (see below).
Printify API Profile Settings
In order to authenticate to Printify, you will need to provide your API Key. To get your API Key, navigate to My Profile, then Connections. In the Connections section you can generate your Personal Access Token (API Key) and set your Token Access Scopes. Personal Access Tokens are valid for one year; an expired token can be regenerated by following the same steps. Set the APIKey in the ProfileSettings property to your Personal Access Token to connect.

Load Printify Data
Once the connection is configured, you can load Printify data as a dataframe using the CData JDBC Driver and the connection information.
remote_table = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", "Tags") \
    .load()
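If you only need a subset of the data, Spark's JDBC source (Spark 2.4 and later) also accepts a query option in place of dbtable, so supported filtering and column selection is pushed to Printify as described earlier. A minimal sketch, reusing the driver and url variables and the Tags table and Id column from this article's examples:

# Minimal sketch: read only selected columns using Spark's "query" option (Spark 2.4+).
# "Tags" and "Id" follow the examples used elsewhere in this article.
tags_ids = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("query", "SELECT Id FROM Tags") \
    .load()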
Display Printify Data
Check the loaded Printify data by calling the display function.
display(remote_table.select("Id"))
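You can also inspect the schema the driver reported, or apply DataFrame transformations directly before moving on to SQL. For example:

# Inspect the columns and data types the driver reported for the Tags table.
remote_table.printSchema()

# DataFrame-style selection works as well; display() renders the result in the notebook.
display(remote_table.select("Id").limit(10))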

Analyze Printify Data in Azure Databricks
If you want to process data with Databricks SparkSQL, register the loaded data as a Temp View.
remote_table.createOrReplaceTempView("SAMPLE_VIEW")
The SparkSQL below retrieves the Printify data for analysis.
result = spark.sql("SELECT Id, ShippingMethod FROM SAMPLE_VIEW WHERE Status = 'pending'")
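Any SparkSQL can be run against the registered view. As a simple illustration, the following aggregation reuses the column names from the query above and renders the result in the notebook:

# Count pending records per shipping method and display the summary.
# Column names follow the SparkSQL example above.
summary = spark.sql("""
    SELECT ShippingMethod, COUNT(*) AS PendingCount
    FROM SAMPLE_VIEW
    WHERE Status = 'pending'
    GROUP BY ShippingMethod
""")
display(summary)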
The Printify data is only available within this notebook. To make it available to other users and notebooks, save it as a table.
remote_table.write.format("parquet").saveAsTable("SAMPLE_TABLE")
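Once saved, the table can be read back from any notebook attached to the workspace, or queried with SparkSQL, without going through the JDBC connection again. For example:

# Read the saved table back in another notebook or session.
saved = spark.table("SAMPLE_TABLE")
display(saved)

# Or query it directly with SparkSQL.
display(spark.sql("SELECT Id FROM SAMPLE_TABLE LIMIT 10"))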

Download a free, 30-day trial of the CData API Driver for JDBC and start working with your live Printify data in Azure Databricks. Reach out to our Support Team if you have any questions.