How to connect and process Drip Data from Azure Databricks
Use CData, Azure, and Databricks to perform data engineering and data science on live Drip Data
Databricks is a cloud-based service that provides data processing capabilities through Apache Spark. When paired with the CData JDBC Driver, customers can use Databricks to perform data engineering and data science on live Drip data. This article walks through hosting the CData JDBC Driver in Azure, as well as connecting to and processing live Drip data in Databricks.
With built-in optimized data processing, the CData JDBC Driver offers unmatched performance for interacting with live Drip data. When you issue complex SQL queries to Drip, the driver pushes supported SQL operations, like filters and aggregations, directly to Drip and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations). Its built-in dynamic metadata querying allows you to work with and analyze Drip data using native data types.
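For example, a DataFrame filter like the one below is a candidate for pushdown: the driver translates the predicate into a WHERE clause executed by Drip, so only matching rows cross the wire (a minimal sketch, assuming remote_table is a Drip-backed DataFrame created with the driver, as shown later in this article):
# The predicate is pushed to the driver as SQL where supported,
# so filtering happens at the source rather than in Spark.
scheduled = remote_table.filter(remote_table.Status == "scheduled")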
Install the CData JDBC Driver in Azure
To work with live Drip data in Databricks, install the driver on your Azure cluster.
- Navigate to your Databricks administration screen and select the target cluster.
- On the Libraries tab, click "Install New."
- Select "Upload" as the Library Source and "Jar" as the Library Type.
- Upload the JDBC JAR file (cdata.jdbc.api.jar) from the installation location (typically C:\Program Files\CData\CData API Driver for JDBC\lib).
Connect to Drip from Databricks
With the JAR file installed, we are ready to work with live Drip data in Databricks. Start by creating a new notebook in your workspace. Name the notebook, select Python as the language (though Scala is available as well), and choose the cluster where you installed the JDBC driver. When the notebook launches, we can configure the connection, query Drip, and create a basic report.
Configure the Connection to Drip
Connect to Drip by referencing the class for the JDBC Driver and constructing a connection string to use in the JDBC URL. Additionally, you will need to set the RTK property in the JDBC URL (unless you are using a Beta driver). You can view the licensing file included in the installation for information on how to set this property.
driver = "cdata.jdbc.api.APIDriver" url = "jdbc:api:RTK=5246...;Profile=C:\profiles\Drip.apip;ProfileSettings='APIKey=my_api_token';"
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Drip JDBC Driver. Either double-click the JAR file or execute it from the command line.
java -jar cdata.jdbc.api.jar
Fill in the connection properties and copy the connection string to the clipboard.
Start by setting the Profile connection property to the location of the Drip Profile on disk (e.g. C:\profiles\Drip.apip). Next, set the ProfileSettings connection property to the connection string for Drip (see below).
Drip API Profile Settings
To use Token Authentication, specify your APIKey within the ProfileSettings connection property. The APIKey should be set to your Drip personal API Token.
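In a notebook, you might assemble the full JDBC URL from these pieces as follows (a sketch; the profile path, RTK value, and API token are placeholders to replace with your own values):
# Placeholder values -- substitute your profile location and personal API token.
profile = r"C:\profiles\Drip.apip"
api_token = "my_api_token"
url = f"jdbc:api:RTK=5246...;Profile={profile};ProfileSettings='APIKey={api_token}';"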
Load Drip Data
Once the connection is configured, you can load Drip data as a dataframe using the CData JDBC Driver and the connection information.
remote_table = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", "Broadcasts") \
    .load()
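As a quick sanity check that the driver connected and discovered metadata, you can print the schema Spark inferred for the table (a minimal sketch):
# Show the column names and types reported by the JDBC driver.
remote_table.printSchema()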
Display Drip Data
Check the loaded Drip data by calling the display function.
display(remote_table.select("Id"))
Analyze Drip Data in Azure Databricks
If you want to process data with Databricks SparkSQL, register the loaded data as a Temp View.
remote_table.createOrReplaceTempView("SAMPLE_VIEW")
The SparkSQL below retrieves the Drip data for analysis.
%sql
SELECT Id, Name FROM SAMPLE_VIEW WHERE Status = 'scheduled'
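The same query can also be issued from Python via spark.sql, which returns a DataFrame you can keep processing (a sketch against the SAMPLE_VIEW registered above):
# Run the query from Python instead of a %sql cell.
scheduled = spark.sql("SELECT Id, Name FROM SAMPLE_VIEW WHERE Status = 'scheduled'")
display(scheduled)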
The data from Drip is only available within the current notebook session. To share it with other users and notebooks, save it as a table.
remote_table.write.format("parquet").saveAsTable("SAMPLE_TABLE")
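Any notebook attached to the same workspace metastore can then read the saved table back (a minimal sketch):
# Load the persisted table as a DataFrame from another notebook.
saved = spark.table("SAMPLE_TABLE")
display(saved)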
Download a free, 30-day trial of the CData API Driver for JDBC and start working with your live Drip data in Azure Databricks. Reach out to our Support Team if you have any questions.