How to import Databricks Data into Apache Solr
Use the CData JDBC Driver for Databricks in the Apache Solr Data Import Handler to create an automated import of Databricks data into the Apache Solr enterprise search platform.
The Apache Solr platform is a popular, blazing-fast, open source enterprise search solution built on Apache Lucene.
Apache Solr is equipped with the Data Import Handler (DIH), which can import data from databases as well as XML, CSV, and JSON files. When paired with the CData JDBC Driver for Databricks, you can easily import Databricks data into Apache Solr. In this article, we show, step by step, how to use the CData JDBC Driver in the Apache Solr Data Import Handler and import Databricks data for use in enterprise search.
About Databricks Data Integration
Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:
- Access all versions of Databricks, from Runtime Versions 9.1 - 13.X to both the Pro and Classic Databricks SQL versions.
- Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
- Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
- Upload data to Databricks using the Databricks File System, Azure Blob Storage, and AWS S3 storage.
While many customers use CData's solutions to migrate data from different systems into their Databricks data lakehouse, others use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers use SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMSs.
Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.
Getting Started
Create an Apache Solr Core and a Schema for Importing Databricks
- Run Apache Solr and create a Core.
> solr create -c CDataCore
For this article, Solr is running as a standalone instance in the local environment, and you can access the core at this URL: http://localhost:8983/solr/#/CDataCore/core-overview
- Create a schema consisting of "field" objects to represent the columns of the Databricks data to be imported, along with a unique key for the entity (see the sample schema sketch after this list). LastModifiedDate, if it exists in Databricks, is used for incremental updates; if it does not exist, you cannot use the deltaQuery described in a later section. Save the schema in the managed-schema file created by Apache Solr.
- Install the CData Databricks JDBC Driver. Copy the JAR and license file (cdata.jdbc.databricks.jar and cdata.jdbc.databricks.lic) to the Solr library directory.
- CData JDBC JAR file: C:\Program Files\CData\CData JDBC Driver for Databricks ####\lib
- Apache Solr: solr-8.5.2\server\lib
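For reference, here is a minimal sketch of what the managed-schema field definitions might look like. The field names mirror the generic columns used in the data-config later in this article, and the types (string, pdate) are assumptions; match them to your actual Databricks column types:
<field name="Id" type="string" indexed="true" stored="true" required="true" />
<field name="DatabricksColumn1" type="string" indexed="true" stored="true" />
<field name="DatabricksColumn2" type="string" indexed="true" stored="true" />
<!-- ...and likewise for DatabricksColumn3 through DatabricksColumn7... -->
<field name="LastModifiedDate" type="pdate" indexed="true" stored="true" />
<uniqueKey>Id</uniqueKey>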
Now we are ready to use Databricks data in Solr.
Define an Import of Databricks to Apache Solr
In this section, we walk through configuring the Data Import Handler.
- Modify the config file (solrconfig.xml) of the created Core. Add the DIH JAR file reference and the DIH RequestHandler definition.
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">solr-data-config.xml</str>
  </lst>
</requestHandler>
- Next, create a solr-data-config.xml file in the same conf directory. In this article, we retrieve a table from Databricks, but you can also use a custom SQL query to request data (see the sketch following these steps). The driver class and a sample JDBC connection string are in the sample code below.
<dataConfig>
  <dataSource driver="cdata.jdbc.databricks.DatabricksDriver"
              url="jdbc:databricks:Server=127.0.0.1;Port=443;TransportMode=HTTP;HTTPPath=MyHTTPPath;UseSSL=True;User=MyUser;Password=MyPassword;" />
  <document>
    <entity name="Customers"
            query="SELECT Id,DatabricksColumn1,DatabricksColumn2,DatabricksColumn3,DatabricksColumn4,DatabricksColumn5,DatabricksColumn6,DatabricksColumn7,LastModifiedDate FROM Customers"
            deltaQuery="SELECT Id FROM Customers WHERE LastModifiedDate >= '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT Id,DatabricksColumn1,DatabricksColumn2,DatabricksColumn3,DatabricksColumn4,DatabricksColumn5,DatabricksColumn6,DatabricksColumn7,LastModifiedDate FROM Customers WHERE Id=${dataimporter.delta.Id}">
      <field column="Id" name="Id" />
      <field column="DatabricksColumn1" name="DatabricksColumn1" />
      <field column="DatabricksColumn2" name="DatabricksColumn2" />
      <field column="DatabricksColumn3" name="DatabricksColumn3" />
      <field column="DatabricksColumn4" name="DatabricksColumn4" />
      <field column="DatabricksColumn5" name="DatabricksColumn5" />
      <field column="DatabricksColumn6" name="DatabricksColumn6" />
      <field column="DatabricksColumn7" name="DatabricksColumn7" />
      <field column="LastModifiedDate" name="LastModifiedDate" />
    </entity>
  </document>
</dataConfig>
- In the query attribute, set the SQL query that selects the data from Databricks. deltaQuery and deltaImportQuery define the key and the conditions used for incremental updates on subsequent imports of the same entity.
- After all settings are done, restart Solr.
> solr stop -all
> solr start
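As noted earlier, the entity is not limited to a straight table read; the query attribute accepts any SELECT statement the driver can execute. A minimal sketch, where the WHERE clause and the Country column are hypothetical:
<entity name="USCustomers"
        query="SELECT Id, DatabricksColumn1, LastModifiedDate FROM Customers WHERE Country = 'US'">
  <field column="Id" name="Id" />
  <field column="DatabricksColumn1" name="DatabricksColumn1" />
  <field column="LastModifiedDate" name="LastModifiedDate" />
</entity>
Fields returned by the query must still map onto fields defined in the managed-schema.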
Run a DataImport of Databricks Data
- Execute DataImport from the URL below:
http://localhost:8983/solr/#/CDataCore/dataimport//dataimport
- Select the "full-import" Command, choose the table from Entity, and click "Execute." (These commands can also be issued over HTTP; see the curl sketch after this list.)
- Check the result of the import with a query from the Query screen.
- Try an incremental update using deltaQuery: modify some data in the original Databricks data set, select the "delta-import" command this time in the DataImport window, and click "Execute."
- Check the result of the incremental update.
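The Data Import Handler can also be driven over HTTP, which is useful for scheduling imports outside the admin UI. A minimal sketch with curl, assuming the core and handler names used in this article:
> curl "http://localhost:8983/solr/CDataCore/dataimport?command=full-import"
> curl "http://localhost:8983/solr/CDataCore/dataimport?command=delta-import"
> curl "http://localhost:8983/solr/CDataCore/dataimport?command=status"
> curl "http://localhost:8983/solr/CDataCore/select?q=*:*&rows=10"
The status command reports import progress, and the select query confirms that documents landed in the index.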
Using the CData JDBC Driver for Databricks, you can create an automated import of Databricks data into Apache Solr. Download a free, 30-day trial of any of the 200+ CData JDBC Drivers and get started today.