Process & Analyze Elasticsearch Data in Databricks (AWS)
Use CData, AWS, and Databricks to perform data engineering and data science on live Elasticsearch data.
Databricks is a cloud-based service that provides data processing capabilities through Apache Spark. When paired with the CData JDBC Driver, customers can use Databricks to perform data engineering and data science on live Elasticsearch data. This article walks through hosting the CData JDBC Driver in AWS, as well as connecting to and processing live Elasticsearch data in Databricks.
With built-in optimized data processing, the CData JDBC Driver offers unmatched performance for interacting with live Elasticsearch data. When you issue complex SQL queries to Elasticsearch, the driver pushes supported SQL operations, like filters and aggregations, directly to Elasticsearch and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations). Its built-in dynamic metadata querying allows you to work with and analyze Elasticsearch data using native data types.
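As a minimal illustration of that split (using the Orders data queried later in this article, plus a hypothetical Customers index and its columns), the first query below can be pushed entirely to Elasticsearch, while the JOIN in the second is evaluated client-side by the embedded SQL engine:
# Filter and aggregation: pushed down to Elasticsearch by the driver
pushed_down_sql = "SELECT OrderName, SUM(Freight) FROM Orders WHERE Freight > 100 GROUP BY OrderName"
# JOIN across indexes: processed client-side by the driver's embedded SQL engine
client_side_sql = "SELECT o.OrderName, c.CompanyName FROM Orders o JOIN Customers c ON o.CustomerId = c.Id"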
Install the CData JDBC Driver in Databricks
To work with live Elasticsearch data in Databricks, install the driver on your Databricks cluster.
- Navigate to your Databricks administration screen and select the target cluster.
- On the Libraries tab, click "Install New."
- Select "Upload" as the Library Source and "Jar" as the Library Type.
- Upload the JDBC JAR file (cdata.jdbc.elasticsearch.jar) from the installation location (typically C:\Program Files\CData\[product_name]\lib).
Access Elasticsearch Data in your Notebook: Python
With the JAR file installed, we are ready to work with live Elasticsearch data in Databricks. Start by creating a new notebook in your workspace. Name the notebook, select Python as the language (though Scala is available as well), and choose the cluster where you installed the JDBC driver. When the notebook launches, we can configure the connection, query Elasticsearch, and create a basic report.
Configure the Connection to Elasticsearch
Connect to Elasticsearch by referencing the JDBC Driver class and constructing a connection string to use in the JDBC URL. Additionally, you will need to set the RTK property in the JDBC URL (unless you are using a Beta driver). You can view the licensing file included in the installation for information on how to set this property.
Step 1: Connection Information
driver = "cdata.jdbc.elasticsearch.ElasticsearchDriver" url = "jdbc:elasticsearch:RTK=5246...;Server=127.0.0.1;Port=9200;User=admin;Password=123456;"
Built-in Connection String Designer
For assistance in constructing the JDBC URL, use the connection string designer built into the Elasticsearch JDBC Driver. Either double-click the JAR file or execute it from the command line:
java -jar cdata.jdbc.elasticsearch.jar
Fill in the connection properties and copy the connection string to the clipboard.
Set the Server and Port connection properties to connect. To authenticate, set the User and Password properties, PKI (public key infrastructure) properties, or both. To use PKI, set the SSLClientCert, SSLClientCertType, SSLClientCertSubject, and SSLClientCertPassword properties.
The data provider uses X-Pack Security for TLS/SSL and authentication. To connect over TLS/SSL, prefix the Server value with 'https://'. Note: TLS/SSL and client authentication must be enabled on X-Pack to use PKI.
Once the data provider is connected, X-Pack performs user authentication and grants role permissions based on the realms you have configured.
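For example, here is a sketch of a JDBC URL for a TLS-enabled cluster using PKI authentication; the certificate path, password, and SSLClientCertType value are placeholder assumptions, so consult the driver's help documentation for the values that match your certificate format:
# Placeholder PKI connection string (assumes a PFX client certificate file)
url = ("jdbc:elasticsearch:RTK=5246...;Server=https://127.0.0.1;Port=9200;"
       "SSLClientCert=/path/to/client-cert.pfx;SSLClientCertType=PFXFILE;"
       "SSLClientCertPassword=cert_password;")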
Load Elasticsearch Data
Once you configure the connection, you can load Elasticsearch data as a dataframe using the CData JDBC Driver and the connection information.
Step 2: Reading the data
remote_table = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", "Orders") \
    .load()
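If you only need a subset of the data, you can pass a query instead of a table name using Spark's standard JDBC "query" option (available in Spark 2.4 and later); the filter below is just an illustrative condition on the same Orders data:
# Push a specific query (with a sample filter) instead of loading the whole table
remote_orders = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("query", "SELECT OrderName, Freight FROM Orders WHERE Freight > 100") \
    .load()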
Display Elasticsearch Data
Check the loaded Elasticsearch data by calling the display function.
Step 3: Checking the result
display(remote_table.select("OrderName"))
Analyze Elasticsearch Data in Databricks
If you want to process data with Databricks SparkSQL, register the loaded data as a Temp View.
Step 4: Create a view or table
remote_table.createOrReplaceTempView("SAMPLE_VIEW")
With the Temp View created, you can use SparkSQL to retrieve the Elasticsearch data for reporting, visualization, and analysis.
%sql
SELECT OrderName, Freight FROM SAMPLE_VIEW ORDER BY Freight DESC LIMIT 5
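The same query can also be run from Python with spark.sql, which returns a DataFrame you can display or feed into other libraries:
top_freight = spark.sql("SELECT OrderName, Freight FROM SAMPLE_VIEW ORDER BY Freight DESC LIMIT 5")
display(top_freight)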
The data from Elasticsearch is only available in the target notebook. If you want to share it with other users, save it as a table.
remote_table.write.format("parquet").saveAsTable("SAMPLE_TABLE")
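Once saved, other notebooks in the same workspace can read the table back, for example:
# Read the saved table from another notebook
saved_table = spark.table("SAMPLE_TABLE")
display(saved_table.select("OrderName", "Freight"))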
Download a free, 30-day trial of the CData JDBC Driver for Elasticsearch and start working with your live Elasticsearch data in Databricks. Reach out to our Support Team if you have any questions.