How to work with Dynamics 365 Data in Apache Spark using SQL



Access and process Dynamics 365 Data in Apache Spark using the CData JDBC Driver.

Apache Spark is a fast and general engine for large-scale data processing. When paired with the CData JDBC Driver for Dynamics 365, Spark can work with live Dynamics 365 data. This article describes how to connect to and query Dynamics 365 data from a Spark shell.

The CData JDBC Driver offers unmatched performance for interacting with live Dynamics 365 data due to optimized data processing built into the driver. When you issue complex SQL queries to Dynamics 365, the driver pushes supported SQL operations, like filters and aggregations, directly to Dynamics 365 and utilizes the embedded SQL engine to process unsupported operations (often SQL functions and JOIN operations) client-side. With built-in dynamic metadata querying, you can work with and analyze Dynamics 365 data using native data types.

About Dynamics 365 Data Integration

CData simplifies access to and integration of live Microsoft Dynamics 365 data. Our customers leverage CData connectivity to:

  • Read and write data in the full Dynamics 365 ecosystem: Sales, Customer Service, Finance & Operations, Marketing, and more.
  • Extend the native features of Dynamics CRM with customizable caching and intelligent query aggregation and separation.
  • Authenticate securely with Dynamics 365 in a variety of ways, including Azure Active Directory, Azure Managed Service Identity credentials, and Azure Service Principal using either a client secret or a certificate.
  • Use SQL stored procedures to manage their Dynamics 365 entities - listing, creating, and removing associations between entities.

CData customers use our Dynamics 365 connectivity solutions for a variety of reasons, whether they're looking to replicate their data into a data warehouse (alongside other data sources) or analyze live Dynamics 365 data from their preferred data tools inside the Microsoft ecosystem (Power BI, Excel, etc.) or with external tools (Tableau, Looker, etc.).


Getting Started


Install the CData JDBC Driver for Dynamics 365

Download the CData JDBC Driver for Dynamics 365 installer, unzip the package, and run the JAR file to install the driver.

Start a Spark Shell and Connect to Dynamics 365 Data

  1. Open a terminal and start the Spark shell with the CData JDBC Driver for Dynamics 365 JAR file as the jars parameter (quote the path, since it contains spaces):

    $ spark-shell --jars "/CData/CData JDBC Driver for Dynamics 365/lib/cdata.jdbc.dynamics365.jar"
  2. With the shell running, you can connect to Dynamics 365 with a JDBC URL and use the SQLContext load() function to read a table.

    Edition and OrganizationUrl are required connection properties. The Dynamics 365 connector supports connecting to the following editions: CustomerService, FieldService, FinOpsOnline, FinOpsOnPremise, HumanResources, Marketing, ProjectOperations and Sales.
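Since an unsupported Edition value causes the connection to fail, it can be worth sanity-checking the value before building the URL. A minimal sketch (the set of edition names below is taken from the list in this article, not read from the driver itself):

```scala
// Editions accepted by the Dynamics 365 connector, per the list above.
val supportedEditions = Set(
  "CustomerService", "FieldService", "FinOpsOnline", "FinOpsOnPremise",
  "HumanResources", "Marketing", "ProjectOperations", "Sales"
)

// Returns true when the given Edition value is one the connector accepts.
def isSupportedEdition(edition: String): Boolean =
  supportedEditions.contains(edition)

println(isSupportedEdition("Sales"))           // true
println(isSupportedEdition("BusinessCentral")) // false: Business Central uses a separate driver
```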

    For Dynamics 365 Business Central, use the separate Dynamics 365 Business Central driver.

    OrganizationUrl is the URL to your Dynamics 365 organization. For instance, https://orgcb42e1d0.crm.dynamics.com
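The JDBC URL itself is just the jdbc:dynamics365: prefix followed by semicolon-separated Property=Value pairs, so it can be assembled from the required properties. A small sketch, assuming only the two required properties (the organization URL below is the placeholder from this article):

```scala
// Build a Dynamics 365 JDBC URL from ordered (property, value) pairs.
// Format: "jdbc:dynamics365:" + semicolon-separated Property=Value pairs.
def buildJdbcUrl(props: Seq[(String, String)]): String =
  "jdbc:dynamics365:" + props.map { case (k, v) => s"$k=$v" }.mkString(";") + ";"

val url = buildJdbcUrl(Seq(
  "OrganizationUrl" -> "https://orgcb42e1d0.crm.dynamics.com",
  "Edition"         -> "Sales"
))

println(url)
// jdbc:dynamics365:OrganizationUrl=https://orgcb42e1d0.crm.dynamics.com;Edition=Sales;
```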

    Built-in Connection String Designer

    For assistance in constructing the JDBC URL, use the connection string designer built into the Dynamics 365 JDBC Driver. Either double-click the JAR file or execute it from the command line:

    java -jar cdata.jdbc.dynamics365.jar

    Fill in the connection properties and copy the connection string to the clipboard.

    Configure the connection to Dynamics 365 using the connection string generated above.

    scala> val dynamics365_df = spark.sqlContext.read.format("jdbc").option("url", "jdbc:dynamics365:OrganizationUrl=https://myaccount.operations.dynamics.com/;Edition=Sales;").option("dbtable","GoalHeadings").option("driver","cdata.jdbc.dynamics365.Dynamics365Driver").load()
  3. Once you connect and the data is loaded, the table schema is displayed.
  4. Register the Dynamics 365 data as a temporary view:

    scala> dynamics365_df.createOrReplaceTempView("goalheadings")
  5. Perform custom SQL queries against the data using commands like the one below:

    scala> dynamics365_df.sqlContext.sql("SELECT GoalHeadingId, Name FROM goalheadings WHERE Name = 'MyAccount'").collect.foreach(println)

    The results are displayed in the console.

Using the CData JDBC Driver for Dynamics 365 in Apache Spark, you can perform fast and complex analytics on Dynamics 365 data, combining the power and utility of Spark with your data. Download a free, 30-day trial of any of the 200+ CData JDBC Drivers and get started today.

Ready to get started?

Download a free trial of the Dynamics 365 Driver to get started:

 Download Now

Learn more:

Dynamics 365 JDBC Driver

Rapidly create and deploy powerful Java applications that integrate with Dynamics 365.