Ready to get started?

Learn more about the CData ODBC Driver for Apache Spark or download a free trial:

Download Now

Use Spark as an External Data Source with PolyBase

Use the CData ODBC Driver for Spark and SQL Server 2019 PolyBase to access live Spark data as an external data source.

PolyBase in SQL Server lets you query external data using the same Transact-SQL syntax you use to query database tables. Combined with the CData ODBC Driver for Spark, it gives you access to Spark data just as if it were SQL Server data. This article walks through configuring Spark data as a PolyBase external data source and then accessing that data with T-SQL queries.

The CData ODBC drivers offer unmatched performance for interacting with live Spark data using PolyBase due to optimized data processing built into the driver. When you issue complex SQL queries from SQL Server to Spark, the driver pushes down supported SQL operations, like filters and aggregations, directly to Spark and utilizes the embedded SQL engine to process unsupported operations (often SQL functions and JOIN operations) client-side. And with PolyBase, you can also join SQL Server data with Spark data, using a single query to pull data from distributed sources.

Connect to Spark

If you have not already, first specify connection properties in an ODBC DSN (data source name). This is the last step of the driver installation. You can use the Microsoft ODBC Data Source Administrator to create and configure ODBC DSNs. To create an external data source in SQL Server using PolyBase, configure a System DSN (CData Spark Sys is created automatically).

Set the Server, Database, User, and Password connection properties to connect to SparkSQL.

Click "Test Connection" to ensure that the DSN is connected to Spark properly. Navigate to the Tables tab to review the table definitions for Spark.
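As a reference, the resulting DSN configuration corresponds to connection properties like the following (the server address, database name, and credentials below are placeholders; substitute the values for your own Spark instance):

```
Server=your-spark-server;
Database=default;
User=your-username;
Password=your-password;
```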

Create an External Data Source for Spark Data

After configuring the connection, you need to create a master encryption key and a credential database for the external data source.

Creating a Master Encryption Key

Execute the following SQL command to create a new master key to encrypt the credentials for the external data source.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password';

Creating a Credential Database

Execute the following SQL command to create credentials for the external data source connected to Spark data.

NOTE: If your Spark instance does not require a User or Password to authenticate, you may use whatever values you wish for IDENTITY and SECRET.

CREATE DATABASE SCOPED CREDENTIAL sparksql_creds
WITH IDENTITY = 'username', SECRET = 'password';

Create an External Data Source for Spark

Execute the following SQL command to create an external data source for Spark with PolyBase, using the DSN and credentials configured earlier.

PUSHDOWN is set to ON by default, meaning the ODBC Driver can leverage server-side processing for complex queries.

CREATE EXTERNAL DATA SOURCE cdata_sparksql_source
WITH ( 
  LOCATION = 'odbc://SERVERNAME[:PORT]',
  CONNECTION_OPTIONS = 'DSN=CData Spark Sys',
  -- PUSHDOWN = ON | OFF,
  CREDENTIAL = sparksql_creds
);
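To confirm that the external data source was registered, you can query the SQL Server catalog view for external data sources (this is an optional sanity check, not part of the CData documentation):

```sql
-- List registered external data sources and their connection options
SELECT name, location, connection_options
FROM sys.external_data_sources;
```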

Create External Tables for Spark

After creating the external data source, use CREATE EXTERNAL TABLE statements to link to Spark data from your SQL Server instance. The table column definitions must match those exposed by the CData ODBC Driver for Spark. You can refer to the Tables tab of the DSN Configuration Wizard to see the table definition.

Sample CREATE TABLE Statement

The statement to create an external table based on a Spark Customers table would look similar to the following:

CREATE EXTERNAL TABLE Customers(
  City [nvarchar](255) NULL,
  Balance [nvarchar](255) NULL,
  ...
) WITH ( 
  LOCATION='Customers',
  DATA_SOURCE=cdata_sparksql_source
);
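Once the external table exists, a single T-SQL query can combine local SQL Server data with the Spark-backed table. The sketch below assumes a hypothetical local dbo.Orders table with a CustomerCity column; substitute your own local table and join key:

```sql
-- Join a local SQL Server table (dbo.Orders, hypothetical) with the
-- external Customers table backed by live Spark data.
SELECT o.OrderId, c.City, c.Balance
FROM dbo.Orders o
JOIN Customers c
  ON o.CustomerCity = c.City;
```

With PUSHDOWN enabled on the data source, filters and aggregations on the Customers side of such a query can be processed by Spark rather than locally.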

Having created external tables for Spark in your SQL Server instance, you are now able to query local and remote data simultaneously. Thanks to built-in query processing in the CData ODBC Driver, you know that as much query processing as possible is being pushed to Spark, freeing up local resources and computing power. Download a free, 30-day trial of the ODBC Driver for Spark and start working with live Spark data alongside your SQL Server data today.

 
 