
Connect to SQL Analysis Services Data from AWS Glue Jobs via JDBC

Connect to SQL Analysis Services from an AWS Glue job using the CData JDBC driver hosted in Amazon S3.

AWS Glue is Amazon's ETL service, which makes it easy to prepare data and load it for storage and analytics. Using the PySpark module together with AWS Glue, you can create jobs that process data over a JDBC connection and load it directly into AWS data stores. This article describes how to upload the CData JDBC Driver for SQL Analysis Services to an Amazon S3 bucket, then create and run an AWS Glue job that extracts SQL Analysis Services data and saves it to S3 as a CSV file.

Upload the CData JDBC Driver for SQL Analysis Services to an Amazon S3 Bucket

In order to work with the CData JDBC Driver for SQL Analysis Services in AWS Glue, you will need to store it (and any relevant license files) in a bucket in Amazon S3.

  1. Open the Amazon S3 Console.
  2. Select an existing bucket (or create a new one).
  3. Click Upload.
  4. Select the JAR file (cdata.jdbc.ssas.jar) found in the lib directory in the installation location for the driver. (If you prefer to script the upload, see the boto3 sketch after these steps.)
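
A minimal upload sketch using boto3, the AWS SDK for Python, is shown below. The local driver path and bucket name are placeholders, and the bucket is assumed to already exist.

import boto3

# Upload the CData JDBC driver JAR (and any license files) to an S3 bucket.
# The local path and bucket name are placeholders -- adjust them to your environment.
s3 = boto3.client("s3")
s3.upload_file(
    "/path/to/driver/installdir/lib/cdata.jdbc.ssas.jar",  # JAR from the driver's lib directory
    "mybucket",                                            # target S3 bucket
    "cdata.jdbc.ssas.jar"                                  # object key referenced later by the Glue job
)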

Configure the Amazon Glue Job

  1. Navigate to ETL -> Jobs from the AWS Glue Console.
  2. Click Add Job to create a new Glue job.
  3. Fill in the Job properties:
    • Name: Fill in a name for the job, for example: SSASGlueJob.
    • IAM Role: Select (or create) an IAM role that has the AWSGlueServiceRole and AmazonS3FullAccess permissions policies (the latter because the JDBC driver and the destination are in an Amazon S3 bucket).
    • Type: Select "Spark."
    • This job runs: Select "A new script to be authored by you".
      Populate the script properties:
      • Script file name: A name for the script file, for example: GlueSSASJDBC
      • S3 path where the script is stored: Fill in or browse to an S3 bucket.
      • Temporary directory: Fill in or browse to an S3 bucket.
    • ETL language: Select "Python."
    • Expand Security configuration, script libraries and job parameters (optional). For Dependent jars path, fill in or browse to the S3 bucket where you uploaded the JAR file. Be sure to include the name of the JAR file itself in the path, i.e.: s3://mybucket/cdata.jdbc.ssas.jar
  4. Click Next. Here you will have the option to add connections to other AWS endpoints, so if your destination is Redshift, MySQL, etc., you can create and use connections to those data sources.
  5. Click "Save job and edit script" to create the job. (Alternatively, the job can be created programmatically; see the boto3 sketch after these steps.)
  6. In the editor that opens, write a Python script for the job. You can use the sample script (see below) as an example.
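
For reference, the same job definition can be sketched with boto3. The role, bucket, script path, and Glue version below are placeholders and should match whatever you configured above; the Dependent jars path corresponds to the --extra-jars default argument.

import boto3

glue = boto3.client("glue")

# Create a Spark (Python) Glue job that references the uploaded CData JDBC driver JAR.
glue.create_job(
    Name="SSASGlueJob",                                   # job name from the steps above
    Role="MyGlueServiceRole",                             # placeholder IAM role with Glue and S3 access
    Command={
        "Name": "glueetl",                                # Spark ETL job
        "ScriptLocation": "s3://mybucket/scripts/GlueSSASJDBC.py",  # placeholder script location
        "PythonVersion": "3"
    },
    DefaultArguments={
        "--extra-jars": "s3://mybucket/cdata.jdbc.ssas.jar",  # Dependent jars path
        "--TempDir": "s3://mybucket/temp/"                    # Temporary directory
    },
    GlueVersion="3.0"                                     # placeholder Glue version
)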

Sample Glue Script

To connect to SQL Analysis Services using the CData JDBC driver, you will need to create a JDBC URL, populating the necessary connection properties. Additionally (unless you are using a Beta driver), you will need to set the RTK property in the JDBC URL. You can view the licensing file included in the installation for information on how to set this property.

To connect, provide authentication and set the Url property to a valid SQL Server Analysis Services endpoint. You can connect to SQL Server Analysis Services instances hosted over HTTP with XMLA access. See the Microsoft documentation to configure HTTP access to SQL Server Analysis Services.

To secure connections and authenticate, set the corresponding connection properties, below. The data provider supports the major authentication schemes, including HTTP and Windows, as well as SSL/TLS.

  • HTTP Authentication

    Set AuthScheme to "Basic" or "Digest" and set User and Password. Specify other authentication values in CustomHeaders.

  • Windows (NTLM)

    Set the Windows User and Password and set AuthScheme to "NTLM".

  • Kerberos and Kerberos Delegation

    To authenticate with Kerberos, set AuthScheme to NEGOTIATE. To use Kerberos delegation, set AuthScheme to KERBEROSDELEGATION. If needed, provide the User, Password, and KerberosSPN. By default, the data provider attempts to communicate with the SPN at the specified Url.

  • SSL/TLS

    By default, the data provider attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store. To specify another certificate, see the SSLServerCert property for the available formats.

You can then access any cube as a relational table: when you connect, the data provider retrieves SSAS metadata and dynamically updates the table schemas. Instead of retrieving metadata with every connection, you can set the CacheLocation property to automatically cache the metadata to a simple file-based store.
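
For illustration, a complete JDBC URL using Windows (NTLM) authentication might look like the following (a sketch only; the RTK, credentials, XMLA endpoint, and cache path are placeholder values):

jdbc:ssas:RTK=5246...;AuthScheme=NTLM;User=myuseraccount;Password=mypassword;URL=http://localhost/OLAP/msmdpump.dll;CacheLocation=/tmp/ssascache;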

See the Getting Started section of the CData documentation, under Retrieving Analysis Services Data, to execute SQL-92 queries to the cubes.

Built-in Connection String Designer

For assistance in constructing the JDBC URL, use the connection string designer built into the SQL Analysis Services JDBC Driver. Either double-click the JAR file or execute it from the command line:

java -jar cdata.jdbc.ssas.jar

Fill in the connection properties and copy the connection string to the clipboard.

To host the JDBC driver in Amazon S3, you will need a license (full or trial) and a Runtime Key (RTK). For more information on obtaining this license (or a trial), contact our sales team.

Below is a sample script that uses the CData JDBC driver with the PySpark and AWSGlue modules to extract SQL Analysis Services data and write it to an S3 bucket in CSV format. Adjust the script as needed for your environment and save the job.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sparkContext = SparkContext()
glueContext = GlueContext(sparkContext)
sparkSession = glueContext.spark_session

## Use the CData JDBC driver to read SQL Analysis Services data from the Adventure_Works table into a DataFrame
## Note the populated JDBC URL and driver class name
source_df = sparkSession.read.format("jdbc") \
    .option("url", "jdbc:ssas:RTK=5246...;User=myuseraccount;Password=mypassword;URL=http://localhost/OLAP/msmdpump.dll;") \
    .option("dbtable", "Adventure_Works") \
    .option("driver", "cdata.jdbc.ssas.SSASDriver") \
    .load()

glueJob = Job(glueContext)
glueJob.init(args['JOB_NAME'], args)

## Convert DataFrames to AWS Glue's DynamicFrames Object
dynamic_dframe = DynamicFrame.fromDF(source_df, glueContext, "dynamic_df")

## Write the DynamicFrame as a file in CSV format to a folder in an S3 bucket.
## It is possible to write to any Amazon data store (SQL Server, Redshift, etc.) by using any previously defined connections.
retDatasink4 = glueContext.write_dynamic_frame.from_options(
    frame = dynamic_dframe,
    connection_type = "s3",
    connection_options = {"path": "s3://mybucket/outfiles"},
    format = "csv",
    transformation_ctx = "datasink4")

glueJob.commit()

Run the Glue Job

With the script written, we are ready to run the Glue job. Click Run Job and wait for the extract/load to complete. You can view the status of the job from the Jobs page in the AWS Glue Console. Once the job has succeeded, you will have a CSV file in your S3 bucket with data from the SQL Analysis Services Adventure_Works table.
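
You can also start and monitor the job outside the console. A minimal boto3 sketch, assuming the SSASGlueJob name used above:

import boto3

glue = boto3.client("glue")

# Start a run of the Glue job created above and check its status.
run = glue.start_job_run(JobName="SSASGlueJob")
status = glue.get_job_run(JobName="SSASGlueJob", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])   # e.g. RUNNING, SUCCEEDED, FAILED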

Using the CData JDBC Driver for SQL Analysis Services in AWS Glue, you can easily create ETL jobs for SQL Analysis Services data, writing the data to an S3 bucket or loading it into any other AWS data store.

 
 