
Work with REST Data in Apache Spark Using SQL

Connect to and work with live REST data in Apache Spark using the CData JDBC Driver.

Apache Spark is a fast, general-purpose engine for large-scale data processing. Combined with the CData JDBC Driver for REST, Spark can work with live REST data. This article describes how to connect to and query REST data from the Spark shell.

Because optimized data processing is built into the driver, the CData JDBC Driver offers high performance for interacting with live REST data. When you issue complex SQL queries to REST, the driver pushes supported SQL operations, such as filters and aggregations, directly to REST and processes unsupported operations (such as SQL functions and JOIN operations) client-side with its embedded SQL engine. With built-in dynamic metadata querying, you can work with and analyze REST data using native data types.
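
As a minimal sketch of that SQL engine in action (not part of the Spark walkthrough below), the driver can also be queried directly over plain JDBC; the connection string and the people table reuse the sample data introduced later in this article:

    import java.sql.DriverManager

    // Connect using the same connection properties shown in the Spark example below.
    val conn = DriverManager.getConnection("jdbc:rest:DataModel=Relational;URI=C:\\people.xml;Format=XML;")
    val stmt = conn.createStatement()
    // The filter is pushed down to the data source where supported; otherwise the
    // embedded SQL engine evaluates it client-side.
    val rs = stmt.executeQuery("SELECT [personal.name.first], [personal.name.last] FROM people WHERE [personal.name.last] = 'Roberts'")
    while (rs.next()) println(rs.getString(1) + " " + rs.getString(2))
    conn.close()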

Install the CData JDBC Driver for REST

Download the CData JDBC Driver for REST installer, unzip the package, and run the JAR file to install the driver.

Start the Spark Shell and Connect to REST Data

  1. Open a terminal and start the Spark shell with the CData JDBC Driver for REST JAR file as the jars parameter:

    $ spark-shell --jars "/CData/CData JDBC Driver for REST/lib/cdata.jdbc.rest.jar"
  2. With the shell running, you can connect to REST with a JDBC URL and use the SQL Context load() function to read a table.

    The data provider models REST APIs as bidirectional database tables and XML/JSON files as read-only views (local files, files stored on popular cloud services, and FTP servers). The major authentication schemes are supported, including HTTP Basic, Digest, NTLM, OAuth, and FTP. See the Getting Started chapter in the data provider documentation for guides on authenticating to your data source.

    After setting the URI and providing any authentication values, set Format to "XML" or "JSON" and set DataModel to more closely match the data representation to the structure of your data.

    The DataModel property controls how your data is represented as tables and toggles the following basic configurations.

    • Document (default): Model a top-level, document view of your REST data. The data provider returns nested elements as aggregates of data.
    • FlattenedDocuments: Implicitly join nested documents and their parents into a single table.
    • Relational: Return individual, related tables from hierarchical data. The tables contain a primary key and a foreign key that links to the parent document.

    See the Modeling REST Data chapter for more information on configuring the relational representation. You will also find the sample data used in the following examples. The data includes entries for people, the cars they own, and various maintenance services performed on those cars.
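
    As a quick sketch (reusing the C:\people.xml sample file from the example further below), switching between these representations is simply a matter of changing DataModel in the JDBC URL:

    scala> val documentUrl   = "jdbc:rest:URI=C:\\people.xml;Format=XML;DataModel=Document;"
    scala> val flattenedUrl  = "jdbc:rest:URI=C:\\people.xml;Format=XML;DataModel=FlattenedDocuments;"
    scala> val relationalUrl = "jdbc:rest:URI=C:\\people.xml;Format=XML;DataModel=Relational;"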

    Built-in Connection String Designer

    For assistance in constructing the JDBC URL, use the connection string designer built into the REST JDBC Driver. Either double-click the JAR file or execute it from the command line:

    java -jar cdata.jdbc.rest.jar

    Fill in the connection properties and copy the connection string to the clipboard.

    scala> val rest_df = spark.sqlContext.read.format("jdbc").option("url", "jdbc:rest:DataModel=Relational;URI=C:\\people.xml;Format=XML;").option("dbtable","people").option("driver","cdata.jdbc.rest.RESTDriver").load()
  3. Once you connect and the data is loaded, you will see the table schema displayed.
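
    You can also print the inferred schema explicitly using the standard Spark DataFrame API:

    scala> rest_df.printSchema()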
  4. Register the REST data as a temporary table:

    scala> rest_df.registerTempTable("people")
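
    Note that registerTempTable is deprecated as of Spark 2.0; on newer versions the equivalent call is:

    scala> rest_df.createOrReplaceTempView("people")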
  5. Perform custom SQL queries against the REST data using commands like the one below:

    scala> rest_df.sqlContext.sql("SELECT `personal.name.first`, `personal.name.last` FROM people WHERE `personal.name.last` = 'Roberts'").collect.foreach(println)

    You will see the results displayed in the console.
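
    Aggregations work the same way against the registered table; as a further sketch, counting entries by last name:

    scala> rest_df.sqlContext.sql("SELECT `personal.name.last`, COUNT(*) FROM people GROUP BY `personal.name.last`").collect.foreach(println)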

Using the CData JDBC Driver for REST in Apache Spark, you are able to perform fast and complex analytics on REST data, combining the power and utility of Spark with your data. Download a free, 30-day trial of any of the 200+ CData JDBC Drivers and get started today.

 
 