Connect to Spark Data from RapidMiner

Use the standard components and the data source configuration wizard in RapidMiner Studio to work with Spark data.

This article shows how to integrate the CData JDBC Driver for Spark into a RapidMiner process. With the driver configured as a JDBC data source, you can transfer Spark data into your RapidMiner processes.

Connect to Spark in RapidMiner as a JDBC Data Source

You can follow the procedure below to establish a JDBC connection to Spark:

  1. Add a new database driver for Spark: click Connections -> Manage Database Drivers.
  2. In the resulting wizard, click the Add button and enter a name for the connection.
  3. Enter the prefix for the JDBC URL: jdbc:sparksql:
  4. Enter the path to the cdata.jdbc.sparksql.jar file, located in the lib subfolder of the installation directory.
  5. Enter the driver class: cdata.jdbc.sparksql.SparkSQLDriver
  6. Create a new Spark connection: click Connections -> Manage Database Connections.
  7. Enter a name for your connection.
  8. For Database System, select the Spark driver you configured previously.
  9. Enter your connection string in the Host box.

    Set the Server, Database, User, and Password connection properties to connect to SparkSQL.

    Built-in Connection String Designer

    For assistance in constructing the JDBC URL, use the connection string designer built into the Spark JDBC Driver. Either double-click the JAR file or execute it from the command line.

    java -jar cdata.jdbc.sparksql.jar

    Fill in the connection properties and copy the connection string to the clipboard.

    A typical connection string is below; a complete example URL is shown after this list:

    Server=127.0.0.1;
  10. Enter your username and password if necessary.
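
For reference, a complete JDBC URL combines the jdbc:sparksql: prefix from step 3 with the connection properties described in step 9. The values below are illustrative placeholders only; substitute your own server address, database, and credentials:

    jdbc:sparksql:Server=127.0.0.1;Database=default;User=admin;Password=admin;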

You can now use your Spark connection with the various RapidMiner operators in your process. To retrieve Spark data, drag the Retrieve operator from the Operators view. With the Retrieve operator selected, you can define which table to retrieve in the Parameters view by clicking the folder icon next to the "repository entry" parameter. In the resulting Repository Browser, you can expand your connection node to select the desired example set.

Finally, wire the output of the Retrieve operator to a result port, and run the process to see the Spark data.
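
If you want to sanity-check the driver and the connection string outside of RapidMiner, a small standalone JDBC program is enough. The sketch below is only illustrative: it assumes the placeholder connection values shown above and a hypothetical table named Customers, so adjust both to match your Spark instance.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SparkSqlSmokeTest {
        public static void main(String[] args) throws Exception {
            // Register the driver class named in step 5 (optional with JDBC 4+ drivers).
            Class.forName("cdata.jdbc.sparksql.SparkSQLDriver");

            // JDBC URL using the jdbc:sparksql: prefix and placeholder property values.
            String url = "jdbc:sparksql:Server=127.0.0.1;Database=default;User=admin;Password=admin;";

            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 // "Customers" is a hypothetical table name; replace it with a table in your Spark instance.
                 ResultSet rs = stmt.executeQuery("SELECT * FROM Customers")) {
                while (rs.next()) {
                    // Print the first column of each row as a quick check that data comes back.
                    System.out.println(rs.getString(1));
                }
            }
        }
    }

Compile the class and run it with cdata.jdbc.sparksql.jar on the classpath; if it prints a few rows, the same URL will work in your RapidMiner connection.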

 
 