<p>
Apache Spark is a fast engine for large-scale data processing. Paired with the CData JDBC Driver for Adobe Analytics, Spark can work with live Adobe Analytics data. This article walks through connecting to Adobe Analytics from the Spark shell and querying the data with SQL.
</p>
<p>
With optimized data processing built into the driver, the CData JDBC Driver offers high performance for interacting with live Adobe Analytics data. When you issue complex SQL queries to Adobe Analytics, the driver pushes supported SQL operations directly to Adobe Analytics and uses its embedded SQL engine to process unsupported operations (often SQL functions and JOINs) client-side. Built-in dynamic metadata querying lets you work with and analyze Adobe Analytics data using native data types.
</p>
<h2>Install the CData JDBC Driver for Adobe Analytics</h2>
<p>
Download the CData JDBC Driver for Adobe Analytics installer, unzip the package, and run the JAR file to install the driver.
</p>
<h2>Access Adobe Analytics Data from the Spark Shell</h2>
<ol>
<li>Open a terminal and start the Spark shell with the CData JDBC Driver for Adobe Analytics JAR file as the <var>jars</var> parameter:
<code> $ spark-shell --jars /CData/CData JDBC Driver for Adobe Analytics/lib/cdata.jdbc.adobeanalytics.jar </code>
</li>
<li>With the shell running, you can connect to Adobe Analytics with a JDBC URL and use the SQL Context <var>load()</var> function to read a table.
<h4>Built-in Connection String Designer</h4>
<p>For assistance in constructing the JDBC URL, use the connection string designer built into the Adobe Analytics JDBC Driver. Either double-click the JAR file or execute it from the command line.</p>
<code> java -jar cdata.jdbc.adobeanalytics.jar </code>
<p> Fill in the connection properties and copy the connection string to the clipboard. </p>
<code> scala> val adobeanalytics_df = spark.sqlContext.read.format("jdbc").option("url", "jdbc:adobeanalytics:[extraconnectionprops]").option("dbtable","[extable]").option("driver","cdata.jdbc.adobeanalytics.AdobeAnalyticsDriver").load() </code></li>
<li>Once you connect and the data is loaded, the table schema is displayed.</li>
<li><p>Register the Adobe Analytics data as a temporary table:</p>
<code>scala> adobeanalytics_df.registerTempTable("[extable|tolower]")</code>
</li>
<li>
<p>Perform custom SQL queries against the data using commands like the one below:</p>
<code>scala> adobeanalytics_df.sqlContext.sql("SELECT [excol#1], [excol#2] FROM [extable] WHERE [exselectwherecol] = [exselectwherecolequals]").collect.foreach(println)</code>
<p>The results are displayed in the console, similar to the following:</p>
<img src="[x|UrlRootAbs]/kb/articles/jdbc-apache-spark-1.png" title="Adobe Analytics data in Apache Spark (Salesforce is shown)" />
</li>
</ol>
<p>
Using the CData JDBC Driver for Adobe Analytics in Apache Spark, you can perform fast and complex analytics on Adobe Analytics data, combining the power and utility of Spark with your data. Download a <a href="../../jdbc">free, 30-day trial of any of the CData JDBC Drivers</a> and get started today.
</p>
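<p>
For reference, below is a minimal, self-contained sketch that rolls the shell steps above into a standalone Scala application, assuming Spark 2.x or later (the SparkSession API). The <code>[extraconnectionprops]</code> placeholder, the <code>SampleTable</code> table, and the <code>Column1</code>/<code>Column2</code>/<code>FilterColumn</code> names are hypothetical; substitute your own schema and the connection string produced by the connection string designer.
</p>
<pre><code>import org.apache.spark.sql.SparkSession

object AdobeAnalyticsSparkExample {
  def main(args: Array[String]): Unit = {
    // Local SparkSession; in a real deployment the master is set by the cluster.
    val spark = SparkSession.builder()
      .appName("AdobeAnalyticsExample")
      .master("local[*]")
      .getOrCreate()

    // Read a table through the CData JDBC driver. Everything after
    // "jdbc:adobeanalytics:" is the property string produced by the
    // driver's built-in connection string designer.
    val df = spark.read
      .format("jdbc")
      .option("url", "jdbc:adobeanalytics:[extraconnectionprops]")
      .option("dbtable", "SampleTable") // hypothetical table name
      .option("driver", "cdata.jdbc.adobeanalytics.AdobeAnalyticsDriver")
      .load()

    // Inspect the schema the driver reports for the table.
    df.printSchema()

    // Register the DataFrame as a temporary view so it can be queried with SQL.
    // createOrReplaceTempView is the Spark 2.x replacement for the older
    // registerTempTable call used in the shell session above.
    df.createOrReplaceTempView("sampletable")

    // Run an ad hoc SQL query; the column names here are placeholders.
    spark.sql("SELECT Column1, Column2 FROM sampletable WHERE FilterColumn = 'value'")
      .show()

    spark.stop()
  }
}</code></pre>
<p>
Package the application and run it with <code>spark-submit</code>, passing the driver JAR via <var>jars</var> just as in step 1, for example: <code>spark-submit --jars /path/to/cdata.jdbc.adobeanalytics.jar --class AdobeAnalyticsSparkExample app.jar</code> (paths here are illustrative).
</p>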