
Operational Reporting on Azure Data Lake Storage Data from Spotfire Server



Create and share Spotfire data visualizations with real-time connectivity to remote Azure Data Lake Storage data.

The CData JDBC Driver for Azure Data Lake Storage installs directly into your Java reporting server, giving your enterprise reporting solution real-time access to Azure Data Lake Storage data. Use the data source template in this article to expand your Spotfire library to include remote Azure Data Lake Storage data; you can then create and share real-time data visualizations that reflect any changes to the underlying data.
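
Before wiring the driver into Spotfire, you can verify connectivity from a standalone JDBC program. The following is a minimal sketch, assuming placeholder values for the account, file system, and access key; the connection properties and ping query mirror the data source template later in this article.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;

    public class AdlsConnectionCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection properties -- replace with your own values
            Properties props = new Properties();
            props.setProperty("Schema", "ADLSGen2");
            props.setProperty("Account", "myAccount");
            props.setProperty("FileSystem", "myFileSystem");
            props.setProperty("AccessKey", "myAccessKey");
            props.setProperty("InitiateOAuth", "GETANDREFRESH");

            // The driver JAR must be on the classpath; JDBC 4 registers it automatically
            try (Connection conn = DriverManager.getConnection("jdbc:adls:", props);
                 Statement stmt = conn.createStatement();
                 // Same query the template uses as its ping command
                 ResultSet rs = stmt.executeQuery("SELECT * FROM Resources LIMIT 1")) {
                System.out.println("Connection succeeded; returned a row: " + rs.next());
            }
        }
    }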

Connect to Azure Data Lake Storage as a JDBC Data Source

To install the CData JDBC Driver for Azure Data Lake Storage on Spotfire Server, drop the driver JAR into the classpath and use the data source template in this section.

  1. To add the driver to Spotfire Server's classpath, copy the driver JAR from the lib subfolder of the driver installation folder to the lib subfolder of your Spotfire Server installation (for example, MySpotfireServerHomeDirectory/tomcat/lib). A quick way to confirm the JAR is visible to the server JVM is sketched after these steps.

    Note that the .lic file must be located in the same folder as the JAR.

  2. In the TIBCO Spotfire Server Configuration Tool, click the Configuration tab and select Data Source Templates in the Configuration Start node.
  3. Create a new data source template with the following:

    
    <jdbc-type-settings>
      <type-name>adls</type-name>
      <driver>cdata.jdbc.adls.ADLSDriver</driver>
      <connection-url-pattern>jdbc:adls:</connection-url-pattern>
      <ping-command>SELECT * FROM Resources LIMIT 1</ping-command>
      <connection-properties>
        <connection-property>
          <key>Schema</key>
          <value>ADLSGen2</value>
        </connection-property>
        <connection-property>
          <key>Account</key>
          <value>myAccount</value>
        </connection-property>
        <connection-property>
          <key>FileSystem</key>
          <value>myFileSystem</value>
        </connection-property>
        <connection-property>
          <key>AccessKey</key>
          <value>myAccessKey</value>
        </connection-property>
        <connection-property>
          <key>InitiateOAuth</key>
          <value>GETANDREFRESH</value>
        </connection-property>
      </connection-properties>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <max-column-name-length>32</max-column-name-length>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
      <supports-distinct>true</supports-distinct>
      <supports-order-by>true</supports-order-by>
      <column-name-pattern>"$$name$$"</column-name-pattern>
      <table-name-pattern>"$$name$$"</table-name-pattern>
      <schema-name-pattern>"$$name$$"</schema-name-pattern>
      <catalog-name-pattern>"$$name$$"</catalog-name-pattern>
      <procedure-name-pattern>"$$name$$"</procedure-name-pattern>
      <column-alias-pattern>"$$name$$"</column-alias-pattern>
      <string-literal-quote>'</string-literal-quote>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
      <procedure-table-jdbc-type>0</procedure-table-jdbc-type>
      <procedure-table-type-name></procedure-table-type-name>
      <date-format-expression>$$value$$</date-format-expression>
      <date-literal-format-expression>'$$value$$'</date-literal-format-expression>
      <time-format-expression>$$value$$</time-format-expression> 
      <time-literal-format-expression>'$$value$$'</time-literal-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <date-time-literal-format-expression>'$$value$$'</date-time-literal-format-expression>
      <java-to-sql-type-conversions>VARCHAR($$value$$) VARCHAR(255) INTEGER BIGINT REAL DOUBLE PRECISION DATE TIME TIMESTAMP</java-to-sql-type-conversions>
      <temp-table-name-pattern>$$name$$#TEMP</temp-table-name-pattern>
      <create-temp-table-command>CREATE TABLE $$name$$#TEMP $$column_list$$</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$#TEMP</drop-temp-table-command>
      <data-source-authentication>false</data-source-authentication>
      <lob-threshold>-1</lob-threshold>
      <use-ansii-style-outer-join>false</use-ansii-style-outer-join>
      <credentials-timeout>86400</credentials-timeout>
    </jdbc-type-settings>
    
  4. Restart the Spotfire Server service.
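
As noted in step 1, you can confirm that the driver JAR is visible to the server JVM before restarting. This is a minimal sketch, assuming you run it with the same classpath the server uses (for example, by adding the tomcat/lib JARs with -cp):

    public class DriverClasspathCheck {
        public static void main(String[] args) throws ClassNotFoundException {
            // Throws ClassNotFoundException if the driver JAR is missing from the classpath
            Class.forName("cdata.jdbc.adls.ADLSDriver");
            System.out.println("ADLS driver class loaded successfully.");
        }
    }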

The driver's support for standard SQL integrates real-time connectivity to Azure Data Lake Storage data into the familiar interfaces of the Spotfire Platform. To access Azure Data Lake Storage data from Spotfire Professional and other Spotfire applications, create information links in the Information Designer.

As you select columns and filters, Spotfire Server builds the information link's underlying SQL query. Click Open Data to load the data into Spotfire.
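
For a sense of what such a query looks like when run through the driver directly, here is a sketch; the FullPath and Type columns on the Resources table are illustrative assumptions, so substitute the columns the driver actually exposes in your environment.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class InformationLinkStyleQuery {
        public static void main(String[] args) throws Exception {
            // Connection properties inline for brevity; see the earlier sketch
            String url = "jdbc:adls:Schema=ADLSGen2;Account=myAccount;"
                       + "FileSystem=myFileSystem;AccessKey=myAccessKey;"
                       + "InitiateOAuth=GETANDREFRESH;";
            // FullPath and Type are assumed column names, used for illustration only
            String sql = "SELECT FullPath, Type FROM Resources WHERE Type = ?";
            try (Connection conn = DriverManager.getConnection(url);
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "FILE");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("FullPath"));
                    }
                }
            }
        }
    }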

Report authors can then build Azure Data Lake Storage visualizations based on Spotfire data tables without writing SQL queries by hand. Report viewers can rely on accurate and current Azure Data Lake Storage data.