Operational Reporting on Snowflake Data from Spotfire Server
Create and share Spotfire data visualizations with real-time connectivity to remote Snowflake data.
You can gain real-time access to Snowflake data from your enterprise reporting solution: the CData JDBC Driver for Snowflake installs directly into your Java reporting server. Use the data source template in this article to expand your Spotfire library to include remote Snowflake data. You can then create and share real-time data visualizations that reflect any changes to Snowflake data.
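To illustrate what that drop-in connectivity looks like, here is a minimal sketch of querying Snowflake through the driver from any Java application. It assumes the driver JAR is on the classpath; the connection values match the placeholders used in the data source template later in this article, and Products is a hypothetical table, so substitute your own account details and objects:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SnowflakeJdbcExample {
    public static void main(String[] args) throws Exception {
        // Semicolon-separated CData connection properties; these values are
        // the same placeholders used in the data source template below.
        String url = "jdbc:snowflake:User=Admin;Password=test123;Server=localhost;"
                + "Database=Northwind;Warehouse=TestWarehouse;Account=Tester1;";
        try (Connection conn = DriverManager.getConnection(url);
                Statement stmt = conn.createStatement();
                // Products is a hypothetical table; query any table in your schema.
                ResultSet rs = stmt.executeQuery("SELECT * FROM Products LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}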
About Snowflake Data Integration
CData simplifies access and integration of live Snowflake data. Our customers leverage CData connectivity to:
- Read and write Snowflake data quickly and efficiently.
- Dynamically obtain metadata for the specified Warehouse, Database, and Schema.
- Authenticate in a variety of ways, including OAuth, OKTA, Azure AD, Azure Managed Service Identity, PingFederate, private key, and more.
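These options are typically supplied as semicolon-separated properties in the JDBC connection string. For example, an OKTA-authenticated connection string might look like the following (the AuthScheme property and its values follow CData's usual conventions; confirm the exact names and values in the driver's help documentation):

jdbc:snowflake:AuthScheme=OKTA;User=user@example.com;Server=myaccount.snowflakecomputing.com;Database=Northwind;Warehouse=TestWarehouse;Account=myaccount;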
Many users leverage CData solutions to access Snowflake from their preferred tools and applications, and to replicate data from their disparate systems into Snowflake for comprehensive warehousing and analytics.
For more information on integrating Snowflake with CData solutions, refer to our blog: https://www.cdata.com/blog/snowflake-integrations.
Getting Started
Connect to Snowflake as a JDBC Data Source
To install the CData JDBC Driver for Snowflake on Spotfire Server, drop the driver JAR into the classpath and use the data source template in this section.
- To add the driver to Spotfire Server's classpath, copy the driver JAR from the lib subfolder in the driver installation folder to the lib subfolder of your Spotfire Server installation (for example, MySpotfireServerHomeDirectory/tomcat/lib). Note that the .lic file must be located in the same folder as the JAR.
- In the TIBCO Spotfire Server Configuration Tool, click the Configuration tab and select Data Source Templates in the Configuration Start node.
- Create a new data source template with the following:
<jdbc-type-settings>
  <type-name>snowflake</type-name>
  <driver>cdata.jdbc.snowflake.SnowflakeDriver</driver>
  <connection-url-pattern>jdbc:snowflake:</connection-url-pattern>
  <ping-command>SELECT * FROM Projects LIMIT 1</ping-command>
  <connection-properties>
    <connection-property>
      <key>User</key>
      <value>Admin</value>
    </connection-property>
    <connection-property>
      <key>Password</key>
      <value>test123</value>
    </connection-property>
    <connection-property>
      <key>Server</key>
      <value>localhost</value>
    </connection-property>
    <connection-property>
      <key>Database</key>
      <value>Northwind</value>
    </connection-property>
    <connection-property>
      <key>Warehouse</key>
      <value>TestWarehouse</value>
    </connection-property>
    <connection-property>
      <key>Account</key>
      <value>Tester1</value>
    </connection-property>
  </connection-properties>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <max-column-name-length>32</max-column-name-length>
  <table-types>TABLE, VIEW</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
  <supports-distinct>true</supports-distinct>
  <supports-order-by>true</supports-order-by>
  <column-name-pattern>"$$name$$"</column-name-pattern>
  <table-name-pattern>"$$name$$"</table-name-pattern>
  <schema-name-pattern>"$$name$$"</schema-name-pattern>
  <catalog-name-pattern>"$$name$$"</catalog-name-pattern>
  <procedure-name-pattern>"$$name$$"</procedure-name-pattern>
  <column-alias-pattern>"$$name$$"</column-alias-pattern>
  <string-literal-quote>'</string-literal-quote>
  <max-in-clause-size>1000</max-in-clause-size>
  <condition-list-threshold>10000</condition-list-threshold>
  <expand-in-clause>false</expand-in-clause>
  <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
  <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
  <procedure-table-jdbc-type>0</procedure-table-jdbc-type>
  <procedure-table-type-name></procedure-table-type-name>
  <date-format-expression>$$value$$</date-format-expression>
  <date-literal-format-expression>'$$value$$'</date-literal-format-expression>
  <time-format-expression>$$value$$</time-format-expression>
  <time-literal-format-expression>'$$value$$'</time-literal-format-expression>
  <date-time-format-expression>$$value$$</date-time-format-expression>
  <date-time-literal-format-expression>'$$value$$'</date-time-literal-format-expression>
  <java-to-sql-type-conversions>
    VARCHAR($$value$$)
    VARCHAR(255)
    INTEGER
    BIGINT
    REAL
    DOUBLE PRECISION
    DATE
    TIME
    TIMESTAMP
  </java-to-sql-type-conversions>
  <temp-table-name-pattern>$$name$$#TEMP</temp-table-name-pattern>
  <create-temp-table-command>CREATE TABLE $$name$$#TEMP $$column_list$$</create-temp-table-command>
  <drop-temp-table-command>DROP TABLE $$name$$#TEMP</drop-temp-table-command>
  <data-source-authentication>false</data-source-authentication>
  <lob-threshold>-1</lob-threshold>
  <use-ansii-style-outer-join>false</use-ansii-style-outer-join>
  <credentials-timeout>86400</credentials-timeout>
</jdbc-type-settings>
- Restart the Spotfire Server service.
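A note on the template above: the connection property values (User, Password, Server, Database, Warehouse, and Account) are sample placeholders; substitute the credentials and objects for your own Snowflake account. Likewise, point the ping-command at a table that exists in your database, since Spotfire Server runs that statement to validate connections.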
The driver's support for standard SQL integrates real-time connectivity to Snowflake data into the familiar interfaces of the Spotfire Platform. To access Snowflake data from Spotfire Professional and other applications, including Jaspersoft Studio, create information links in the Information Designer.
As you select columns and filters, Spotfire Server builds the information link's underlying SQL query. Click Open Data to load the data into Spotfire.
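For example, an information link over a hypothetical Products table might generate SQL along these lines (the table, schema, and column names here are illustrative, following the quoting and table-expression patterns defined in the template above):

SELECT "ProductName", "UnitPrice" FROM "Northwind"."PUBLIC"."Products" WHERE "UnitPrice" > 30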
Report authors can then build Snowflake visualizations based on Spotfire data tables without writing SQL queries by hand. Report viewers can rely on accurate and current Snowflake data.