5 Expert Techniques for Optimizing Snowflake ODBC Performance

by Anusha MB | April 17, 2026

If you have ever used ODBC connections with Snowflake, you have probably noticed that queries do not always run as fast as expected. In most cases, the connection layer is the real issue, not Snowflake itself. Minor driver settings and configuration gaps often slow things down, and they are easy to fix once identified. Many data users still face latency, rising costs, and inconsistent performance because speed depends not just on warehouse size, but on how data is connected, queried, and returned.

Most of these issues can be solved at the driver level without accessing your Snowflake account or application code. CData ODBC Driver for Snowflake provides the settings needed to control how connections are managed, how queries are executed, and how data is fetched. This blog covers five techniques, each with specific configuration steps you can apply to improve query performance and reduce costs.

1. Optimize connection pooling for persistent sessions

Every time your application connects to Snowflake, it goes through authentication, session setup, and handshake protocols. When this happens repeatedly across multiple users, the overall delay directly impacts both performance and costs. Connection pooling solves this by maintaining a cache of open database sessions that multiple client threads can reuse. Instead of opening a new connection each time, your application picks one from the pool, uses it, and returns it. This can reduce session-related latency.

Driver-level pooling

The first step is to enable pooling. Connection pooling in the CData ODBC driver is configured directly in the DSN, which means it applies automatically to every connection without any changes to your application.

| Feature | CData ODBC driver | Snowflake native ODBC driver |
| --- | --- | --- |
| Built-in pooling | Configured at DSN level | Relies on OS-level ODBC Driver Manager |
| Pool enable setting | UsePool=1 | Not available natively |
| Idle connection timeout | Timeout=60 | Not configurable at driver level |
| Keep-alive control | Set CLIENT_SESSION_KEEP_ALIVE=true in DSN | Requires application-level handling |
| Platform support | Windows, macOS, Linux | Windows primarily |

Once these settings are made, the driver handles session reuse automatically across all requests.
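As a sketch, the pooling-related entries from the table might sit together in a DSN like this (the DSN name, account URL, and credential values below are placeholders, not documented defaults):

```ini
[CData Snowflake Source]
; Hypothetical DSN -- connection values are placeholders
URL=https://myaccount.snowflakecomputing.com
Warehouse=MY_WH
User=app_user
; Pooling and keep-alive settings from the table above
UsePool=1
Timeout=60
CLIENT_SESSION_KEEP_ALIVE=true
```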

Keep-alive configuration

Pooling alone is not enough if idle sessions keep getting dropped. Adding CLIENT_SESSION_KEEP_ALIVE=true to the DSN string tells Snowflake to hold sessions open during inactivity. The CData ODBC Driver supports this directly in the DSN configuration along with its SessionIdleTimeout property. With the native Snowflake ODBC driver, this parameter requires application-level handling or must be set via ALTER SESSION on the Snowflake side.

Pool sizing

Having the right number of connections in the pool is just as important as enabling it. A simple rule of thumb is to set the pool size at about two times your average number of concurrent queries. For example, if 10 queries typically run at the same time, a pool of 20 gives enough room to handle the load. If the pool grows beyond what your warehouse can handle, queries start queuing, which ties into the warehouse sizing discussed in Section 4.
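The 2x rule of thumb is easy to express as a small helper. This is our own illustration (the function name and `factor` default are not driver settings), useful for sizing a pool from observed concurrency:

```python
def recommended_pool_size(avg_concurrent_queries: int, factor: int = 2) -> int:
    """Rule of thumb from above: pool size ~= 2x average concurrent queries.

    Always returns at least 1 so the pool is never empty.
    """
    return max(1, avg_concurrent_queries * factor)

print(recommended_pool_size(10))  # 20 connections for ~10 concurrent queries
```

If 20 exceeds what your warehouse can serve without queuing, lower the factor rather than the concurrency estimate.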

2. Push the work down: enable query folding, pushdown, and passthrough

Query folding is the process by which a BI tool translates its transformations into native Snowflake SQL. When it works, Snowflake handles the processing and returns only the results. When it breaks, the BI tool pulls raw data and processes everything locally, leading to slower performance and higher memory usage.

Verify folding

Before optimizing, check whether folding is working in your setup. Here is how to verify it:

  • Power BI: Right-click any step in Power Query and look for "View Native Query." If available, that step is folding. If greyed out, folding has broken.

  • Tableau: Check the performance log for SFD_QF_SUCCESS. If it is missing, transformations are running locally instead of on Snowflake.

Once you have confirmed where folding breaks, the next step is to make sure pushdown is enabled at the driver level.

Query pushdown and passthrough settings

With the native Snowflake ODBC driver, pushdown depends entirely on the BI tool. The driver has no setting to control what runs on Snowflake versus what gets processed locally. The CData ODBC Driver includes a built-in query engine that actively pushes supported operations like filters, aggregations, and projections directly to Snowflake. If something cannot be pushed down, the driver handles it client-side on its own. You don't have to rely on the BI tool to get this right.

When you need full control, set QueryPassthrough=true. This sends your SQL directly to Snowflake with no driver-side changes, which is helpful for Snowflake-specific syntax like time travel or semi-structured projections.
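For example, with passthrough enabled, Snowflake-specific time-travel syntax reaches Snowflake unmodified. The table name and offset below are illustrative:

```sql
-- Sent as-is when QueryPassthrough=true; the driver makes no changes
SELECT * FROM orders AT (OFFSET => -3600);  -- table state one hour ago
```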

One common reason pushdown breaks is data type mismatches. If your query casts a column unnecessarily, the driver pulls the data locally instead of letting Snowflake handle it.

-- Good: matches Snowflake native types
SELECT order_id, order_date FROM orders WHERE region = 'US-East'

-- Bad: unnecessary CAST forces client-side processing
SELECT order_id, CAST(order_date AS VARCHAR) FROM orders WHERE region = 'US-East'

3. Reuse query plans with prepared statements and parameter binding

Once your queries are folding correctly and pushdown is working at the driver level, the next thing to look at is how often Snowflake is compiling the same query. Applications that run the same query repeatedly with different values end up parsing, validating, and building an execution plan every single time. A prepared statement solves this by defining the query structure once: Snowflake compiles it and reuses the plan with different parameters, skipping the parsing step on every run.

Bound parameters

Instead of compiling the same query every time, bind your values to a prepared statement. Snowflake recognizes the structure, skips recompilation, and executes directly.

ODBC API implementation

The approach varies slightly by language, but the core idea is the same. Define the query once, bind the values, and execute.

  • C/C++: Use SQLPrepare to define the query, SQLBindParameter to attach values, and SQLExecute to run it. For bulk inserts, set SQL_ATTR_PARAMSET_SIZE to batch multiple rows in a single call.

  • Python (pyodbc): Pass parameters directly in the execute call:
    cursor.execute("SELECT * FROM orders WHERE region = ?", "US-East")

  • .NET: Define your query in OdbcCommand, use ? for parameter values, and bind them using Parameters.AddWithValue

Avoid dynamic SQL

Building SQL strings through concatenation means Snowflake treats every variation as a new query, which leads to no plan reuse, higher compilation cost, and exposure to SQL injection. Use parameterized queries instead; they are both faster and more secure.
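The contrast is easiest to see side by side. The sketch below uses Python's built-in sqlite3 module purely as a stand-in, since it shares the same DB-API `?` placeholder style that pyodbc uses against Snowflake; the table and data are invented for illustration:

```python
import sqlite3

# In-memory database standing in for a Snowflake connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "US-East"), (2, "US-West")])

region = "US-East"

# Bad: a new SQL string per value -- no plan reuse, open to injection
unsafe = conn.execute(
    "SELECT order_id FROM orders WHERE region = '" + region + "'").fetchall()

# Good: one statement shape, values bound as parameters
safe = conn.execute(
    "SELECT order_id FROM orders WHERE region = ?", (region,)).fetchall()

assert unsafe == safe == [(1,)]
```

Both calls return the same rows here, but only the parameterized form lets the database recognize and reuse the statement, and only it is safe when `region` comes from user input.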

4. Right-size warehouses and align driver settings to reduce costs

Your queries cost less if the warehouse running them is the right fit for your workload. While warehouse sizing is configured on the Snowflake side, the driver you use determines how much work that warehouse actually has to do.

Why driver choice affects warehouse sizing

When the CData ODBC Driver pushes filters, aggregations, and projections directly to Snowflake, the warehouse processes less data per query. Combined with QueryPassthrough=true, your queries run leaner, so a smaller warehouse can often handle the same workload. With the native Snowflake ODBC driver, pushdown depends on the BI tool, which may pull more data than needed and push you toward a larger, more expensive warehouse.

Start with an X-Small or Small, test your typical ODBC queries, and scale up only when you notice queuing or slower execution. Snowflake credits double with every size increase: X-Small costs 1 credit per hour, Small costs 2, Medium costs 4, and Large costs 8.
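The doubling pattern makes cost projection simple. A small helper (our own illustration, with the size list truncated to the tiers mentioned above plus X-Large) captures it:

```python
# Each warehouse size costs twice the previous one, starting at 1 credit/hour
SIZES = ["X-Small", "Small", "Medium", "Large", "X-Large"]

def credits_per_hour(size: str) -> int:
    """Return the hourly credit cost for a warehouse size."""
    return 2 ** SIZES.index(size)

print(credits_per_hour("Medium"))  # 4 credits/hour
```

Worth noticing: moving up one size doubles the hourly bill, so a query that pushdown makes twice as fast on a smaller warehouse often breaks even or better.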

Auto-suspend with driver-level session handling

By default, Snowflake keeps a warehouse running for 10 minutes after the last query. Set AUTO_SUSPEND=60 and AUTO_RESUME=TRUE so it shuts down after 60 seconds of inactivity and restarts automatically. The short delay on resume can cause issues if your driver opens a fresh connection each time. The CData driver's built-in pooling and CLIENT_SESSION_KEEP_ALIVE=true from Section 1 keep sessions ready, so your application does not stall. The CData driver's Timeout property (default 60 seconds) also gives enough buffer for warehouse resume time.
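On the Snowflake side, both settings can be applied with a single statement (the warehouse name below is a placeholder):

```sql
ALTER WAREHOUSE my_wh SET
  AUTO_SUSPEND = 60     -- suspend after 60 seconds of inactivity
  AUTO_RESUME = TRUE;   -- restart automatically on the next query
```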

Credit monitoring

To track credit usage, query SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY:

SELECT warehouse_name,
       SUM(credits_used) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;

For always-on visibility, set up a dashboard in CData Connect AI to monitor usage across all your warehouses.

5. Minimize network latency with batching and diagnostics

Even with the right warehouse size and optimized queries, network distance can still slow things down. If your application server and Snowflake account sit in different cloud regions, every query travels that distance. Moving your application to the same region can improve performance by 30% to 50%. For production workloads with compliance needs, Snowflake PrivateLink routes traffic through the cloud provider's private network instead of the public internet.

At the driver level, set PREFETCH_ROWS to pull multiple rows per round-trip instead of one at a time. Use around 10,000 for analytics workloads and 1,000 for transactional queries. This cuts down round-trips without overloading memory. If queries still feel slow after all of this, check the network layer. Use SnowCD to test connectivity to Snowflake endpoints. For deeper analysis, use Wireshark:
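As a sketch, the prefetch setting would sit in the DSN alongside the pooling entries from Section 1; confirm the exact property name against your driver's documentation before relying on it:

```ini
; Hypothetical DSN entry -- value tuned per workload as described above
; (~10,000 for analytics, ~1,000 for transactional queries)
PREFETCH_ROWS=10000
```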

  1. Filter for your Snowflake host: ip.addr ==

  2. Filter for TCP handshakes: tcp.flags.syn == 1 && tcp.flags.ack == 0

  3. Count the results; each entry is a new connection being opened

A high count means connections are not being reused, which points back to your pooling configuration in Step 1. When applied together, these five techniques make your Snowflake ODBC connections faster, cheaper, and easier to manage.

Frequently asked questions

How do I keep a Snowflake ODBC session alive without re-authenticating?

Enable CLIENT_SESSION_KEEP_ALIVE=true in the DSN. The CData ODBC Driver supports this directly in the DSN along with its SessionIdleTimeout property. The native Snowflake driver requires ALTER SESSION or application-level handling.

Does the 32-bit vs 64-bit driver architecture affect performance?

Yes, use the 64-bit driver for modern BI tools like Power BI to avoid architecture mismatches that cause authentication failures and slower memory paging.

What is the difference between QueryPassthrough and the driver's built-in pushdown?

Pushdown lets the CData driver decide what runs on Snowflake versus client-side. QueryPassthrough=true sends your SQL to Snowflake exactly as written. Use pushdown for BI tool queries and passthrough for Snowflake-specific syntax like time travel.

How can I reduce round-trips when fetching large result sets?

Set PREFETCH_ROWS to pull multiple rows per round-trip: around 10,000 for analytics and 1,000 for transactional queries. Pair this with connection pooling to reuse sessions instead of opening fresh connections.

Start optimizing your Snowflake ODBC performance using CData

CData ODBC Driver for Snowflake gives you faster, simpler connectivity with built-in support for connection pooling, query pushdown, prepared statements, and more. Connect to over 350 data sources through a unified driver. Start your free trial today!

Explore CData Drivers and Connectors

Get blazing-fast access to live data with seamless connectivity between your data sources and the tools you use every day. Our standards-based connectors make data integration effortless – just point, click, and go.

Try Them Now