Configuring and Using the JDBC Driver

JDBC Driver Class

Use net.snowflake.client.jdbc.SnowflakeDriver as the driver class in your JDBC application.

Note

  • Don’t reference any other Snowflake classes or methods in your application code as they are subject to change in the future to implement improvements and fixes.

  • The previous driver class, com.snowflake.client.jdbc.SnowflakeDriver, is still supported but is deprecated and will be removed in a future release (TBD). Code that references the previous class name will continue to work, but you should update it to reference the new class name.

JDBC Driver Connection String

Using the JDBC driver to connect to Snowflake requires a connection string with the following syntax.

Syntax

jdbc:snowflake://<account_name>.snowflakecomputing.com/?<connection_params>

Connection Parameters

account_name

Specifies the full name of your account (provided by Snowflake). Depending on the cloud platform (AWS or Azure) and Snowflake Region where your account is hosted, the full account name might require additional segments:

For example, if your account name is xy12345:

+----------------+--------------------------+-------------------------------+
| Cloud Platform | Snowflake Region         | Full Account Name             |
|----------------+--------------------------+-------------------------------|
| AWS            | US West                  | xy12345                       |
| AWS            | US East                  | xy12345.us-east-1             |
| AWS            | Canada                   | xy12345.ca-central-1          |
| AWS            | EU (Dublin)              | xy12345.eu-west-1             |
| AWS            | EU (Frankfurt)           | xy12345.eu-central-1          |
| AWS            | Asia Pacific (Singapore) | xy12345.ap-southeast-1        |
| AWS            | Asia Pacific (Sydney)    | xy12345.ap-southeast-2        |
| Azure          | East US 2                | xy12345.east-us-2.azure       |
| Azure          | US Gov Virginia          | xy12345.us-gov-virginia.azure |
| Azure          | West Europe              | xy12345.west-europe.azure     |
| Azure          | Australia East           | xy12345.australia-east.azure  |
+----------------+--------------------------+-------------------------------+

Important

If either of the following conditions is true, your account name differs from the structure described above:

  • If your Snowflake Edition is VPS, please contact Snowflake Support for your account name details.

  • If AWS PrivateLink is enabled for your account, your account name requires an additional privatelink segment. For more details, see AWS PrivateLink & Snowflake.

For more details about regions and platforms, see Snowflake Regions and Cloud Platforms.

region_id — Deprecated

Specifies the ID for the Snowflake Region where your account is located.

This parameter is no longer used because the region information, if required, is included as part of the full account name. It is documented here only for backward compatibility.

connection_params

Specifies a series of one or more parameters, in the form of <param>=<value>, with each parameter separated by the ampersand character &, and no spaces anywhere in the connection string:
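The rules above (pairs in the form <param>=<value>, joined by & with no spaces) can be sketched as a small helper. This is illustrative only; buildConnectionString is not part of the driver:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class ConnectionStringBuilder {
    // Illustrative helper: joins param=value pairs with '&', producing a
    // connection string with no spaces anywhere.
    static String buildConnectionString(String account, Map<String, String> params) {
        String query = params.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return "jdbc:snowflake://" + account + ".snowflakecomputing.com/?" + query;
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order, so parameters appear as added.
        Map<String, String> params = new LinkedHashMap<>();
        params.put("user", "peter");
        params.put("warehouse", "mywh");
        params.put("db", "mydb");
        params.put("schema", "public");
        System.out.println(buildConnectionString("xy12345", params));
    }
}
```

The resulting string matches the examples shown later in this topic.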

user=<login_name>

Specifies the login name of the user for the connection.

password=<string>

Specifies the password for the specified user.

There are two ways to specify the password.

The first is to pass the user ID and password directly to the getConnection method:

String user = "<user>";          // replace "<user>" with your user name
String password = "<password>";  // replace "<password>" with your password
Connection con = DriverManager.getConnection("jdbc:snowflake://<account>.snowflakecomputing.com/", user, password);

The second is to create a Properties object, update the Properties object with the password, and pass the Properties object to the getConnection method:

String user = "<user>";          // replace "<user>" with your user name
String password = "<password>";  // replace "<password>" with your password
Properties props = new Properties();
props.put("user", user);
props.put("password", password);
Connection con = DriverManager.getConnection("jdbc:snowflake://<account>.snowflakecomputing.com/", props);

Attention

We strongly recommend that you do not include the user password in the JDBC connection string because the password could be inadvertently exposed by the client application that uses the string to connect to Snowflake. Instead, use the interface(s) provided by the application to specify the user password.

authenticator=<string>

Specifies the authenticator to use for verifying user login credentials:

  • snowflake (Default) to use the internal Snowflake authenticator.

  • externalbrowser to authenticate using your web browser and Okta, ADFS, or any other SAML 2.0-compliant identity provider (IdP) that has been defined for your account.

  • https://<your_okta_account_name>.okta.com (i.e. the URL endpoint for Okta) to authenticate through native Okta (only supported if your IdP is Okta).

Default is snowflake.

Note

The externalbrowser authenticator is only supported in terminal windows that have web browser access. For example, a terminal window on a remote machine accessed through an SSH (Secure Shell) session might require additional setup to open a web browser.

If you don’t have access to a web browser, but your IdP is Okta, you can use native Okta (i.e. set the authenticator to https://<your_okta_account_name>.okta.com).

For more information, see Managing/Using Federated Authentication.
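A minimal sketch of selecting an authenticator via connection properties. The Okta account name shown is a placeholder, and the getConnection call is omitted because it requires a live Snowflake account:

```java
import java.util.Properties;

public class AuthenticatorExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("user", "peter");  // placeholder login name
        // Opens the default web browser for SAML 2.0 federated login:
        props.put("authenticator", "externalbrowser");
        // Or, if your IdP is Okta and no browser is available, authenticate
        // through native Okta instead ("myorg" is a placeholder):
        // props.put("authenticator", "https://myorg.okta.com");

        // The properties would then be passed to DriverManager.getConnection(url, props);
        // that call is omitted here.
        System.out.println(props.getProperty("authenticator"));
    }
}
```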

role=<name>

Specifies the default access control role to use in the Snowflake session initiated by the driver. The specified role must be an existing role that has already been granted to the specified user. If the role has not been granted to the user, it is ignored when the driver initiates the session.

After connecting, the USE ROLE command can be executed to set a different role for the session.

For more information about roles and access control, see Access Control in Snowflake.

db=<name>

Specifies the default database to use once connected, or nodb. The specified database should be an existing database for which the specified default role has privileges.

After connecting, the USE DATABASE command can be executed to set a different database for the session.

schema=<name>

Specifies the default schema to use for the specified database once connected, or noschema. The specified schema should be an existing schema for which the specified default role has privileges.

After connecting, the USE SCHEMA command can be executed to set a different schema for the session.

warehouse=<name>

Specifies the virtual warehouse to use once connected, or nowarehouse. The specified warehouse should be an existing warehouse for which the specified default role has privileges.

After connecting, the USE WAREHOUSE command can be executed to set a different warehouse for the session.
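As a sketch, the session-context parameters described above can be supplied as connection properties and later changed with USE commands. The role, database, schema, and warehouse names here are placeholders, and the connection itself is omitted:

```java
import java.util.Properties;

public class SessionContextExample {
    public static void main(String[] args) {
        // Initial session context via connection properties
        // (equivalent to role=, db=, schema=, and warehouse= in the URL):
        Properties props = new Properties();
        props.put("user", "peter");
        props.put("role", "analyst");   // must already be granted to the user
        props.put("db", "mydb");        // the role needs privileges on it
        props.put("schema", "public");
        props.put("warehouse", "mywh");

        // After connecting, the context can still be changed per session;
        // each statement would be run with Statement.execute(...) on an
        // open connection (omitted here):
        String[] useStatements = {
            "USE ROLE sysadmin",
            "USE DATABASE otherdb",
            "USE SCHEMA otherdb.staging",
            "USE WAREHOUSE otherwh"
        };
        for (String sql : useStatements) {
            System.out.println(sql);
        }
    }
}
```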

tracing=<string>

Specifies the log level for the driver. The driver uses the standard Java log utility. Valid values for log level are:

OFF , SEVERE , WARNING , INFO , CONFIG , FINE , FINER , FINEST , ALL.

Default value is INFO.

passcode=<string>

Specifies the passcode to use for multi-factor authentication.

For more information about multi-factor authentication, see Multi-Factor Authentication (MFA).

passcodeInPassword=<string>

Specifies whether the passcode for multi-factor authentication is appended to the password:

  • on (or true) specifies the passcode is appended.

  • off (or false) or any other value specifies the passcode is not appended.

Default is off.

loginTimeout=<number>

Specifies how long to wait for a response when connecting to the Snowflake service before returning a login failure error.

Default is 60 seconds.

networkTimeout=<number>

Specifies how long to wait for a response when interacting with the Snowflake service before returning an error. Zero (0) indicates no network timeout is set.

Default is 0 seconds.

queryTimeout=<number>

Specifies how long to wait for a query to complete before returning an error. Zero (0) indicates to wait indefinitely.

Default is 0 seconds.
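The three timeout parameters can be combined in a single connection string. The values below are illustrative, not recommendations:

```java
public class TimeoutExample {
    public static void main(String[] args) {
        // Illustrative values: fail the login after 30 seconds, network
        // operations after 120 seconds, and individual queries after 3600 seconds.
        String url = "jdbc:snowflake://xy12345.snowflakecomputing.com/"
                + "?loginTimeout=30&networkTimeout=120&queryTimeout=3600";
        System.out.println(url);
    }
}
```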

application=<string>

Snowflake partner use only: Specifies the name of a partner application to connect through JDBC.

disableSocksProxy=<string>

Specifies whether the driver should ignore the SOCKS proxy configuration specified in the Java system options (if any):

  • on (or true) specifies to ignore the proxy.

  • off (or false) or any other value specifies to use the proxy.

Default is off.

Note: Setting this connection parameter will alter the behavior for all connections on the same JVM (Java virtual machine).

Other Parameters

Any session parameter can be included in the connection string. For example:

CLIENT_SESSION_KEEP_ALIVE=<Boolean>

Specifies whether to keep the current session active after a period of inactivity, or to force the user to login again. If the value is true, Snowflake keeps the session active indefinitely, even if there is no activity from the user. If the value is false, the user must log in again after four hours of inactivity.

Default is false.

For descriptions of all the session parameters, see Parameters.

Examples

Account in US West:

jdbc:snowflake://xy12345.snowflakecomputing.com/?user=peter&warehouse=mywh&db=mydb&schema=public

Account in EU (Frankfurt):

jdbc:snowflake://xy12345.eu-central-1.snowflakecomputing.com/?user=peter&warehouse=mywh&db=mydb&schema=public

Using Key Pair Authentication

Snowflake supports using key pair authentication rather than the typical username/password authentication. This authentication method requires a 2048-bit (minimum) RSA key pair. Generate the public-private key pair using OpenSSL. The public key is assigned to the Snowflake user who will use the Snowflake client.

To configure the public/private key pair:

  1. From the command line in a terminal window, generate a private key.

    You can generate either an encrypted version of the private key or an unencrypted version of the private key.

    To generate an unencrypted version, use the following command:

    $ openssl genrsa -out rsa_key.pem 2048
    

    To generate an encrypted version, use the following command:

    $ openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8
    

    It is typically safer to generate an encrypted version.

    If you use the second command to encrypt the private key, then OpenSSL prompts for a passphrase used to encrypt the private key file. We recommend using a strong passphrase to protect the private key. Record this passphrase in a secure location. You will input it when connecting to Snowflake. Note that the passphrase is only used for protecting the private key and will never be sent to Snowflake.

    Sample PEM private key

    -----BEGIN ENCRYPTED PRIVATE KEY-----
    MIIE6TAbBgkqhkiG9w0BBQMwDgQILYPyCppzOwECAggABIIEyLiGSpeeGSe3xHP1
    wHLjfCYycUPennlX2bd8yX8xOxGSGfvB+99+PmSlex0FmY9ov1J8H1H9Y3lMWXbL
    ...
    -----END ENCRYPTED PRIVATE KEY-----
    
  2. From the command line, generate the public key by referencing the private key:

    Assuming the private key is encrypted and contained in the file named “rsa_key.p8”, use the following command:

    $ openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
    

    Sample PEM public key

    -----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAy+Fw2qv4Roud3l6tjPH4
    zxybHjmZ5rhtCz9jppCV8UTWvEXxa88IGRIHbJ/PwKW/mR8LXdfI7l/9vCMXX4mk
    ...
    -----END PUBLIC KEY-----
    
  3. Copy the public and private key files to a local directory for storage. Record the path to the files. Note that the private key is stored using the PKCS#8 (Public Key Cryptography Standards) format and is encrypted using the passphrase you specified in the previous step; however, the file should still be protected from unauthorized access using the file permission mechanism provided by your operating system. It is your responsibility to secure the file when it is not being used.

  4. Assign the public key to the Snowflake user using ALTER USER. For example:

    ALTER USER jsmith SET RSA_PUBLIC_KEY='MIIBIjANBgkqh...';
    

    Note

    • Only security administrators (i.e. users with the SECURITYADMIN role) or higher can alter a user.

    • Exclude the public key header and footer in the SQL statement.

    Verify the user’s public key fingerprint using DESCRIBE USER:

    DESC USER jsmith;
    +-------------------------------+-----------------------------------------------------+---------+-------------------------------------------------------------------------------+
    | property                      | value                                               | default | description                                                                   |
    |-------------------------------+-----------------------------------------------------+---------+-------------------------------------------------------------------------------|
    | NAME                          | JSMITH                                              | null    | Name                                                                          |
    ...
    ...
    | RSA_PUBLIC_KEY_FP             | SHA256:nvnONUsfiuycCLMXIEWG4eTp4FjhVUZQUQbNpbSHXiA= | null    | Fingerprint of user's RSA public key.                                         |
    | RSA_PUBLIC_KEY_2_FP           | null                                                | null    | Fingerprint of user's second RSA public key.                                  |
    +-------------------------------+-----------------------------------------------------+---------+-------------------------------------------------------------------------------+
    

    Note

    The RSA_PUBLIC_KEY_2_FP property is described in Key Rotation (in this topic).

  5. Modify and execute the sample code below. The code decrypts the private key file and passes it to the Snowflake driver to create a connection:

  • Update the security parameters:

    • path: Specifies the local path to the private key file you created.

  • Update the session parameters:

    • user: Specifies your Snowflake login name.

    • account: Specifies the name of your account (provided by Snowflake).

Sample code

import java.util.Properties;
import java.sql.Connection;
import java.sql.Statement;
import java.sql.ResultSet;
import java.sql.DriverManager;
import java.io.File;
import java.io.FileInputStream;
import java.io.DataInputStream;
import java.util.Base64;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.KeyFactory;
import java.security.PrivateKey;
import javax.crypto.EncryptedPrivateKeyInfo;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class TestJdbc
{
    public static void main(String[] args)
        throws Exception
    {
      File f = new File("<path>/rsa_key.p8");
      FileInputStream fis = new FileInputStream(f);
      DataInputStream dis = new DataInputStream(fis);
      byte[] keyBytes = new byte[(int) f.length()];
      dis.readFully(keyBytes);
      dis.close();

      String encrypted = new String(keyBytes);
      String passphrase = System.getenv("PRIVATE_KEY_PASSPHRASE");
      encrypted = encrypted.replace("-----BEGIN ENCRYPTED PRIVATE KEY-----", "");
      encrypted = encrypted.replace("-----END ENCRYPTED PRIVATE KEY-----", "");
      EncryptedPrivateKeyInfo pkInfo = new EncryptedPrivateKeyInfo(Base64.getMimeDecoder().decode(encrypted));
      PBEKeySpec keySpec = new PBEKeySpec(passphrase.toCharArray());
      SecretKeyFactory pbeKeyFactory = SecretKeyFactory.getInstance(pkInfo.getAlgName());
      PKCS8EncodedKeySpec encodedKeySpec = pkInfo.getKeySpec(pbeKeyFactory.generateSecret(keySpec));
      KeyFactory keyFactory = KeyFactory.getInstance("RSA");
      PrivateKey encryptedPrivateKey = keyFactory.generatePrivate(encodedKeySpec);

      Class.forName("net.snowflake.client.jdbc.SnowflakeDriver");
      String url = "jdbc:snowflake://<account>.snowflakecomputing.com";
      Properties prop = new Properties();
      prop.put("user", "<user>");
      prop.put("account", "<account>");
      prop.put("privateKey", encryptedPrivateKey);

      Connection conn = DriverManager.getConnection(url, prop);
      Statement stat = conn.createStatement();
      ResultSet res = stat.executeQuery("select 1");
      res.next();
      System.out.println(res.getString(1));
      conn.close();
    }
}

Key Rotation

Snowflake supports multiple active keys to allow for uninterrupted rotation. Rotate and replace your public and private keys based on the expiration schedule you follow internally.

Currently, you can use the RSA_PUBLIC_KEY and RSA_PUBLIC_KEY_2 parameters for ALTER USER to associate up to 2 public keys with a single user.

To rotate your keys:

  1. Complete the steps in Using Key Pair Authentication to:

    • Generate a new private and public key set.

    • Assign the public key to the user. Set the public key value to either RSA_PUBLIC_KEY or RSA_PUBLIC_KEY_2 (whichever key value is not currently in use). For example:

      alter user jsmith set rsa_public_key_2='JERUEHtcve...';
      
  2. Update the code to connect to Snowflake. Specify the new private key.

    Snowflake verifies the correct active public key for authentication based on the private key submitted with your connection information.

  3. Remove the old public key from the user profile. For example:

    alter user jsmith unset rsa_public_key;
    

Connecting Using a Proxy Server

There are two ways to use a proxy server with the Snowflake JDBC Driver:

  • Add parameters to your client application’s JVM (Java Virtual Machine) options.

  • Include the proxy host and port information in the JDBC connection string or the properties passed to the DriverManager.getConnection() method.

Both techniques are documented below.

Specifying a Proxy Server by Setting JVM Options

To connect through a proxy server, you can set options inside your code or by passing command-line parameters to your client application JVM (Java virtual machine). The two techniques are equivalent.

To specify the options inside your code, do the following:

System.setProperty("http.useProxy", "true");
System.setProperty("http.proxyHost", "<proxy_host>");
System.setProperty("http.proxyPort", "<proxy_port>");
System.setProperty("https.proxyHost", "<proxy_host>");
System.setProperty("https.proxyPort", "<proxy_port>");

To specify the options via command-line parameters to your JVM, do the following:

-Dhttp.useProxy=true
-Dhttps.proxyHost=<proxy_host>
-Dhttp.proxyHost=<proxy_host>
-Dhttps.proxyPort=<proxy_port>
-Dhttp.proxyPort=<proxy_port>

To bypass the proxy for one or more IP addresses or URLs, add these locations to your JVM options:

-Dhttp.nonProxyHosts="*.foo.com|localhost"

Specifying a Proxy Server in the JDBC Connection String

You can specify a proxy server’s IP address and port number directly in the JDBC connection string:

jdbc:snowflake://<account>.<region_id>.snowflakecomputing.com/?warehouse=<warehouse_name> &useProxy=true&proxyHost=ip-172-31-89-76.ec2.internal&proxyPort=8888&proxyUser=test&proxyPassword=test

For example:

jdbc:snowflake://xy12345.us-east-1.snowflakecomputing.com/?warehouse=DemoWarehouse1 &useProxy=true&proxyHost=ip-172-31-89-76.ec2.internal&proxyPort=8888&proxyUser=test&proxyPassword=test

If your proxy server does not require authentication, you can omit the proxyUser and proxyPassword information.
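As noted earlier, the proxy settings can alternatively be supplied as properties passed to DriverManager.getConnection(), which keeps them out of the URL. A sketch, reusing the example values above (the getConnection call is omitted because it requires a live account):

```java
import java.util.Properties;

public class ProxyPropertiesExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("user", "peter");  // placeholder login name
        props.put("useProxy", "true");
        props.put("proxyHost", "ip-172-31-89-76.ec2.internal");
        props.put("proxyPort", "8888");
        // Omit proxyUser/proxyPassword if the proxy does not require authentication:
        props.put("proxyUser", "test");
        props.put("proxyPassword", "test");

        // DriverManager.getConnection(url, props) would then be called with a
        // URL that contains no proxy parameters (omitted here).
        System.out.println(props.getProperty("proxyHost") + ":" + props.getProperty("proxyPort"));
    }
}
```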

Note

The IP address must use the dash character (“-“) where an IP address usually uses a period character (“.”).

Note

The examples above contain blank space for readability; if you copy and paste either of these examples, please remove the extra blank space.

Note

Specifying the proxy information as part of the URL is less secure than other methods of specifying the proxy information.

Tip

Snowflake’s security model does not allow Secure Sockets Layer (SSL) proxies (using an HTTPS certificate). Your proxy server must use a publicly-available Certificate Authority (CA), reducing potential security risks such as a MITM (Man In The Middle) attack through a compromised proxy.

If you must use your SSL proxy, we strongly recommend that you update the server policy to pass through the Snowflake certificate such that no certificate is altered in the middle of communications.

Optionally NO_PROXY can be used to bypass the proxy for specific communications. For example, AWS S3 access can be bypassed by specifying NO_PROXY=".amazonaws.com".

OCSP

When the driver connects, Snowflake sends a certificate to confirm that the connection is to Snowflake rather than to a host that is impersonating Snowflake. The driver sends that certificate to an OCSP (Online Certificate Status Protocol) server to verify that the certificate has not been revoked.

If the driver cannot reach the OCSP server to verify the certificate, the driver can “fail open” or “fail closed”.

Choosing Fail-Open or Fail-Close Mode

JDBC Driver versions prior to 3.8.0 default to fail-close. Versions 3.8.0 and later default to fail-open. You can override the default behavior in any of the following ways:

  • Set the connection property ocspFailOpen to true or false. For example:

    Properties connection_properties = new Properties();
    connection_properties.put("ocspFailOpen", "false");
    ...
    connection = DriverManager.getConnection(connectionString, connection_properties);

  • Set the system property net.snowflake.jdbc.ocspFailOpen to true or false. For example:

    Properties p = new Properties(System.getProperties());
    p.put("net.snowflake.jdbc.ocspFailOpen", "false");
    System.setProperties(p);

Verifying the OCSP Connector or Driver Version

For more information about the driver or connector version, configuration, and OCSP behavior, see OCSP Client & Driver Configuration.

OCSP Response Cache Server

Note

The OCSP response cache server is currently supported by the Snowflake JDBC Driver 3.6.0 and higher.

Snowflake clients initiate every connection to a Snowflake service endpoint with a “handshake” that establishes a secure connection before actually transferring data. As part of the handshake, a client authenticates the TLS/SSL certificate for the service endpoint. The revocation status of the certificate is checked by sending a client certificate request to one of the OCSP (Online Certificate Status Protocol) servers for the CA (certificate authority).

A connection failure occurs when the response from the OCSP server is delayed beyond a reasonable time. The following caches persist the revocation status, helping alleviate these issues:

  • Memory cache, which persists for the life of the process.

  • File cache, which persists until the cache directory (e.g. ~/.cache/snowflake or ~/.snowsql/ocsp_response_cache) is purged.

  • Snowflake OCSP response cache server, which fetches OCSP responses from the CA’s OCSP servers hourly and stores them for 24 hours. Clients can then request the validation status of a given Snowflake certificate from this server cache.

    Important

    If your server policy denies access to most or all external IP addresses and web sites, you must whitelist the cache server address to allow normal service operation. The cache server hostname is ocsp*.snowflakecomputing.com:80.

    If you need to disable the cache server for any reason, set the SF_OCSP_RESPONSE_CACHE_SERVER_ENABLED environment variable to false. Note that the value is case-sensitive and must be in lowercase.

If none of the cache layers contain the OCSP response, the client then attempts to fetch the validation status directly from the OCSP server for the CA.

File Caches

To improve usability, the driver uses file caches for authentication and OCSP responses. By default, these files are stored in the following directories:

Linux

~/.cache/snowflake

macOS

~/Library/Caches/Snowflake

Windows

%USERPROFILE%\AppData\Local\Snowflake\Caches

If the JDBC application user does not have a user profile in the local operating system, the driver attempts to store the cache files in the temporary directory. You can configure the driver to write cache files to another directory using the following environment variables:

SF_TEMPORARY_CREDENTIAL_CACHE_DIR=string

Specifies the location of the temporary credential cache file in a local directory. This can also be configured with the JVM option -Dnet.snowflake.jdbc.temporaryCredentialCacheDir=string on launch.

SF_OCSP_RESPONSE_CACHE_DIR=string

Specifies the location of the OCSP response cache file in a local directory. This can also be configured with the JVM option -Dnet.snowflake.jdbc.ocspResponseCacheDir=string on launch.

For more information, see OCSP Response Cache Server in this topic.

Note that the JVM options should be set on launch, not programmatically (via System.setProperty()). If both the environment variable and the JVM option are provided, the JVM option is used.

Configuring JDBC Logging

Starting with version 3.0.4, the JDBC driver supports two logging frameworks:

  • Java Core Logging Facilities

  • Simple Logging Facade for Java

Java Core Logging Facilities (java.util.logging)

To use this logger, specify the following option for the JVM:

-Dnet.snowflake.jdbc.loggerImpl=net.snowflake.client.log.JDK14Logger

Then, you can customize the logging configuration using the application programming interface (API) for the logger. For more details, see the java.util.logging Package documentation.

For example, create a file named logging.properties that includes the following contents:

###########################################################
#   Default Logging Configuration File
#
# You can use a different file by specifying a filename
# with the java.util.logging.config.file system property.
# For example java -Djava.util.logging.config.file=myfile
############################################################

############################################################
#   Global properties
############################################################

# "handlers" specifies a comma-separated list of log Handler
# classes.  These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# ConsoleHandler and FileHandler are configured here such that
# the logs are dumped into both a standard error and a file.
handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler

# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers.  For any given facility this global level
# can be overridden by a facility-specific level.
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
.level = INFO

############################################################
# Handler specific properties.
# Describes specific configuration information for Handlers.
############################################################

# default file output is in the tmp dir
java.util.logging.FileHandler.pattern = /tmp/snowflake_jdbc%u.log
java.util.logging.FileHandler.limit = 5000000000000000
java.util.logging.FileHandler.count = 10
java.util.logging.FileHandler.level = INFO
java.util.logging.FileHandler.formatter = net.snowflake.client.log.SFFormatter

# Limit the messages that are printed on the console to INFO and above.
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = net.snowflake.client.log.SFFormatter

# Example to customize the SimpleFormatter output format
# to print one-line log message like this:
#     <level>: <log message> [<date/time>]
#
# java.util.logging.SimpleFormatter.format=%4$s: %5$s [%1$tc]%n

############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################

# Snowflake JDBC logging level.
net.snowflake.level = INFO
net.snowflake.handler = java.util.logging.FileHandler

Specify the JVM parameters in the command line:

java -Dnet.snowflake.jdbc.loggerImpl=net.snowflake.client.log.JDK14Logger -Djava.util.logging.config.file=logging.properties -jar application.jar

Here, application.jar is your application that uses the JDBC driver. The log files are located in /tmp/snowflake_jdbc*.

Simple Logging Facade for Java (org.slf4j)

If a logger implementation package (e.g. org.slf4j:slf4j-jdk14 or org.slf4j:slf4j-log4j12) or a custom logger (i.e. your own org.slf4j.impl.StaticLoggerBinder class) has been defined on the classpath, the driver automatically uses this logger.

You can also explicitly choose to use this logger by specifying the following JVM option:

-Dnet.snowflake.jdbc.loggerImpl=net.snowflake.client.log.SLF4JLogger

For more information, see the Simple Logging Facade for Java (SLF4J) documentation.

Note

If you do not explicitly specify a logger for the driver using either of the JVM options described above and you do not have a custom logger defined on the classpath (or you are using a driver version earlier than 3.0.4), the driver uses java.util.logging by default. However, the following default behavior applies:

  • You cannot specify where the log file is written. It is always written to the directory specified by the java.io.tmpdir system property:

    • In Linux and macOS environments, the default directory is usually either /tmp or /var/tmp.

    • In Windows environments, the default directory is usually C:\temp.

  • The logging level is determined by the tracing connection parameter (see above).

Non-standard JDBC API

The Snowflake JDBC driver supports an API outside of standard JDBC specification. To use this API, you might need to cast the interface (i.e. Connection, Statement, ResultSet, etc.) to the corresponding Snowflake implementation class (SnowflakeConnectionV1, SnowflakeStatementV1, SnowflakeResultSetV1, etc.).

One situation in which you might need to do this is when uploading data files directly from a stream to an internal stage, as described below.

Upload Data Files Directly from a Stream to an Internal Stage

You can upload data files using the PUT command. However, sometimes it makes sense to transfer data directly from a stream to an internal (i.e. Snowflake) stage as a file. Here is the method exposed in the SnowflakeConnectionV1 class:

/**
 * Method to compress data from a stream and upload it at a stage location.
 * The data will be uploaded as one file. No splitting is done in this method.
 *
 * Caller is responsible for releasing the inputStream after the method is
 * called.
 *
 * @param stageName    stage name: e.g. ~ or table name or stage name
 * @param destPrefix   path / prefix under which the data should be uploaded on the stage
 * @param inputStream  input stream from which the data will be uploaded
 * @param destFileName destination file name to use
 * @param compressData compress data or not before uploading stream
 * @throws java.sql.SQLException failed to compress and put data from a stream at stage
 */
public void uploadStream(String stageName,
                         String destPrefix,
                         InputStream inputStream,
                         String destFileName,
                         boolean compressData)
    throws SQLException

Sample usage:

Connection connection = DriverManager.getConnection(url, prop);
File file = new File("/tmp/test.csv");
FileInputStream fileInputStream = new FileInputStream(file);

// upload the file stream to the stage
((SnowflakeConnectionV1) connection).uploadStream("MYSTAGE", "testUploadStream",
   fileInputStream, "destFile.csv", true);

Implementation Notes

This section describes implementation details.

Batch Updates

When the application binds data, it can call addBatch() one or more times followed by executeBatch() once to process multiple rows in a single batch. Here is an example that binds values to variables and then batches INSERTs into a table. The table contains an INTEGER column and a VARCHAR column.

Connection connection = DriverManager.getConnection(url, prop);
connection.setAutoCommit(false);

PreparedStatement pstmt = connection.prepareStatement("INSERT INTO t(c1, c2) VALUES(?, ?)");
pstmt.setInt(1, 101);
pstmt.setString(2, "test1");
pstmt.addBatch();

pstmt.setInt(1, 102);
pstmt.setString(2, "test2");
pstmt.addBatch();

int[] count = pstmt.executeBatch(); // After execution, count[0]=1, count[1]=1
connection.commit();

When binding is used, the Snowflake JDBC driver sends data to the Snowflake database using one of two methods, either Regular Array Binding or Stage Array Binding. The driver chooses which method to use depending on a threshold based on the size of the data array, i.e., the number of rows multiplied by the number of columns.

  • Regular Array Binding: If the data size (rows multiplied by columns) is less than the threshold, the JDBC driver converts the data to columnar arrays and submits the data to the Snowflake database along with the query. (Regular Array Binding is also used for non-batch binding. When used in non-batch mode, columnar array conversion is not needed and is not performed.)

  • Stage Array Binding: If the data size exceeds the threshold, then the JDBC driver generates CSV files from the data and sends those files to a temporary stage area from which the files are loaded.

The current threshold (rows multiplied by columns) is 65280.
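The threshold check can be sketched as a small helper. This is illustrative only: `bindingMethod` is not part of the driver, the behavior at exactly the threshold is an assumption, and the threshold value itself is subject to change.

```java
public class BindingThresholdDemo {
    // Current documented threshold (rows multiplied by columns);
    // subject to change in future driver versions.
    static final int THRESHOLD = 65280;

    // Hypothetical helper mirroring the driver's choice: below the
    // threshold the driver uses regular array binding; otherwise it
    // generates CSV files and uses stage array binding.
    static String bindingMethod(int rows, int columns) {
        return (long) rows * columns < THRESHOLD ? "regular" : "stage";
    }

    public static void main(String[] args) {
        System.out.println(bindingMethod(1000, 2));   // 2,000 values -> regular
        System.out.println(bindingMethod(50000, 2));  // 100,000 values -> stage
    }
}
```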

The main motivation behind the two methods is performance. For small data sets, sending all of the data together with the query is straightforward, and the Snowflake database can quickly process the query along with the data. For large data sets, however, sending the data directly to the Snowflake database impacts the service and might exhaust resources, or even crash the server instance before the query completes. To limit peak resource consumption, the driver switches to stage array binding when the data size exceeds the threshold: the data is sent to a temporary stage for ingestion, which allows the Snowflake database to split the workload between query processing and the data update.

Stage Array Binding is enabled by default and transparent to users. However, in order to take advantage of this feature, the user must have the following privilege on the schema:

  • CREATE STAGE.

If the user does not have this privilege, the Snowflake JDBC driver falls back to Regular Array Binding.

Also ensure that the session has a current database and schema set when binding data. If they are not set and the driver chooses stage array binding, the driver executes a CREATE TEMPORARY STAGE command, which fails, and you’ll see the following command and error:

CREATE TEMPORARY STAGE SYSTEM$BIND file_format=(type=csv field_optionally_enclosed_by='"')
Cannot perform CREATE STAGE. This session does not have a current schema. Call 'USE SCHEMA', or use a qualified name.
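One way to avoid this error is to establish the current database and schema when the connection is created, using the driver’s db and schema connection properties. The sketch below only builds the Properties object; the names MYDB and PUBLIC (and the credentials) are placeholders.

```java
import java.util.Properties;

public class SessionContextDemo {
    // Builds connection properties that give the session a current
    // database and schema, so a driver-issued CREATE TEMPORARY STAGE
    // command has a schema to run in.
    static Properties connectionProps() {
        Properties prop = new Properties();
        prop.put("user", "jdoe");       // placeholder credentials
        prop.put("password", "secret");
        prop.put("db", "MYDB");         // current database for the session
        prop.put("schema", "PUBLIC");   // current schema for the session
        return prop;
    }

    public static void main(String[] args) {
        Properties p = connectionProps();
        System.out.println(p.getProperty("db") + "." + p.getProperty("schema"));
    }
}
```

Alternatively, execute a `USE SCHEMA <db>.<schema>` statement on the connection before binding data.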

Note

  • The use of a threshold (to switch between regular array binding and stage array binding) applies only to INSERT operations.

  • The threshold is subject to change.

  • Other Snowflake drivers don’t have this feature implemented yet.

Java Sample Program

For a working sample written in Java, right-click the name of the file, SnowflakeJDBCExample.java, and save the link/file to your local file system.

Troubleshooting Tips

Set Properties Carefully

The DriverManager.getConnection() method reads only the values of the Properties parameter that match specific, pre-defined names, for example “password”, “account”, etc. If you misspell a property name, or include extra properties, the driver ignores those properties without issuing an error or warning message. This can make it difficult to detect minor misspellings.

Use the Right Values for Connection String and Account

A common cause of connection problems is providing too much or too little information when asked for an account, account name, or connection string.

Account information typically looks like the following:

  • <account>

  • <account.region_ID>

  • <account.region_ID.platform_ID>

for example:

  • xy12345

  • xy12345.east-us-2

  • xy12345.east-us-2.azure

The JDBC Connection String typically looks similar to:

jdbc:snowflake://<account_name>.snowflakecomputing.com/?<connection_params>

Using the connection string when the account information is expected (or vice versa) can lead to difficult-to-debug problems.

If you can’t establish a connection when you are asked for an account, try both the minimum form (account only, e.g. “xy12345”) and the full account name (account.region.platform, e.g. “xy12345.east-us-2.azure”), making sure not to include “.snowflakecomputing.com” or the “jdbc:snowflake://” prefix.

If you can’t establish a connection when you are asked for a JDBC Connection String, use the full form of the connection string as shown above.
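To keep the two forms straight, it can help to build the connection string from the full account name in one place. The helper below is illustrative only; `jdbcUrl` is not part of the driver.

```java
public class ConnectionStringDemo {
    // Builds a JDBC connection string from a full account name,
    // e.g. "xy12345" or "xy12345.east-us-2.azure". Note that the
    // account name itself contains neither "jdbc:snowflake://" nor
    // ".snowflakecomputing.com".
    static String jdbcUrl(String fullAccountName) {
        return "jdbc:snowflake://" + fullAccountName + ".snowflakecomputing.com/";
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("xy12345.east-us-2.azure"));
    }
}
```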

For more information about account names, see Snowflake Regions, especially the section Specifying Region Information in Your Hostname.