Installing and Configuring the Spark Connector¶
Multiple versions of the connector are supported; however, Snowflake strongly recommends using the most recent version of the connector. To view release information about the latest version, see the Spark Connector Release Notes (link in the sidebar).
The instructions in this topic can be used to install and configure all supported versions of the connector.
Snowflake supports multiple versions of the connector:
Supported Spark versions:
Spark 2.4, 2.3, 2.2
Supported Scala versions:
Scala 2.12, 2.11, 2.10
Data source name:
net.snowflake.spark.snowflake
Package name (for imported classes):
net.snowflake.spark.snowflake
spark-snowflake (GitHub): https://github.com/snowflakedb/spark-snowflake
The developer notes for the different versions are hosted with the source code.
The Snowflake Spark Connector generally supports the three most recent versions of Spark. Download a version of the connector that is specific to your Spark version.
For example, to use version 2.4.14 of the connector with the older Spark version 2.2, download the 2.4.14-spark_2.2 version of the connector.
To install and use Snowflake with Spark, you need the following:
Snowflake Connector for Spark.
Snowflake JDBC Driver.
Apache Spark environment, either self-hosted or hosted in any of the following: Amazon EMR, Databricks, or Qubole.
In addition, you can use a dedicated AWS S3 bucket or Azure Blob storage container as a staging zone between the two systems; however, this is not required with version 2.2.0 (and higher) of the connector, which by default uses a temporary Snowflake internal stage for all data exchange (see the sketch after this list).
The role used in the connection needs USAGE and CREATE STAGE privileges on the schema that contains the table that you will read from or write to.
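As an illustration of the requirements above, the following is a minimal Scala sketch of reading a Snowflake table through the connector. The connection values are placeholders, and because the sketch assumes connector version 2.2.0 or higher, no external staging location is configured; the connector uses a temporary internal stage by default.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("snowflake-connector-example").getOrCreate()

// Placeholder connection options; replace with values for your account.
val sfOptions = Map(
  "sfURL"       -> "<account>.snowflakecomputing.com",
  "sfUser"      -> "<user_name>",
  "sfPassword"  -> "<password>",
  "sfDatabase"  -> "<database>",
  "sfSchema"    -> "<schema>",
  "sfWarehouse" -> "<warehouse>"
)

// No "tempdir" option is set: with connector 2.2.0 (and higher), data exchange
// goes through a temporary Snowflake internal stage by default.
val df = spark.read
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)
  .option("dbtable", "<table_name>")
  .load()

df.show()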
If you are using Databricks or Qubole to host Spark, you do not need to download or install the Snowflake Connector for Spark (or any of the other requirements). Both Databricks and Qubole have integrated the connector to provide native connectivity.
For more details, see the Databricks and Qubole documentation.
Verifying the OCSP Connector or Driver Version¶
Snowflake uses OCSP to evaluate the certificate chain when making a connection to Snowflake. The driver or connector version and its configuration both determine the OCSP behavior. For more information about the driver or connector version, their configuration, and OCSP behavior, see OCSP Client & Driver Configuration.
Downloading and Installing the Connector¶
The instructions in this section pertain to version 2.x and higher of the Snowflake Connector for Spark.
Snowflake periodically releases new versions of the connector. The following installation tasks must be performed each time you install a new version. This also applies to the Snowflake JDBC driver, which is a prerequisite for the connector.
Step 1: Download the Latest Version of the Snowflake JDBC Driver¶
The Snowflake JDBC Driver is required in order to use the Snowflake Spark Connector.
The Snowflake JDBC driver is provided as a standard Java package through the Maven Central Repository. You can either download the package as a .jar file or you can directly reference the package. These instructions assume you are referencing the package.
For more details, see Downloading / Integrating the JDBC Driver.
Step 2: Download the Latest Version of the Snowflake Connector for Spark¶
Snowflake provides multiple versions of the connector. You will need to download the appropriate version, based on the following:
Version of the Snowflake Connector for Spark you wish to use.
Version of Spark you are using.
Version of Scala you are using.
The Snowflake Spark Connector can be downloaded from either Maven or the Spark Packages web site. The source code can be downloaded from GitHub.
Maven Central Repository¶
Separate package artifacts are provided for each supported Scala version (2.10, 2.11, and 2.12). Download the latest version of the connector for your Scala version from the Maven Central Repository.
The individual packages use the following naming convention:
N.N.N-spark_P.P
where:
N.N.N is the Snowflake connector version (e.g. 2.4.14).
P.P is the Spark version (e.g. 2.4).
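If you build your Spark application with sbt rather than passing --packages to spark-shell, the same Maven coordinates can be referenced in the build definition. The following is a minimal sketch assuming Scala 2.12, Spark 2.4, and the example versions used in this topic; substitute the versions you actually need.

// build.sbt (sketch): example versions only; check Maven for the latest releases.
libraryDependencies ++= Seq(
  "net.snowflake" % "snowflake-jdbc" % "3.8.0",
  "net.snowflake" % "spark-snowflake_2.12" % "2.4.14-spark_2.4"
)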
The latest version of the connector can also be downloaded from the Spark Packages web site.
Snowflake uses the following naming conventions for the packages:
N.N.N is the Snowflake version (e.g. 2.4.5).
C.C is the Scala version (e.g. 2.11).
P.P is the earlier Spark version (e.g. 2.2).
The source code for the Spark Snowflake Connector is available on GitHub. However, the compiled packages are not available on GitHub. You can download the compiled packages from Maven or the Spark Packages web site as described in the previous sections (in this topic).
Step 3 (Optional): Verify the Snowflake Connector for Spark Package Signature (Linux Only)¶
The macOS and Windows operating systems can verify the installer signature automatically, so GPG signature verification is not needed.
To optionally verify the Snowflake Connector for Spark package signature for Linux:
Download and import the latest Snowflake GPG public key from the public keyserver:
$ gpg --keyserver hkp://keys.gnupg.net --recv-keys EC218558EABB25A1
If reinstalling Spark Snowflake Connector version 2.4.12 or lower, use GPG key ID 93DB296A69BE019A instead of EC218558EABB25A1.
Download the GPG signature along with the connector JAR file and verify the signature:
$ gpg --verify spark-snowflake_2.12-2.4.14-spark_2.2.jar.asc spark-snowflake_2.12-2.4.14-spark_2.2.jar
gpg: Signature made Wed 22 Feb 2017 04:31:58 PM UTC using RSA key ID EC218558EABB25A1
gpg: Good signature from "Snowflake Computing <email@example.com>"
Your local environment can contain multiple GPG keys; however, for security reasons, Snowflake periodically rotates the public GPG key. As a best practice, we recommend deleting the existing public key after confirming that the latest key works with the latest signed package. For example:
$ gpg --delete-key "Snowflake Computing"
Step 4: Configure the Local Spark Cluster or Amazon EMR-hosted Spark Environment¶
If you have a local Spark installation, or a Spark installation in Amazon EMR, you need to configure the spark-shell program to include both the Snowflake JDBC driver and the Spark Connector:
To include the Snowflake JDBC driver, use the --packages option to reference the JDBC package hosted in the Maven Central Repository, providing the exact version of the driver you wish to use (e.g. net.snowflake:snowflake-jdbc:3.8.0).
To include the Spark Connector, use the --packages option to reference the appropriate package (Scala 2.10, Scala 2.11, or Scala 2.12) hosted in the Maven Central Repository, providing the exact version of the connector you want to use (e.g. net.snowflake:spark-snowflake_2.12:2.4.14).
For example:
spark-shell --packages net.snowflake:snowflake-jdbc:3.8.0,net.snowflake:spark-snowflake_2.12:2.4.14
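Once spark-shell starts with both packages on the classpath, a quick sanity check is to import the connector's classes and print the data source name; the Utils class below is part of the connector's net.snowflake.spark.snowflake package.

// Inside spark-shell: confirm the connector classes resolve.
import net.snowflake.spark.snowflake.Utils
println(Utils.SNOWFLAKE_SOURCE_NAME)  // net.snowflake.spark.snowflake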
Installing Additional Packages (If Needed)¶
Depending on your Spark installation, some packages required by the connector may be missing. You can add missing packages to your installation by using the appropriate flag for spark-shell:
--jars (if the packages were downloaded as .jar files)
--packages (if the packages should be referenced from the Maven Central Repository)
The required packages are listed below, with the syntax (including version number) for using the --packages flag to reference the packages:
org.apache.hadoop:hadoop-aws:2.7.1
org.apache.httpcomponents:httpclient:4.3.6
org.apache.httpcomponents:httpcore:4.3.3
For example, if the Apache packages are missing, to add the packages by reference:
spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.1,org.apache.httpcomponents:httpclient:4.3.6,org.apache.httpcomponents:httpcore:4.3.3
Preparing an External Location For Files¶
You might need to prepare an external location for files that you want to transfer between Snowflake and Spark.
This task is required if either of the following situations is true:
You will run jobs that take longer than 36 hours, which is the maximum duration for the token used by the connector to access the internal stage for data exchange.
The Snowflake Connector for Spark version is 2.1.x or lower (even if your jobs require less than 36 hours).
If you are not currently using v2.2.0 (or higher) of the connector, Snowflake strongly recommends upgrading to the latest version.
Preparing an AWS External S3 Bucket¶
Prepare an external S3 bucket that the connector can use to exchange data between Snowflake and Spark. You then provide the location information, together with the necessary AWS credentials for the location, to the connector. For more details, see Authenticating S3 for Data Exchange in the next topic.
If you use an external S3 bucket, the connector does not automatically remove any intermediate/temporary data from this location. As a result, it’s best to use a specific bucket or path (prefix) and set a lifecycle policy on the bucket/path to clean up older files automatically. For more details on configuring a lifecycle policy, see the Amazon S3 documentation.
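As a rough sketch of how the external bucket is wired in (the exact credential options are covered in Authenticating S3 for Data Exchange in the next topic), the bucket is passed to the connector through the tempdir option, and the AWS credentials are supplied through Hadoop's S3 filesystem configuration. The bucket, prefix, and keys below are placeholders.

// Placeholder AWS credentials for the staging bucket (see the next topic for the supported options).
spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "<aws_access_key_id>")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "<aws_secret_access_key>")

// Point the connector at the external staging location via the tempdir option.
val dfFromS3Staged = spark.read
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)  // connection options as in the earlier sketch
  .option("tempdir", "s3a://<bucket>/<prefix>")
  .option("dbtable", "<table_name>")
  .load()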
Preparing an Azure Blob Storage Container¶
Prepare an external Azure Blob storage container that the connector can use to exchange data between Snowflake and Spark. You then provide the location information, together with the necessary Azure credentials for the location, to the connector. For more details, see Authenticating Azure for Data Exchange in the next topic.
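The wiring for Azure mirrors the S3 sketch above: supply the storage account key through Hadoop's Azure filesystem configuration and point the tempdir option at the container. The account, container, and key values are placeholders; see Authenticating Azure for Data Exchange for the exact options.

// Placeholder storage account key for the staging container.
spark.sparkContext.hadoopConfiguration.set(
  "fs.azure.account.key.<storage_account>.blob.core.windows.net",
  "<storage_account_key>")

// Point the connector at the external Azure staging location via the tempdir option.
val dfFromAzureStaged = spark.read
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)  // connection options as in the earlier sketch
  .option("tempdir", "wasb://<container>@<storage_account>.blob.core.windows.net/<prefix>")
  .option("dbtable", "<table_name>")
  .load()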