- The account ID for the Snowflake account
- The warehouse name in the Snowflake account
- The region for the Snowflake warehouse
- The username/password for the Snowflake account
Add the following to a `.env` file in your Cube project:

```dotenv
CUBEJS_DB_TYPE=snowflake
CUBEJS_DB_SNOWFLAKE_ACCOUNT=XXXXXXXXX.us-east-1
CUBEJS_DB_SNOWFLAKE_WAREHOUSE=MY_SNOWFLAKE_WAREHOUSE
CUBEJS_DB_NAME=my_snowflake_database
CUBEJS_DB_USER=snowflake_user
CUBEJS_DB_PASS=**********
```
The following fields are required when creating a Snowflake connection: the account identifier, warehouse, database name, username, and password.

Cube Cloud also supports connecting to data sources within private VPCs. If you already have VPCs enabled in your account, check out the VPC documentation to learn how to get started.

In some cases, you'll need to allow connections from your Cube Cloud deployment IP address to your database. You can copy the IP address from either the Database Setup step during deployment creation or from Settings → Configuration in your deployment.
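If access to your database is governed by a Snowflake network policy, allowlisting that IP might look like the following sketch. The policy name, user name, and IP address below are hypothetical placeholders:

```sql
-- Sketch: allow a Cube Cloud deployment IP through a Snowflake network policy.
-- Replace 203.0.113.10 with the IP shown in your Cube Cloud deployment settings.
CREATE NETWORK POLICY cube_cloud_policy
  ALLOWED_IP_LIST = ('203.0.113.10');

-- Apply the policy to the user Cube connects as (hypothetical user name).
ALTER USER snowflake_user SET NETWORK_POLICY = cube_cloud_policy;
```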
| Environment Variable | Description | Possible Values | Required | Supports multiple data sources? |
| --- | --- | --- | :-: | :-: |
| `CUBEJS_DB_SNOWFLAKE_ACCOUNT` | The Snowflake account identifier to use when connecting to the database | A valid Snowflake account ID | ✅ | ✅ |
| `CUBEJS_DB_SNOWFLAKE_REGION` | The Snowflake region to use when connecting to the database | A valid Snowflake region | ❌ | ✅ |
| `CUBEJS_DB_SNOWFLAKE_WAREHOUSE` | The Snowflake warehouse to use when connecting to the database | A valid Snowflake warehouse in the account | ✅ | ✅ |
| `CUBEJS_DB_SNOWFLAKE_ROLE` | The Snowflake role to use when connecting to the database | A valid Snowflake role in the account | ❌ | ✅ |
| `CUBEJS_DB_NAME` | The name of the database to connect to | A valid database name | ✅ | ✅ |
| `CUBEJS_DB_USER` | The username used to connect to the database | A valid database username | ✅ | ✅ |
| `CUBEJS_DB_PASS` | The password used to connect to the database | A valid database password | ✅ | ✅ |
| `CUBEJS_DB_SNOWFLAKE_AUTHENTICATOR` | The type of authenticator to use with Snowflake. Use `SNOWFLAKE` with username/password, or `SNOWFLAKE_JWT` with key pairs. Defaults to `SNOWFLAKE` | `SNOWFLAKE`, `SNOWFLAKE_JWT` | ❌ | ✅ |
| `CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY_PATH` | The path to the private RSA key folder | A valid path to the private RSA key | ❌ | ✅ |
| `CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY_PASS` | The password for the private RSA key. Only required for encrypted keys | A valid password for the encrypted private RSA key | ❌ | ✅ |
| `CUBEJS_CONCURRENCY` | The number of concurrent connections each queue has to the database. Default is | A valid number | ❌ | ❌ |
| `CUBEJS_DB_MAX_POOL` | The maximum number of concurrent database connections to pool. Default is | A valid number | ❌ | ✅ |
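For instance, a connection using key pair authentication via the `CUBEJS_DB_SNOWFLAKE_AUTHENTICATOR` and private key variables above might look like the following sketch; the key path is a placeholder:

```dotenv
# Sketch: key pair authentication instead of a password.
# The private key path below is a placeholder.
CUBEJS_DB_TYPE=snowflake
CUBEJS_DB_SNOWFLAKE_ACCOUNT=XXXXXXXXX.us-east-1
CUBEJS_DB_SNOWFLAKE_WAREHOUSE=MY_SNOWFLAKE_WAREHOUSE
CUBEJS_DB_NAME=my_snowflake_database
CUBEJS_DB_USER=snowflake_user
CUBEJS_DB_SNOWFLAKE_AUTHENTICATOR=SNOWFLAKE_JWT
CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY_PATH=/path/to/rsa_key.p8
CUBEJS_DB_SNOWFLAKE_PRIVATE_KEY_PASS=**********
```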
Measures of type `count_distinct_approx` can be used in pre-aggregations when using Snowflake as a source database. To learn more about Snowflake's support for approximate aggregate functions, click here.
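As an illustrative sketch (the cube, column, and pre-aggregation names below are hypothetical), a `count_distinct_approx` measure can be referenced from a pre-aggregation in the data model like this:

```yaml
cubes:
  - name: orders
    sql_table: public.orders

    measures:
      # Approximate distinct count, backed by Snowflake's
      # approximate aggregate functions
      - name: unique_users
        sql: user_id
        type: count_distinct_approx

    dimensions:
      - name: created_at
        sql: created_at
        type: time

    pre_aggregations:
      # Rollup that pre-computes the approximate distinct count by day
      - name: users_per_day
        measures:
          - CUBE.unique_users
        time_dimension: CUBE.created_at
        granularity: day
```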
To learn more about pre-aggregation build strategies, head here.
| Feature | Works with read-only mode? | Is default? |
| --- | :-: | :-: |
| Simple | ❌ | ❌ |
| Batching | ❌ | ✅ |
| Export bucket | ❌ | ❌ |
By default, Snowflake uses batching to build pre-aggregations. No extra configuration is required to enable batching for Snowflake.
Ensure the AWS credentials are correctly configured in IAM to allow reads and writes to the export bucket in S3.
```dotenv
CUBEJS_DB_EXPORT_BUCKET_TYPE=s3
CUBEJS_DB_EXPORT_BUCKET=my.bucket.on.s3
CUBEJS_DB_EXPORT_BUCKET_AWS_KEY=<AWS_KEY>
CUBEJS_DB_EXPORT_BUCKET_AWS_SECRET=<AWS_SECRET>
CUBEJS_DB_EXPORT_BUCKET_AWS_REGION=<AWS_REGION>
```
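For reference, a minimal IAM policy granting those reads and writes might look like the following sketch; the bucket name matches the example above, and the exact set of actions you need may vary:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CubeExportBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my.bucket.on.s3",
        "arn:aws:s3:::my.bucket.on.s3/*"
      ]
    }
  ]
}
```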
When using an export bucket, remember to assign the Storage Object Admin role to the GCP service account credentials used by Cube (`CUBEJS_DB_EXPORT_GCS_CREDENTIALS`).

Before configuring Cube, an integration must be created and configured in Snowflake. Take note of the integration name (`gcs_int` from the example link) as you'll need it to configure Cube.
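As a sketch of what that integration might look like (the bucket path is an assumption; see Snowflake's documentation for the full procedure):

```sql
-- Sketch: a Snowflake storage integration for a GCS export bucket.
-- The bucket path below is a placeholder; use your own bucket.
CREATE STORAGE INTEGRATION gcs_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'GCS'
  ENABLED = TRUE
  STORAGE_ALLOWED_LOCATIONS = ('gcs://snowflake-export-bucket/');

-- Shows the service account Snowflake uses, so you can grant it bucket access.
DESC STORAGE INTEGRATION gcs_int;
```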
Once the Snowflake integration is set up, configure Cube using the following:
```dotenv
CUBEJS_DB_EXPORT_BUCKET=snowflake-export-bucket
CUBEJS_DB_EXPORT_BUCKET_TYPE=gcp
CUBEJS_DB_EXPORT_GCS_CREDENTIALS=<BASE64_ENCODED_SERVICE_CREDENTIALS_JSON>
CUBEJS_DB_EXPORT_INTEGRATION=gcs_int
```
Cube does not require any additional configuration to enable SSL as Snowflake connections are made over HTTPS.