Monitoring Integrations
Cube Cloud allows exporting logs and metrics to external monitoring tools so you can leverage your existing monitoring stack and retain logs and metrics for the long term.
Monitoring integrations are available in Cube Cloud on the Enterprise plan. Users can choose a Monitoring Integrations tier.
Monitoring integrations are suspended when a deployment auto-suspends.
Under the hood, Cube Cloud uses Vector, an open-source tool for collecting and delivering monitoring data. It supports a wide range of destinations, also known as sinks.
Guides
Monitoring integrations work with various popular monitoring tools. Check the tool-specific guides and configuration examples for instructions.
Configuration
To enable monitoring integrations, navigate to Settings → Monitoring Integrations and click Enable Vector to add a Vector agent to your deployment. You can use the dropdown to select a Monitoring Integrations tier.
Under Metrics export, you will see credentials for the prometheus_exporter sink, in case you'd like to set up metrics export.
Additionally, create a vector.toml configuration file next to your cube.js file. This file holds the sink configuration. You have to commit this file to the main branch of your deployment for the Vector configuration to take effect.
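To illustrate the file's shape, here is a minimal sketch using Vector's console sink (chosen purely for illustration; the tool-specific sinks described below are the intended destinations):

# vector.toml: each [sinks.<id>] table configures one destination
[sinks.my_console_sink]
type = "console"            # Vector's console sink, for illustration only
inputs = ["cubejs-server"]  # export logs from the API instances
encoding.codec = "json"     # emit each log event as JSON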
Environment variables
You can use environment variables prefixed with CUBE_CLOUD_MONITORING_ to reference configuration parameters securely in the vector.toml file.
Example configuration for exporting logs to Datadog:
[sinks.datadog]
type = "datadog_logs"
default_api_key = "$CUBE_CLOUD_MONITORING_DATADOG_API_KEY"
Inputs for logs
Sinks accept the inputs option that allows you to specify which components of a Cube Cloud deployment should export their logs. Supported inputs:
- cubejs-server
- refresh-scheduler
- ext-db
- warmup-job
- cubestore
Example configuration for exporting logs to Datadog:
[sinks.datadog]
type = "datadog_logs"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "ext-db",
  "warmup-job",
  "cubestore"
]
# Example API key shown inline; referencing an environment variable also works
default_api_key = "da8850ce554b4f03ac50537612e48fb1"
compression = "gzip"
When exporting Cube Store logs using the cubestore input, you can filter logs by providing an array of severity levels via the levels option. If not specified, only error and info logs will be exported.
Level | Exported by default?
---|---
error | ✅ Yes
info | ✅ Yes
debug | ❌ No
trace | ❌ No
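For example, to export debug logs from Cube Store in addition to the defaults through the Datadog sink above, you could add a cubestore table with explicit levels (a sketch following the same pattern as the Azure example below):

# Export debug logs in addition to the default error and info levels
[sinks.datadog.cubestore]
levels = ["error", "info", "debug"]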
If you'd like to adjust the severity levels of logs from API instances and the refresh scheduler, use the CUBEJS_LOG_LEVEL environment variable (e.g., CUBEJS_LOG_LEVEL=trace).
Sinks for logs
You can use a wide range of destinations for logs, including the following:
- AWS CloudWatch
- AWS S3, Google Cloud Storage, and Azure Blob Storage
- Datadog
Example configuration for exporting all logs, including all Cube Store logs, to Azure Blob Storage:
[sinks.azure]
type = "azure_blob"
container_name = "my-logs"
connection_string = "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "ext-db",
  "warmup-job",
  "cubestore"
]

[sinks.azure.cubestore]
levels = [
  "trace",
  "info",
  "debug",
  "error"
]
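For comparison, here is a hedged sketch for AWS S3; the bucket name, region, and the CUBE_CLOUD_MONITORING_-prefixed variable names are placeholders you would define yourself:

[sinks.s3]
type = "aws_s3"
bucket = "my-cube-logs"  # hypothetical bucket name
region = "us-east-1"
compression = "gzip"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "ext-db",
  "warmup-job",
  "cubestore"
]

[sinks.s3.auth]
# Hypothetical variable names; define them in your deployment
access_key_id = "$CUBE_CLOUD_MONITORING_AWS_ACCESS_KEY_ID"
secret_access_key = "$CUBE_CLOUD_MONITORING_AWS_SECRET_ACCESS_KEY"

[sinks.s3.encoding]
codec = "json"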
Inputs for metrics
Metrics are exported using the metrics input. You can filter them by providing an array of metric names via the list option.
Name | Type | Applies to | Description
---|---|---|---
cpu | gauge | Node of a deployment | Percent of free CPU against requests
memory | gauge | Node of a deployment | Percent of free memory against requests
requests-count | counter | Deployment | Total number of processed requests
requests-errors-count | counter | Deployment | Number of requests processed with errors
requests-success-count | counter | Deployment | Number of requests processed successfully
requests-duration | counter | Deployment | Total time taken to process requests (seconds)
You can further filter exported metrics by providing an array of inputs that applies to metrics only.
Example configuration for exporting all metrics from cubejs-server to Prometheus using the prometheus_remote_write sink:
[sinks.prometheus]
type = "prometheus_remote_write"
inputs = [
  "metrics"
]
endpoint = "https://prometheus.example.com:8087/api/v1/write"

[sinks.prometheus.auth]
# Strategy, credentials, etc.

[sinks.prometheus.metrics]
list = [
  "cpu",
  "memory",
  "requests-count",
  "requests-errors-count",
  "requests-success-count",
  "requests-duration"
]
inputs = [
  "cubejs-server"
]
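The auth table above is left as a placeholder. As one possibility, a basic-auth setup (for example, against a Grafana Cloud endpoint) might look like the sketch below; the variable names are hypothetical and would need to be defined in your deployment:

[sinks.prometheus.auth]
strategy = "basic"
# Hypothetical variable names; define them in your deployment
user = "$CUBE_CLOUD_MONITORING_PROMETHEUS_USER"
password = "$CUBE_CLOUD_MONITORING_PROMETHEUS_PASSWORD"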
Sinks for metrics
Metrics are exported in the Prometheus format, which is compatible with the following sinks:
- prometheus_exporter (native to Prometheus, compatible with Mimir)
- prometheus_remote_write (compatible with Grafana Cloud)
Example configuration for exporting all metrics from cubejs-server to Prometheus using the prometheus_exporter sink:
[sinks.prometheus]
type = "prometheus_exporter"
inputs = [
  "metrics"
]

[sinks.prometheus.metrics]
list = [
  "cpu",
  "memory",
  "requests-count",
  "requests-errors-count",
  "requests-success-count",
  "requests-duration"
]
inputs = [
  "cubejs-server"
]
Navigate to Settings → Monitoring Integrations to get the prometheus_exporter credentials under Metrics export.
You can also customize the user name and password for prometheus_exporter by setting the CUBE_CLOUD_MONITORING_METRICS_USER and CUBE_CLOUD_MONITORING_METRICS_PASSWORD environment variables, respectively.