Monitoring Integrations
Cube Cloud allows exporting logs and metrics to external monitoring tools so you can leverage your existing monitoring stack and retain logs and metrics for the long term.
Monitoring integrations are available in Cube Cloud on Enterprise and above product tiers. You can also choose a Monitoring Integrations tier.
Monitoring integrations are suspended when a deployment is auto-suspended.
Monitoring integrations are only available for production environments.
Under the hood, Cube Cloud uses Vector, an open-source tool for collecting and delivering monitoring data. It supports a wide range of destinations, also known as sinks.
Guides
Monitoring integrations work with various popular monitoring tools. Check the following guides and configuration examples to get tool-specific instructions:
Configuration
To enable monitoring integrations, navigate to Settings → Monitoring Integrations and click Enable Vector to add a Vector agent to your deployment. You can use the dropdown to select a Monitoring Integrations tier.
Under Metrics export, you will see credentials for the prometheus_exporter sink, in case you'd like to set up metrics export.
Additionally, create a vector.toml configuration file next to your cube.js file. This file is used to keep the sink configuration. You have to commit this file to the main branch of your deployment for the Vector configuration to take effect.
Environment variables
You can use environment variables prefixed with CUBE_CLOUD_MONITORING_ to reference configuration parameters securely in the vector.toml file.
Example configuration for exporting logs to Datadog:
```toml
[sinks.datadog]
type = "datadog_logs"
default_api_key = "$CUBE_CLOUD_MONITORING_DATADOG_API_KEY"
```
Inputs for logs
Sinks accept the inputs option that allows you to specify which components of a Cube Cloud deployment should export their logs:
| Input name | Description |
| --- | --- |
| cubejs-server | Logs of API instances |
| refresh-scheduler | Logs of the refresh worker |
| warmup-job | Logs of the pre-aggregation warm-up |
| cubestore | Logs of Cube Store |
Example configuration for exporting logs to Datadog:
```toml
[sinks.datadog]
type = "datadog_logs"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "warmup-job",
  "cubestore"
]
default_api_key = "da8850ce554b4f03ac50537612e48fb1"
compression = "gzip"
```
When exporting Cube Store logs using the cubestore input, you can filter logs by providing an array of their severity levels via the levels option. If not specified, only error and info logs will be exported.
| Level | Exported by default? |
| --- | --- |
| error | ✅ Yes |
| info | ✅ Yes |
| debug | ❌ No |
| trace | ❌ No |
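For instance, a minimal sketch of adding debug logs to the defaults for a single sink (the sink name my_sink and the Datadog destination are illustrative assumptions, not part of the table above):

```toml
# Hypothetical sink name; substitute the name of your own sink
[sinks.my_sink]
type = "datadog_logs"
inputs = ["cubestore"]
default_api_key = "$CUBE_CLOUD_MONITORING_DATADOG_API_KEY"

# Export debug logs in addition to the default error and info levels
[sinks.my_sink.cubestore]
levels = ["error", "info", "debug"]
```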
If you'd like to adjust severity levels of logs from API instances and the refresh scheduler, use the CUBEJS_LOG_LEVEL environment variable.
Sinks for logs
You can use a wide range of destinations for logs, including the following ones:
- AWS Cloudwatch
- AWS S3, Google Cloud Storage, and Azure Blob Storage
- Datadog
Example configuration for exporting all logs, including all Cube Store logs, to Azure Blob Storage:
```toml
[sinks.azure]
type = "azure_blob"
container_name = "my-logs"
connection_string = "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "warmup-job",
  "cubestore"
]

[sinks.azure.cubestore]
levels = [
  "trace",
  "info",
  "debug",
  "error"
]
```
Inputs for metrics
Metrics are exported using the metrics input. Metrics will have their respective metric names and types: gauge or counter.
All metrics of the counter type reset to zero at midnight (UTC) and increment during the next 24 hours.
You can filter metrics by providing an array of input names via the list option.
| Input name | Metric name, type | Description |
| --- | --- | --- |
| cpu | cube_cpu_usage_ratio, gauge | CPU usage of a particular node in the deployment. Usually, a number in the 0–100 range. May exceed 100 if the node is under load |
| memory | cube_memory_usage_ratio, gauge | Memory usage of a particular node in the deployment. Usually, a number in the 0–100 range. May exceed 100 if the node is under load |
| requests-count | cube_requests_total, counter | Number of API requests to the deployment |
| requests-success-count | cube_requests_success_total, counter | Number of successful API requests to the deployment |
| requests-errors-count | cube_requests_errors_total, counter | Number of erroneous API requests to the deployment |
| requests-duration | cube_requests_duration_ms_total, counter | Total time taken to process API requests, milliseconds |
| requests-success-duration | cube_requests_duration_ms_success, counter | Total time taken to process successful API requests, milliseconds |
| requests-errors-duration | cube_requests_duration_ms_errors, counter | Total time taken to process erroneous API requests, milliseconds |
You can further filter exported metrics by providing an array of inputs. This applies to metrics only.
Example configuration for exporting all metrics from cubejs-server to Prometheus using the prometheus_remote_write sink:
```toml
[sinks.prometheus]
type = "prometheus_remote_write"
inputs = [
  "metrics"
]
endpoint = "https://prometheus.example.com:8087/api/v1/write"

[sinks.prometheus.auth]
# Strategy, credentials, etc.

[sinks.prometheus.metrics]
list = [
  "cpu",
  "memory",
  "requests-count",
  "requests-errors-count",
  "requests-success-count",
  "requests-duration"
]
inputs = [
  "cubejs-server"
]
```
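The auth section in the example above is left as a placeholder. As a sketch, assuming your Prometheus endpoint uses basic authentication and that you have stored the credentials in hypothetical CUBE_CLOUD_MONITORING_PROMETHEUS_USER and CUBE_CLOUD_MONITORING_PROMETHEUS_PASSWORD environment variables, it could look like this:

```toml
# A sketch assuming basic authentication; the variable names
# CUBE_CLOUD_MONITORING_PROMETHEUS_* are hypothetical
[sinks.prometheus.auth]
strategy = "basic"
user = "$CUBE_CLOUD_MONITORING_PROMETHEUS_USER"
password = "$CUBE_CLOUD_MONITORING_PROMETHEUS_PASSWORD"
```

Keeping the credentials in CUBE_CLOUD_MONITORING_-prefixed environment variables avoids committing secrets to the vector.toml file.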
Sinks for metrics
Metrics are exported in the Prometheus format, which is compatible with the following sinks:
- prometheus_exporter (native to Prometheus, compatible with Mimir)
- prometheus_remote_write (compatible with Grafana Cloud)
Example configuration for exporting all metrics from cubejs-server to Prometheus using the prometheus_exporter sink:
```toml
[sinks.prometheus]
type = "prometheus_exporter"
inputs = [
  "metrics"
]

[sinks.prometheus.metrics]
list = [
  "cpu",
  "memory",
  "requests-count",
  "requests-errors-count",
  "requests-success-count",
  "requests-duration"
]
inputs = [
  "cubejs-server"
]
```
Navigate to Settings → Monitoring Integrations to find the credentials for the prometheus_exporter sink under Metrics export.
You can also customize the user name and password for prometheus_exporter by setting the CUBE_CLOUD_MONITORING_METRICS_USER and CUBE_CLOUD_MONITORING_METRICS_PASSWORD environment variables, respectively.