Monitoring Integrations

Cube Cloud allows exporting logs and metrics to external monitoring tools so you can leverage your existing monitoring stack and retain logs and metrics for the long term.

Monitoring integrations are available in Cube Cloud on Enterprise and above product tiers. You can also choose a Monitoring Integrations tier.

Monitoring integrations are suspended while a deployment is auto-suspended.

Monitoring integrations are only available for production environments.

Under the hood, Cube Cloud uses Vector, an open-source tool for collecting and delivering monitoring data. It supports a wide range of destinations, also known as sinks.

Guides

Monitoring integrations work with various popular monitoring tools. See the tool-specific guides and configuration examples for detailed instructions.

Configuration

To enable monitoring integrations, navigate to Settings → Monitoring Integrations and click Enable Vector to add a Vector agent to your deployment. You can use the dropdown to select a Monitoring Integrations tier.

Under Metrics export, you will see credentials for the prometheus_exporter sink, in case you'd like to set up metrics export.

Additionally, create a vector.toml configuration file next to your cube.js file. This file keeps the sink configuration. Commit this file to the main branch of your deployment for the Vector configuration to take effect.
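For example, a minimal vector.toml with a single sink might look like this (a sketch; the sink name my_console is arbitrary, and the console sink simply writes events to the Vector agent's standard output):

[sinks.my_console]
type = "console"
inputs = [
  "cubejs-server"
]
target = "stdout"
encoding = { codec = "json" }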

Environment variables

You can use environment variables prefixed with CUBE_CLOUD_MONITORING_ to reference configuration parameters securely in the vector.toml file.

Example configuration for exporting logs to Datadog:

[sinks.datadog]
type = "datadog_logs"
default_api_key = "$CUBE_CLOUD_MONITORING_DATADOG_API_KEY"

Inputs for logs

Sinks accept the inputs option, which allows you to specify which components of a Cube Cloud deployment should export their logs:

| Input name | Description |
| --- | --- |
| cubejs-server | Logs of API instances |
| refresh-scheduler | Logs of the refresh worker |
| warmup-job | Logs of the pre-aggregation warm-up |
| cubestore | Logs of Cube Store |
| query-history | Query History export |

Example configuration for exporting logs to Datadog:

[sinks.datadog]
type = "datadog_logs"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "warmup-job",
  "cubestore"
]
default_api_key = "da8850ce554b4f03ac50537612e48fb1"
compression = "gzip"

When exporting Cube Store logs using the cubestore input, you can filter logs by providing an array of their severity levels via the levels option. If not specified, only error and info logs will be exported.

| Level | Exported by default? |
| --- | --- |
| error | ✅ Yes |
| info | ✅ Yes |
| debug | ❌ No |
| trace | ❌ No |
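For example, extending the Datadog sink above, a minimal sketch that exports only Cube Store error logs:

[sinks.datadog.cubestore]
levels = [
  "error"
]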

If you'd like to adjust severity levels of logs from API instances and the refresh scheduler, use the CUBEJS_LOG_LEVEL environment variable.
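For example, to get the most verbose output (a sketch; the variable is set in your deployment's environment variables):

CUBEJS_LOG_LEVEL=trace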

Sinks for logs

You can use a wide range of destinations for logs.

Example configuration for exporting all logs, including all Cube Store logs, to Azure Blob Storage:

[sinks.azure]
type = "azure_blob"
container_name = "my-logs"
connection_string = "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net"
inputs = [
  "cubejs-server",
  "refresh-scheduler",
  "warmup-job",
  "cubestore"
]
 
[sinks.azure.cubestore]
levels = [
  "trace",
  "info",
  "debug",
  "error"
]

Inputs for metrics

Metrics are exported using the metrics input. Metrics have their respective metric names and types: gauge or counter.

All metrics of the counter type reset to zero at midnight (UTC) and increment during the next 24 hours. Prometheus-style query functions such as rate() and increase() account for such counter resets automatically.

You can filter metrics by providing an array of input names via the list option.

| Input name | Metric name, type | Description |
| --- | --- | --- |
| cpu | cube_cpu_usage_ratio, gauge | CPU usage of a particular node in the deployment. Usually, a number in the 0-100 range. May exceed 100 if the node is under load |
| memory | cube_memory_usage_ratio, gauge | Memory usage of a particular node in the deployment. Usually, a number in the 0-100 range. May exceed 100 if the node is under load |
| requests-count | cube_requests_total, counter | Number of API requests to the deployment |
| requests-success-count | cube_requests_success_total, counter | Number of successful API requests to the deployment |
| requests-errors-count | cube_requests_errors_total, counter | Number of erroneous API requests to the deployment |
| requests-duration | cube_requests_duration_ms_total, counter | Total time taken to process API requests, milliseconds |
| requests-success-duration | cube_requests_duration_ms_success, counter | Total time taken to process successful API requests, milliseconds |
| requests-errors-duration | cube_requests_duration_ms_errors, counter | Total time taken to process erroneous API requests, milliseconds |
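For example, since all counters reset at midnight (UTC), dividing cube_requests_duration_ms_total by cube_requests_total gives the average API request duration, in milliseconds, since midnight.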

You can further filter exported metrics by providing an array of inputs; this filter applies to metrics only.

Example configuration for exporting all metrics from cubejs-server to Prometheus using the prometheus_remote_write sink:

[sinks.prometheus]
type = "prometheus_remote_write"
inputs = [
  "metrics"
]
endpoint = "https://prometheus.example.com:8087/api/v1/write"
 
[sinks.prometheus.auth]
# Strategy, credentials, etc.
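# For example, basic authentication (a sketch; the variable names below
# are hypothetical and must be prefixed with CUBE_CLOUD_MONITORING_):
# strategy = "basic"
# user = "$CUBE_CLOUD_MONITORING_PROMETHEUS_USER"
# password = "$CUBE_CLOUD_MONITORING_PROMETHEUS_PASSWORD"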
 
[sinks.prometheus.metrics]
list = [
  "cpu",
  "memory",
  "requests-count",
  "requests-errors-count",
  "requests-success-count",
  "requests-duration"
]
inputs = [
  "cubejs-server"
]

Sinks for metrics

Metrics are exported in the Prometheus format, which is compatible with sinks such as prometheus_exporter and prometheus_remote_write.

Example configuration for exporting all metrics from cubejs-server to Prometheus using the prometheus_exporter sink:

[sinks.prometheus]
type = "prometheus_exporter"
inputs = [
  "metrics"
]
 
[sinks.prometheus.metrics]
list = [
  "cpu",
  "memory",
  "requests-count",
  "requests-errors-count",
  "requests-success-count",
  "requests-duration"
]
inputs = [
  "cubejs-server"
]

Navigate to Settings → Monitoring Integrations to get the credentials for the prometheus_exporter sink under Metrics export.

You can also customize the username and password for prometheus_exporter by setting the CUBE_CLOUD_MONITORING_METRICS_USER and CUBE_CLOUD_MONITORING_METRICS_PASSWORD environment variables, respectively.

Query History export

With Query History export, you can bring Query History data to an external monitoring solution for further analysis, for example:

  • Detect queries that do not hit pre-aggregations.
  • Set up alerts for queries that exceed a certain duration.
  • Attribute usage to specific users and implement chargebacks.

Query History export requires the M tier of Monitoring Integrations.

To configure Query History export, add the query-history input to the inputs option of the sink configuration. Example configuration for exporting Query History data to the standard output of the Vector agent:

[sinks.my_console]
type = "console"
inputs = [
  "query-history"
]
target = "stdout"
encoding = { codec = "json" }

Exported data includes the following fields:

| Field | Description |
| --- | --- |
| trace_id | Unique identifier of the API request. |
| account_name | Name of the Cube Cloud account. |
| deployment_id | Identifier of the deployment. |
| environment_name | Name of the environment, NULL for production. |
| api_type | Type of data API used (rest, sql, etc.), NULL for errors. |
| api_query | Query executed by the API, represented as a string. |
| security_context | Security context of the request, represented as a string. |
| status | Status of the request: success or error. |
| error_message | Error message, if any. |
| start_time_unix_ms | Start time of the execution, Unix timestamp in milliseconds. |
| end_time_unix_ms | End time of the execution, Unix timestamp in milliseconds. |
| api_response_duration_ms | Duration of the execution in milliseconds. |
| cache_type | Cache type: no_cache, pre_aggregations_in_cube_store, etc. |
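For example, building on the console sink above, a hedged sketch that uses a Vector filter transform to export only the queries that did not hit a cache or a pre-aggregation (assuming cache_type is a top-level field of each exported event):

[transforms.uncached_queries]
type = "filter"
inputs = [
  "query-history"
]
condition = '.cache_type == "no_cache"'

[sinks.my_console]
type = "console"
inputs = [
  "uncached_queries"
]
target = "stdout"
encoding = { codec = "json" }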

See this recipe for an example of analyzing data from Query History export.