The SQL Runner is a tool that allows you to execute SQL queries from Cube Cloud against your data source or Cube Store. It can be used to inform data model development, for ad-hoc querying, and for debugging the SQL queries that Cube generates to execute against the data source.
The SQL Runner is available in Cube Cloud on all tiers.
To execute a query, enter the SQL query in the text area under SQL Editor and click ▶ Run. The query results will be displayed under Results, along with the row count and query execution time:
The SQL Runner can run queries against any configured data source, which is helpful for diagnosing database-specific issues. It can also run queries against Cube Store, which is useful for testing pre-aggregations directly to see if they return the expected results. You can switch between data sources by clicking the dropdown under Data Source:
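As a sketch of the two modes, the queries below show what you might run against a data source versus Cube Store. The table names are hypothetical: actual pre-aggregation table names are generated by Cube, so copy them from your deployment rather than this example.

```sql
-- Against a configured data source: a quick sanity check on a source table.
-- (`orders` is a hypothetical table name.)
SELECT status, COUNT(*) AS orders_count
FROM orders
GROUP BY status;

-- Against Cube Store: inspect a pre-aggregation table directly.
-- (The schema and table name below are illustrative.)
SELECT * FROM prod_pre_aggregations.orders_main LIMIT 10;
```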
The SQL Runner also allows executing queries against configured data sources using a specific security context, which is particularly convenient for debugging queries in a multi-tenant configuration. The SQL Runner can be configured to use predefined security contexts from scheduledRefreshContexts in the configuration file, or a custom context can be provided as a JSON string.
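As a minimal sketch, a cube.js configuration file defining predefined security contexts might look like the following; the tenant values are illustrative:

```javascript
// cube.js — deployment configuration (tenant values are illustrative)
module.exports = {
  // Each object returned here provides a securityContext that the
  // SQL Runner can use when executing queries.
  scheduledRefreshContexts: async () => [
    { securityContext: { tenantId: "acme" } },
    { securityContext: { tenantId: "globex" } },
  ],
};
```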
Specifying a security context is optional, and if none is provided, the query will be executed with the default security context. If one is provided, then the Schema Explorer will reload to reflect the data source available to the security context.
If you have configured scheduledRefreshContexts in your deployment, you can choose a context to execute the query with. Click the dropdown under Security Context, then use the Scheduled Refresh Contexts tab to select an existing context:
The SQL Runner also allows providing an ad-hoc security context as a JSON string. From the same dropdown under Security Context, click the Custom Context tab, enter a valid JSON string, and click Apply:
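For instance, a custom context might be a JSON object like the one below. The keys shown are illustrative; use whatever properties your configuration's access-control logic expects:

```json
{
  "tenantId": "acme",
  "roles": ["analyst"]
}
```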
The Schema Explorer allows you to view details of the data source's schema, including tables and their columns and types. This is useful for ensuring that the properties of data models match the underlying schema (e.g., that a member defined in a data model maps to an existing column of a compatible type).