This is an advanced tutorial. If you are just getting started with Cube, I recommend checking this tutorial first and then coming back here.
One of the most powerful features of Cube is pre-aggregations. Coupled with the data schema, they eliminate the need to organize, denormalize, and transform data before using it with Cube. The pre-aggregation engine builds a layer of aggregated data in your database at runtime and keeps it up to date.
Upon an incoming request, Cube will first look for a relevant pre-aggregation. If it cannot find any, it will build a new one. Once the pre-aggregation is built, all subsequent requests go to the pre-aggregated layer instead of hitting the raw data. This can speed up response times by a factor of hundreds or even thousands.
Pre-aggregations are materialized query results persisted as tables. To start using pre-aggregations, Cube needs write access to the stb_pre_aggregations schema, where pre-aggregation tables will be stored.
Cube also takes care of keeping the pre-aggregation up-to-date. It performs refresh checks and if it finds that a pre-aggregation is outdated, it schedules a refresh in the background.
Creating a Simple Pre-Aggregation
Let’s take a look at an example of how we can use pre-aggregations to improve query performance.
For testing purposes, we will use a Postgres database and will generate around ten million records using the generate_series function.
The following SQL creates a table,
orders, and inserts a sample of generated records into it.
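The original SQL is not reproduced in this excerpt; below is a minimal sketch of what such a table and seed script might look like in Postgres. The column names (amount, created_at) and the random value distribution are assumptions, not the original script.

```sql
-- A sketch of the sample data setup; column names and the
-- amount distribution are assumptions.
CREATE TABLE orders (
  id         serial PRIMARY KEY,
  amount     integer,
  created_at timestamp
);

-- generate_series produces ~10M rows spaced 10 seconds apart
INSERT INTO orders (amount, created_at)
SELECT
  floor(random() * 1000)::int,
  timestamp '2017-01-01' + seq * interval '10 seconds'
FROM generate_series(1, 10000000) AS seq;

-- Index on the time column used for filtering by date range
CREATE INDEX orders_created_at_idx ON orders (created_at);
```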
Next, create a new Cube application if you don’t have any.
Change the content of
.env in the project folder to the following.
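The original .env contents are not shown here; a hypothetical configuration for a local Postgres database could look like the following, where every value is a placeholder to replace with your own credentials:

```
CUBEJS_DB_TYPE=postgres
CUBEJS_DB_HOST=localhost
CUBEJS_DB_NAME=ecom
CUBEJS_DB_USER=cube
CUBEJS_DB_PASS=secret
CUBEJS_API_SECRET=SECRET
CUBEJS_DEV_MODE=true
```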
Finally, generate a schema for the
orders table and start the Cube.js server.
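The exact commands are not included in this excerpt; with the Cube.js CLI they would typically look like this (the orders table name comes from the setup above):

```bash
# Generate a schema file for the orders table
npx cubejs-cli generate -t orders

# Start the Cube.js server in development mode
npm run dev
```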
Now, we can send a query to Cube with the Orders.amount measure and the Orders.createdAt time dimension, with granularity set to month.
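In Cube's JSON query format, such a request might look like this (assuming the measure and dimension names generated for the orders table):

```json
{
  "measures": ["Orders.amount"],
  "timeDimensions": [
    {
      "dimension": "Orders.createdAt",
      "granularity": "month"
    }
  ]
}
```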
Cube will respond with Continue wait, because this query takes more than 5 seconds to process. Let’s look at Cube logs to see exactly how long it took for our Postgres to execute this query.
It took 6,514 milliseconds (6.5 seconds) for Postgres to execute the above query. Although we have an index on the created_at column, it doesn't help much in this particular case, since we're querying almost all the dates we have. The index would help with a smaller date range, but the query would still take seconds, not milliseconds.
We can significantly speed it up by adding a pre-aggregation layer. To do this, add the following preAggregations block to the Orders cube.
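The block itself is not reproduced in this excerpt; given the pre-aggregation table name seen later (orders_amount_by_created), a sketch of it in schema/Orders.js could look like this:

```javascript
cube(`Orders`, {
  sql: `SELECT * FROM public.orders`,

  preAggregations: {
    // Named amountByCreated, so Cube stores it as orders_amount_by_created
    amountByCreated: {
      type: `rollup`,
      measureReferences: [amount],
      timeDimensionReference: createdAt,
      granularity: `month`,
    },
  },

  // ...measures and dimensions generated for the orders table
});
```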
The block above instructs Cube to build and use a rollup type of pre-aggregation when the “Orders.amount” measure and “Orders.createdAt” time dimension (with “month” granularity) are requested together. You can read more about pre-aggregation options in the documentation reference.
Now, once we send the same request, Cube will detect the pre-aggregation declaration and start building it. Once the pre-aggregation is built, Cube will query it and send the result back. All subsequent queries will go to the pre-aggregation layer.
Here is how querying pre-aggregation looks in the Cube logs:
As you can see, now it takes only 5 milliseconds (1,300 times faster) to get the same data.
You can also note that the SQL has changed: it now queries data from stb_pre_aggregations.orders_amount_by_created, the table generated by Cube to store the pre-aggregation for this query. The second query is a DDL statement for this pre-aggregation table.
Cube also takes care of keeping pre-aggregations up to date. On a new request, at most once every two minutes, Cube will initiate a refresh check. By default, pre-aggregations are refreshed every hour; you can set up a custom refresh strategy by using the refreshKey option. If the result of the refresh check differs from the previous one, Cube will rebuild the pre-aggregation in the background and then hot-swap the old one.
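As a sketch, a custom refreshKey on the pre-aggregation from this tutorial could look like this (the every value is just an example):

```javascript
preAggregations: {
  amountByCreated: {
    type: `rollup`,
    measureReferences: [amount],
    timeDimensionReference: createdAt,
    granularity: `month`,
    // Run the refresh check at most once an hour
    refreshKey: {
      every: `1 hour`,
    },
  },
},
```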
This guide is the first step to learning about pre-aggregations and how to start using them in your project. But there is much more you can do with them. You can find the pre-aggregations documentation reference here.
Also, here are some highlights with useful links to help you along the way.
Pre-aggregate queries across multiple cubes
Pre-aggregations work not only for measures and dimensions inside a single cube, but across multiple joined cubes as well. If you have joined cubes, you can reference measures and dimensions from any part of the join tree. The example below shows how the Users.country dimension can be used together with measures from the Orders cube.
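The example is not included in this excerpt; a sketch of such a cross-cube pre-aggregation, assuming an Orders cube joined to a Users cube via a user_id column, could look like this:

```javascript
cube(`Orders`, {
  sql: `SELECT * FROM public.orders`,

  joins: {
    Users: {
      relationship: `belongsTo`,
      sql: `${Orders}.user_id = ${Users}.id`,
    },
  },

  preAggregations: {
    amountByUsersCountry: {
      type: `rollup`,
      measureReferences: [amount],
      // A dimension referenced from the joined Users cube
      dimensionReferences: [Users.country],
      timeDimensionReference: createdAt,
      granularity: `month`,
    },
  },

  // ...measures and dimensions
});
```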
Generate pre-aggregations dynamically
You can instruct Cube to partition pre-aggregations by time using the partitionGranularity option. Instead of a single table for the whole pre-aggregation, Cube will generate a set of smaller tables. This can reduce refresh time and, in the case of BigQuery for example, cost.
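As a sketch, adding partitionGranularity to the pre-aggregation from this tutorial tells Cube to build one table per month instead of a single large table:

```javascript
preAggregations: {
  amountByCreated: {
    type: `rollup`,
    measureReferences: [amount],
    timeDimensionReference: createdAt,
    granularity: `month`,
    // Build a separate pre-aggregation table per month of data
    partitionGranularity: `month`,
  },
},
```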