It’s well known that performing analytics on production RDBMS instances is prohibitive due to locking and performance issues. Replication support in modern RDBMSs made this a little easier, but still far from ideal. The next evolutionary step here is serverless RDBMSs. Separating processing power from storage brings our understanding of how to work with an RDBMS to a new level. It opens the door to virtually infinite scaling of the DB infrastructure and lets us run OLTP and OLAP workloads at the same time.
In this article, we perform an analytics performance benchmark of AWS Aurora MySQL using the Cube.js Serverless Analytics Framework.
TL;DR: An unoptimized 100M-row test dataset served by Serverless MySQL Aurora can be queried in 176 seconds. Query times under 200ms are achievable for the same dataset using a multi-stage querying approach.
Aurora MySQL Setup
Setting up Aurora MySQL is fairly simple: go to the RDS service in the AWS Console and select Create database.
Here you should select the Amazon Aurora engine with the MySQL-compatible edition that supports Serverless deployment. Then choose Serverless and enter the credentials to access your DB. That’s it! Your Aurora instance is configured and ready to launch.
Preparing your dataset
For the purpose of testing, we’ll generate a 100M-row dataset out of the Sakila Sample Database. To do that, we’ll use slightly modified versions of the Sakila tables.
First we’ll introduce the customer and payment table definitions.
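A hedged sketch of the trimmed definitions, based on the standard Sakila schema (the exact column sets kept here are assumptions):

```sql
-- Hedged sketch; column sets are assumptions based on the Sakila schema.
CREATE TABLE customer (
  customer_id SMALLINT UNSIGNED NOT NULL PRIMARY KEY,
  first_name VARCHAR(45) NOT NULL,
  last_name VARCHAR(45) NOT NULL
);

CREATE TABLE payment (
  payment_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id SMALLINT UNSIGNED NOT NULL,
  amount DECIMAL(5,2) NOT NULL,
  payment_date DATETIME NOT NULL
);
```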
Quick hint: Aurora can’t be accessed via a public IP and is reachable only from within its VPC. To connect from your machine, use a VPN/SSH tunnel, or use the AWS RDS query editor, which works right in the browser.
To fill the customer and payment tables, we’ll use a sample of the first 600 customer rows and the first 500 payment rows from the Sakila Sample Database.
We’ll then use the payment table to generate the actual records for the 100M-row payment_big table. Its definition is very similar to payment’s.
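A hedged sketch of payment_big; it mirrors payment, with a wider auto-increment key as an assumption given the 100M rows:

```sql
-- Hedged sketch: mirrors payment; BIGINT key is an assumption for 100M rows.
CREATE TABLE payment_big (
  payment_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id SMALLINT UNSIGNED NOT NULL,
  amount DECIMAL(5,2) NOT NULL,
  payment_date DATETIME NOT NULL
);
```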
As MySQL doesn’t have a generator feature, we’ll introduce some helper views for that: an integer number generator up to 16 and another one up to 256.
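Hedged sketches of both helper views (view and column names are assumptions); the 256-row view is just a cross join of the 16-row view with itself:

```sql
-- Hedged sketch; view and column names are assumptions.
CREATE VIEW num16 AS
  SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
  UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7
  UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10 UNION ALL SELECT 11
  UNION ALL SELECT 12 UNION ALL SELECT 13 UNION ALL SELECT 14 UNION ALL SELECT 15;

-- 256 rows as a cross join of two 16-row views: hi.n * 16 + lo.n covers 0..255
CREATE VIEW num256 AS
  SELECT hi.n * 16 + lo.n AS n
  FROM num16 hi
  CROSS JOIN num16 lo;
```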
To generate the actual payments, we’ll use a SQL statement that generates 2,048,000 rows at once. We’ll call it 50 times to get a table with approximately 100M rows.
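A sketch of such a statement, assuming the helper views are named num16 (16 rows) and num256 (256 rows): each run multiplies the 500 sample payment rows by 4,096 (16 × 256) generator combinations, i.e. 2,048,000 rows per run.

```sql
-- Hedged sketch: 500 payment rows × 256 × 16 = 2,048,000 rows per execution.
-- Shifting payment_date by the generator value spreads rows over time;
-- the exact shift unit is an assumption.
INSERT INTO payment_big (customer_id, amount, payment_date)
SELECT
  p.customer_id,
  p.amount,
  p.payment_date + INTERVAL (a.n * 16 + b.n) MINUTE
FROM payment p
CROSS JOIN num256 a
CROSS JOIN num16 b;
```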
Cube.js is an open-source analytics framework that works as an intermediary between your analytics users and your database. It provides analytic SQL generation, query result caching and execution orchestration, data pre-aggregation, security, an API for fetching query results, and visualization. We’ll use it mostly for caching and pre-aggregation to get a sense of how fast Aurora MySQL analytics can be and what the delay is between data ingestion and getting insight from it.
To create a Cube.js application, you’ll need to install the Cube.js CLI and the Serverless CLI for deployment. Then we’ll create an app.
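A sketch of the setup commands; the app name aurora-benchmark is arbitrary:

```bash
# Install both CLIs globally
npm install -g cubejs-cli
npm install -g serverless

# Scaffold a Cube.js app with the MySQL driver and the Serverless template
cubejs create aurora-benchmark -d mysql -t serverless
cd aurora-benchmark
```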
In serverless.yml you should define the credentials to access your Aurora MySQL and Redis instances, as well as the VPC settings needed to reach those resources. See the Cube.js deployment documentation to learn more.
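A sketch of the relevant serverless.yml section; the environment variable names follow Cube.js conventions, and all bracketed values are placeholders:

```yaml
# Hypothetical sketch; replace the bracketed placeholders with your values.
provider:
  environment:
    CUBEJS_DB_TYPE: mysql
    CUBEJS_DB_HOST: <your-aurora-cluster-endpoint>
    CUBEJS_DB_NAME: <your-db-name>
    CUBEJS_DB_USER: <your-db-user>
    CUBEJS_DB_PASS: <your-db-password>
    CUBEJS_API_SECRET: <your-api-secret>
    REDIS_URL: redis://<your-redis-host>:6379
  vpc:
    securityGroupIds:
      - <security-group-id>
    subnetIds:
      - <subnet-id>
```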
We’ll also introduce Cube.js schema definitions for our benchmark.
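A minimal schema sketch for schema/Payments.js (a Cube.js schema fragment, evaluated by the framework rather than run standalone); it exposes a count measure and a date time dimension over payment_big:

```javascript
// Hedged sketch of schema/Payments.js; cube() is provided by Cube.js.
cube(`Payments`, {
  sql: `SELECT * FROM payment_big`,

  measures: {
    count: {
      type: `count`
    }
  },

  dimensions: {
    date: {
      sql: `payment_date`,
      type: `time`
    }
  }
});
```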
That’s everything we need to deploy it, so let’s do it.
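Deployment is the standard Serverless Framework command (the -v flag just adds verbose output):

```bash
serverless deploy -v
```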
If everything is set up correctly, you should see the deployed service information, including your endpoint URLs, in your shell.
Go to your main GET endpoint link in the browser.
If everything is done right, you should see a code sandbox with a sample pie chart.
Let’s create our time series chart for the Payments cube. To do this, let’s add a moment import to index.js. You should add moment as a dependency as well, using the blue Add Dependency button. After the edits you should get an index.js that queries the Payments cube and renders the result as a chart.
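For reference, a sketch of the Cube.js query object such a chart would send (member names follow the schema in this article; the date range is an arbitrary example):

```javascript
// Sketch of the Cube.js REST query behind a time series chart.
// In the sandbox this object would be passed to cubejsApi.load().
const query = {
  measures: ['Payments.count'],
  timeDimensions: [
    {
      dimension: 'Payments.date',
      granularity: 'day',
      // arbitrary example range; the article's sandbox builds it with moment
      dateRange: ['2017-01-01', '2017-01-31'],
    },
  ],
};

console.log(JSON.stringify(query));
```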
If everything works well, you should see a line chart of payment counts over time.
At this point we have an unoptimized dataset of 100M records in our payment_big table. We’ll use serverless logs to see how long it takes to process queries on the Aurora MySQL side, as Cube.js writes these processing logs by default. To enable log tailing we’ll use the Serverless CLI’s logs command.
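A sketch of the log tail invocation, assuming the generated function is named cubejs:

```bash
serverless logs -t -f cubejs
```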
If we request one month of data, we get a processing time of 176 seconds.
Not too bad for an unoptimized table of 100M data points. Let’s add an index on payment_date to see whether it affects this time.
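A sketch of the index statement; the index name is an assumption:

```sql
-- Index name is an assumption.
CREATE INDEX payment_big_date ON payment_big (payment_date);
```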
The same query with the index is processed in 31 seconds, which is great, but still not fast enough to build a great user experience. What we can do is introduce pre-aggregations for this table by adding a preAggregations block to schema/Payments.js.
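A sketch of such a preAggregations block, matching the rollup described here (the main key name is an assumption; it references the cube’s count measure and date dimension):

```javascript
// Hedged sketch of a Cube.js schema fragment for schema/Payments.js.
preAggregations: {
  main: {
    type: `rollup`,
    measureReferences: [count],
    timeDimensionReference: date,
    granularity: `day`,
    partitionGranularity: `day`
  }
}
```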
It’ll create a rollup table for each day of data and refresh it incrementally. Here partitionGranularity instructs Cube.js to create one rollup table per day of data. The rollup table itself will contain the count measure and the date dimension with day granularity. In fact, every rollup table here will contain just one row, holding the count calculated for its specific day. Let’s deploy it and see how it changes query processing times.
If we request the same interval of data, Cube.js will first build the pre-aggregations for each day, which takes about 1.5 seconds per day. It will then query the union of all the pre-aggregation tables, which takes less than 200ms.
This approach allows us to split the analytics workload into small reusable chunks, avoiding DB bursts thanks to an incremental update strategy.
Although we were playing with just a test dataset, the results correlate well with what we see in production workloads. MySQL is capable of handling several billion data points per table under an analytics workload quite well. In such cases, sub-second query times are achievable, with a time-to-insight delay of several seconds, given properly organized query orchestration.
Although Serverless Aurora MySQL handled an enormous workload during stress testing, we discovered that its workload-routing algorithms aren’t yet smart enough to route queries to the least-loaded nodes, which under certain circumstances can lead to partial database outages during analytics querying. For production environments we recommend provisioned Aurora MySQL with read replicas, or the Parallel Query feature, in order to separate your OLTP and OLAP workloads by design.