TL;DR: We decided to move on from Redis®* for queue and in-memory cache management and migrate them to Cube Store. That means we’ll deprecate Redis for cache and queue management later this year and fully remove its support in new versions of Cube in 2023.

Cube’s mission is to be a semantic layer that powers your data applications with consistent, secure, and performant data. One of the ways we’ve been guaranteeing data performance has been with Cube’s queue and in-memory caching layers. Until now, these components were based on Redis, an open source, in-memory data store.

However, as Cube has been scaling and seeing significant expansion in its use cases, we’ve realized using Redis doesn’t make sense anymore. Instead, we’ve come to need an internal, more specialized solution to avoid unnecessary dependencies and execute operations smoothly. And so, we’re beginning to sunset its support soon—and are excited to migrate queue management and in-memory cache to Cube Store.

* Redis is a registered trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Cube is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Cube.

How Cube’s in-memory cache and queue management work

Let’s start with some context. Cube has a two-level caching system: pre-aggregations and in-memory cache. There are also Cube’s queues, which sit in front of databases to provide a concurrency burst safety net in case there are too many simultaneous requests. Cube’s in-memory cache and queues allow you to scale stateless API instances horizontally and make APIs idempotent; they are currently powered by Redis.

Cube’s queues act as a buffer for your database when there's a burst of requests hitting the same data from multiple concurrent users. Their job is to ensure that the data sources are not overloaded and that multi-stage queries are executed and refreshed in the correct order. Cube’s in-memory cache solves a pretty similar problem, offloading query workloads from databases for queries that shouldn’t be refreshed yet according to refreshKey rules. The in-memory cache also offloads cluster coordination costs for query execution, whose overhead is noticeable in deployments with high concurrency.
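For context, refreshKey rules live in the data model. Here’s a minimal sketch (the cube name, SQL, and interval are made up for illustration, not recommendations):

```javascript
// Hypothetical Cube data model. With this refreshKey, cached results for
// queries against this cube are considered fresh for five minutes; within
// that window, Cube serves them from the cache instead of re-querying the
// database.
cube(`Orders`, {
  sql: `SELECT * FROM orders`,

  refreshKey: {
    every: `5 minutes`,
  },
});
```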

Cube currently maintains in-memory cache, query execution queues for pre-aggregations, and data queries in Redis. The architecture built with Cube and Redis enables the queues to be idempotent—meaning that if multiple identical queries come in, only one will run against the data source.
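The idempotency guarantee can be illustrated with a minimal sketch (this is not Cube’s actual implementation): concurrent requests with the same query key share a single in-flight execution, so the data source only sees one query.

```javascript
// Toy idempotent queue: identical concurrent queries, keyed by queryKey,
// collapse into one execution whose result is shared by all callers.
const inFlight = new Map();

async function enqueue(queryKey, execute) {
  if (!inFlight.has(queryKey)) {
    // First caller kicks off the work; the entry is cleaned up when done.
    const promise = Promise.resolve()
      .then(execute)
      .finally(() => inFlight.delete(queryKey));
    inFlight.set(queryKey, promise);
  }
  // Later callers with the same key await the same promise.
  return inFlight.get(queryKey);
}
```

In real deployments this bookkeeping has to be shared across API instances, which is exactly the coordination role Redis plays today and Cube Store will take over.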

Diagram: Idempotent multi-stage query execution (logical data flow)

Why is Redis not a fit anymore?

Redis is a unique piece of software that scales very well for a wide variety of use cases. However, Cube continues to grow in terms of use case sophistication, performance requirements, and consistency guarantees.

Redis requires too many complex instructions for simple operations like atomically pushing to a queue, executing mutually exclusive queries, and idempotent result waiting on client nodes. It also yields an excess of optimistic concurrency clashes. As a result, Redis cluster coordination consumes a lot of resources under high load, which creates a bottleneck to scaling API concurrency.

Finally, even with a lot of community effort, we couldn’t provide stable support for clustered Redis. All of these issues made it clear to our users and us that the current queue management and in-memory caching systems need simplification.

How will Cube Store replace Redis?

The Cube Store implementation will closely mimic Redis’ functionality. However, we’re going to replace lengthy command batches with single atomic domain-specific instructions. This change will drastically reduce clashes. Additionally, it’ll significantly simplify cache and queue access flow, which should drive a significant throughput boost.

Cube Store will also employ distributed LRU caching semantics similar to Redis’. Unlike Redis, however, it’ll store data in a binary format, utilizing columnar compression strategies where it makes sense to do so.

Most importantly to you—in-memory cache and queue management in Cube Store will use the same SQL protocol as pre-aggregation management, which already runs in Cube Store. So, you’ll be able to use Cube Store for in-memory cache and queue management without any reconfiguration or additional setup.

Timeline

We’re planning to release initial support for queue management in Cube Store and issue a deprecation notice for using Redis with Cube in November 2022. We’ll fully remove Redis support in Cube around Q2 2023.

Goodbye, Redis—and hello, Cube Store

And so, we’re gradually saying goodbye to Redis support.

It’s been a great run, but bigger and better performance is coming for Cube users: in-memory cache and queue management with Cube Store is on the horizon.

  • Have questions about this change? Ping me in our Slack.

  • Need help migrating to Cube Store-based queue management? Contact our team via our contact page.

And, if we’ve piqued your curiosity but you don’t have a free Cube Cloud account yet, sign up here.