Redis is one of the most popular distributed in-memory data store systems. Over the last few years, many developers have been using it not only as a NoSQL database but also as a performant cache and message queue. Thanks to its design, Redis offers low-latency reads and writes, which makes it a very widely used technology in modern programming.
The technology is so popular that many cloud providers, including Amazon AWS, have been using it and offering it to their customers.
However, in March 2024, Redis announced a shift in its licensing model [1]:
Beginning today, all future versions of Redis will be released with source-available licenses. Starting with Redis 7.4, Redis will be dual-licensed under the Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1). Consequently, Redis will no longer be distributed under the three-clause Berkeley Software Distribution (BSD).
As a consequence, some corporate contributors, including Amazon AWS and Google Cloud, announced an open source fork, Valkey, based on the last open source version of Redis (7.2.4).
At the time of writing, Redis and Valkey have several overlaps in their features. Nonetheless, the two technologies are developed and backed by different teams. Valkey 8.0, released in September 2024, comes with several key differences from Redis. In the future, we can expect more divergence.
In this article, we’ll compare Valkey and Redis, highlighting their differences with special attention to performance, pricing, support, and observability.
As we saw above, the differences between Valkey and Redis have been growing with each release.
Since Valkey is a fork of Redis, the performance of the two is similar. With its 8.0.0 release, however, Valkey has pushed its limits even further. In particular, an optimization of the way the `SUNION` and `SDIFF` commands handle temporary set objects resulted in a 41% and 27% performance boost, respectively [2].
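To put those commands in context, here is a minimal sketch using the redis-py client, which speaks the same protocol as Valkey; the host, port, and key names are placeholders for illustration.

```python
import redis  # redis-py works against both Redis and Valkey

# Connection details are placeholders; adjust to your deployment.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Two example sets of user IDs.
r.sadd("users:online", "alice", "bob", "carol")
r.sadd("users:premium", "bob", "dave")

# SUNION and SDIFF are the commands whose temporary-object handling
# was optimized in Valkey 8.0.
print(r.sunion("users:online", "users:premium"))  # union of both sets
print(r.sdiff("users:online", "users:premium"))   # online but not premium
```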
Additionally, Valkey now supports multi-threading for input/output and command execution. Redis, on the other hand, is still single-threaded for most operations.
Valkey also recently added experimental support for RDMA (remote direct memory access), which Redis still lacks. RDMA lets nodes in a network exchange data directly in main memory without involving the CPU, cache, or operating system of either node, which gives it lower latency and higher throughput than TCP.
So far, tests of Valkey over RDMA have shown a ~2.5x boost in queries per second (QPS) along with lower latency.
Lastly, Valkey 8.1 will introduce a new, more memory- and cache-efficient dictionary: a redesigned implementation of the hash table used to store Valkey keys.
Valkey and Redis support the same persistence strategies: RDB snapshots (point-in-time dumps of the dataset), the append-only file (AOF), a combination of the two, or no persistence at all.
Choosing a persistence strategy can be difficult: each option comes with tradeoffs and exposes its own configuration knobs. In Valkey, we have to pick one and configure it ourselves.
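As a sketch of what that configuration looks like, the snippet below enables AOF persistence at runtime through redis-py; the connection details and the `everysec` fsync policy are assumptions, and in production you would typically set these in the server's configuration file instead.

```python
import redis

# Connection details are placeholders; adjust to your deployment.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Turn on the append-only file and fsync it once per second.
# These map to the standard `appendonly` and `appendfsync` directives.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")

# Confirm the persistence settings currently in effect.
print(r.config_get("appendonly"))
print(r.config_get("appendfsync"))
```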
Redis, on the other hand, offers a paid alternative, Redis Enterprise, which provides six built-in persistence options, essentially preconfigured variants of the strategies above with different snapshot intervals and fsync policies.
Therefore, Redis Enterprise offers simplified strategies if we don’t have specific requirements. Other than that, Redis and Valkey have no relevant differences in this area.
Both Redis and Valkey come with several metrics we can use to evaluate and tweak their behavior. For example, both allow us to monitor the latency in the system to track and troubleshoot potential latency issues.
More generally, both systems offer an `INFO` command that returns information about the server. Among other data, it includes latency statistics (as we saw above), server and replication configuration, memory and CPU usage, and error statistics. This is how cloud providers such as Amazon AWS collect metrics for the Redis and Valkey instances they manage.
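As an illustration, here is how the relevant INFO sections can be pulled with redis-py; the section names are standard, but which fields you care about will depend on your monitoring setup.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# INFO can be scoped to a section to keep the output manageable.
memory = r.info("memory")           # memory usage details
replication = r.info("replication") # role, connected replicas, offsets
stats = r.info("stats")             # command counts, errors, evictions

print(memory["used_memory_human"])
print(replication["role"])
print(stats["total_commands_processed"])
```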
Valkey, however, recently introduced per-slot metrics. Cluster hash slots are how Redis Cluster and Valkey Cluster partition data: there are 16,384 slots in a cluster, and each key maps to exactly one of them. In particular, Valkey 8.0 introduced the `CLUSTER SLOT-STATS` command, which returns usage statistics for the slots assigned to the current cluster shard. At the time of writing, the reported metrics include the per-slot key count, CPU time, and network bytes in and out.
Furthermore, there are plans to add more memory-related information in Valkey 8.2.
Redis, on the other hand, does not provide any statistics on the slots of a cluster. This makes Valkey a preferable choice if we need fine-grained observability of our systems.
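For completeness, here is a rough sketch of how the command could be issued from redis-py, which has no dedicated helper for it; the SLOTSRANGE form and the shape of the reply follow the Valkey documentation at the time of writing and should be double-checked against your server version.

```python
import redis

# Connect to one node of a Valkey cluster (placeholder address).
r = redis.Redis(host="valkey-node-1", port=6379, decode_responses=True)

# redis-py has no wrapper for CLUSTER SLOT-STATS, so issue it raw.
# The SLOTSRANGE form asks for statistics on a range of hash slots.
reply = r.execute_command("CLUSTER", "SLOT-STATS", "SLOTSRANGE", 0, 100)

# The reply is expected to be a list of [slot, [metric, value, ...]] pairs.
for slot, metrics in reply:
    print(slot, metrics)
```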
Valkey was proposed and is actively backed by many cloud providers, including Amazon AWS and Google Cloud. Furthermore, developers from Oracle, Ericsson, and Snap Inc. are known to be contributing to Valkey to enhance its performance, scalability, and integration across different environments.
Redis is mainly maintained by Redis Inc., which drives its development and commercial support.
Going forward, we can expect more features and smoother cloud integration for Valkey than for Redis.
Pricing largely depends on the cloud provider or on the enterprise solution we’re buying. Generally speaking, Valkey is cheaper than Redis on Amazon AWS and Google Cloud.
For example, according to the AWS Pricing Calculator, a 3-node cache.r6g.8xlarge cluster (a fairly large one) costs 20% less with Valkey than with Redis — $6,419.33/month vs. $8,024.16/month, respectively, in the Ireland region.
Other AWS node types and Google Cloud pricing show similar differences. For organizations running large clusters, the gap adds up quickly: in the example above, the 20% difference amounts to roughly $1,600 per month, or about $19,250 per year, for a single cluster.
Both Redis and Valkey offer the same basic set of features. For example, we can use either of them to implement a message queue (using the `LPUSH` and `RPOP` commands to push and pop from opposite ends of a list), a cache, or a NoSQL database (using the `SET` and `GET` commands, possibly with an expiration time).
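As a concrete sketch, the snippet below implements both patterns with redis-py; the key names and the 60-second TTL are arbitrary choices for illustration.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Message queue: push on the left, pop on the right for FIFO order.
r.lpush("jobs", "job-1", "job-2")
next_job = r.rpop("jobs")            # -> "job-1"

# Cache / NoSQL store: SET with an expiration, then GET.
r.set("session:42", "alice", ex=60)  # expires after 60 seconds
user = r.get("session:42")
print(next_job, user)
```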
At the time of writing, Valkey hasn’t introduced significant additions to its feature set. Recent releases have mainly focused on internal implementation details that improve performance, such as the memory-efficiency and asynchronous I/O work discussed above.
If you have more complex use cases, or if the data structures you’re working with are complex, make sure to test your applications with Valkey before committing to a migration. You can do that either at an infrastructural level (which is more expensive; see below), or by using Docker to spin up a disposable Valkey container on your local computer.
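For a first pass, a small smoke test like the one below can be pointed at a locally running Valkey container (for example, one started from the official valkey/valkey Docker image); the checks shown are only examples, and you would extend them with the commands your application actually relies on.

```python
import redis

# Point this at your disposable Valkey container; host/port are assumptions.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# A few representative checks of basic command compatibility.
assert r.ping()
r.set("smoke:string", "ok", ex=30)
assert r.get("smoke:string") == "ok"
r.hset("smoke:hash", mapping={"field": "value"})
assert r.hgetall("smoke:hash") == {"field": "value"}

# Valkey exposes its own version field alongside the Redis-compatible one.
print("Smoke test passed:", r.info("server").get("valkey_version", "n/a"))
```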
As we saw above, Valkey forked from Redis 7.2.4. Therefore, the first step of the migration should be to update our infrastructure/Redis clients to use Redis 7.2.4. This way we can test our applications with the latest Redis version before the fork.
After that, we can deploy a Valkey instance, which will act as a replacement for the existing Redis cluster. If our infrastructure hosts customer-facing applications, it is paramount that Valkey and Redis coexist for a while, so we can test our workloads without affecting customers. Furthermore, we can export the Redis data using the `redis-cli save` command, which creates an `.rdb` file, and then import it into Valkey, possibly with a few adjustments to the data structures and configuration.
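If you prefer to drive the export from application code rather than redis-cli, a background save can be triggered as sketched below; the host name is a placeholder, and the location of the resulting dump.rdb depends on your server's `dir` configuration.

```python
import time
import redis

r = redis.Redis(host="redis-primary", port=6379)  # placeholder host

before = r.lastsave()   # timestamp of the last successful save
r.bgsave()              # fork a background RDB snapshot

# Wait until the snapshot completes before copying the file.
while r.lastsave() == before:
    time.sleep(1)

# The dump.rdb file now sits in the Redis data directory; copy it into
# the Valkey data directory before starting the Valkey server so it
# loads the dataset on boot.
print("RDB snapshot completed at", r.lastsave())
```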
The actual validation of the Valkey instance largely depends on what we use Redis for. Generally speaking, we should verify that Valkey supports all the workflows we currently run on Redis (e.g., that every key type we rely on is supported).
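One simple way to build that inventory is to scan the existing Redis keyspace and record the type of every key, as in the sketch below; on large datasets you would sample or rate-limit the scan rather than walking every key.

```python
from collections import Counter
import redis

r = redis.Redis(host="redis-primary", port=6379, decode_responses=True)

# Count how many keys of each type exist. SCAN iterates incrementally,
# so it avoids blocking the server the way KEYS * would.
type_counts = Counter()
for key in r.scan_iter(count=1000):
    type_counts[r.type(key)] += 1

print(type_counts)  # e.g. Counter({'string': 1200, 'hash': 300, ...})
```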
The last step of the migration is deleting the old Redis cluster.
The migration process might be more challenging depending on your requirements. Some Redis features might not be available in Valkey (yet), and developers might need time to adapt to the new tool. Lastly, fine-tuning Valkey settings might take a while. Until then, the performance might not be optimal.
In this article, we analyzed both Valkey and Redis from different points of view. Since Valkey was announced (and forked from Redis), the hype in the community has been growing, and so has its usage among many companies.
The question “Should I migrate?” is a difficult one to answer. Based on the comparison above and considering Valkey’s backers, the immediate answer would probably be “Yes!” But be careful, because all that glitters is not gold. Valkey is still fairly new, and we don’t know much about its future.
Therefore, before committing to one side or the other, weigh the aspects we compared above (performance, pricing, support, observability, and feature parity) against your own requirements, and then decide whether or not to migrate. In any case, both solutions are robust and will help you handle large amounts of data efficiently in your applications.