This tutorial implements an advanced caching layer in a Node.js application using Valkey, a high-performance, Redis-compatible in-memory datastore. We’ll explore layered caching, cache invalidation, and key namespacing in a modular architecture built with Express. The setup assumes a multi-service application where API responses need to be cached with flexibility, precision, and expiration control.
In 2024, Redis Ltd. (formerly Redis Labs) relicensed Redis under more restrictive, source-available terms. For a tool long seen as a symbol of open-source excellence, the move shook the trust of many developers and infrastructure teams who had relied on Redis for over a decade. In response, Valkey was created as a fully open-source fork of Redis under the Linux Foundation, backed by AWS, Google, and Oracle, and built by and for the community.
In web apps, frontend performance often hinges on how quickly and consistently backend data is delivered. Advanced caching with Valkey can be a game-changer when your frontend frequently depends on dynamic data, like user preferences, feature flags, or personalized content that doesn’t change often, but is costly to fetch or compute in real time.
This tutorial could be useful if you’re caching expensive API responses, isolating cache domains across services, keeping horizontally scaled instances in sync, or tackling many similar use cases. Let’s dive in!
Valkey is compatible with the Redis protocol, so any Redis client works. We’ll use `ioredis` for its support for advanced commands and cluster management:
```bash
docker run -d --name valkey -p 6379:6379 valkey/valkey
npm install ioredis express
```
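To confirm the container is reachable before wiring up the client, you can ping it (`valkey-cli` ships in the official `valkey/valkey` image):

```bash
docker exec -it valkey valkey-cli ping
# PONG
```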
Create a new file `lib/cache.js` for the Valkey client instance:
```js
// lib/cache.js
const Redis = require('ioredis');

const redis = new Redis({
  host: '127.0.0.1',
  port: 6379,
  // retry strategy or auth config can be added here
});

module.exports = redis;
```
This abstraction allows us to reuse the `redis` instance across services and set up test mocking easily.
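As a minimal sketch of that mocking benefit, assuming Jest as the test runner and an in-memory `Map` standing in for Valkey (neither is prescribed by this tutorial), the client module can be swapped out wholesale, and any module that requires `lib/cache` then talks to the mock instead of a live server:

```js
// services/userService.test.js (hypothetical test; assumes Jest)
jest.mock('../lib/cache', () => {
  const store = new Map();
  return {
    get: jest.fn(async (key) => store.get(key) ?? null),
    set: jest.fn(async (key, value) => { store.set(key, value); }),
  };
});

const redis = require('../lib/cache');

test('a service sees the mocked client', async () => {
  await redis.set('user:123', '{"name":"Ada"}');
  expect(await redis.get('user:123')).toBe('{"name":"Ada"}');
});
```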
This section builds a general-purpose cache utility that provides structured key generation, TTL support, and selective invalidation through namespacing. This is designed for scenarios where multiple services or domains share the same Valkey instance but require isolation and control over their respective cache entries.
You’d typically need layered caching with namespaces and TTL when building applications that handle multiple distinct types of data with different freshness requirements. For example, consider an ecommerce platform with product catalogs, user profiles, and real-time order statuses. Each of these domains has its own caching needs.
The module supports four operations: `get`, `set`, `del`, and `clearNamespace`. All cache keys follow the pattern `<namespace>:<id>`, enabling fast lookups and scoped invalidation. TTLs ensure stale data is evicted automatically, even without explicit deletion.
Create `lib/cacheUtil.js` and add the following implementation:
```js
// lib/cacheUtil.js
const redis = require('./cache');

/**
 * Generates a fully qualified cache key using a namespace and unique identifier.
 * This avoids collisions between unrelated data types stored in the same Valkey instance.
 */
function getCacheKey(namespace, id) {
  return `${namespace}:${id}`;
}

/**
 * Retrieves a cached value by namespace and id.
 * Returns null if the key is missing or the JSON parsing fails.
 */
async function get(namespace, id) {
  const key = getCacheKey(namespace, id);
  const data = await redis.get(key);
  if (!data) return null;
  try {
    return JSON.parse(data);
  } catch (err) {
    // Optionally log or handle JSON parse errors if data is corrupted
    return null;
  }
}

/**
 * Caches a value under a given namespace and id with a configurable TTL.
 * TTL is specified in seconds. Default is 60 seconds.
 */
async function set(namespace, id, value, ttl = 60) {
  const key = getCacheKey(namespace, id);
  const json = JSON.stringify(value);
  await redis.set(key, json, 'EX', ttl);
}

/**
 * Deletes a specific cache entry by namespace and id.
 * Used during data mutations to remove stale entries.
 */
async function del(namespace, id) {
  const key = getCacheKey(namespace, id);
  await redis.del(key);
}

/**
 * Deletes all cache keys that belong to a given namespace.
 * This is useful for bulk invalidation, e.g. clearing all product caches after a price update.
 * Internally uses the KEYS command, which should be used cautiously in large datasets.
 */
async function clearNamespace(namespace) {
  const pattern = `${namespace}:*`;
  const keys = await redis.keys(pattern);
  if (keys.length > 0) {
    await redis.del(...keys);
  }
}

module.exports = { get, set, del, clearNamespace };
```
This utility provides a clean and extendable interface to caching logic across the application. For example, if your application caches user profiles under keys like `user:123` and you later update the user’s information, you can invalidate that specific cache entry without touching others:

```js
await cache.del('user', '123'); // removes only the cache for user 123
```
The `clearNamespace` function is especially useful for scenarios like data migrations or batch updates. For instance, after updating all product prices, calling `clearNamespace('product')` ensures that no outdated product data remains in the cache.
TTL support ensures automatic expiration, which is critical in distributed systems where cache invalidation may occasionally fail. You can vary TTL per use case—for example, short TTLs for volatile data like pricing, and longer TTLs for stable data like country codes or feature flags.
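One way to keep those per-domain choices in one place (a hypothetical convention with illustrative values, not part of this tutorial's file layout) is a small TTL map passed into `cache.set`:

```js
// lib/ttl.js (hypothetical module; values in seconds, purely illustrative)
module.exports = {
  pricing: 30, // volatile data: expire quickly
  product: 120, // matches the route-level TTL used later in this tutorial
  featureFlags: 3600, // stable data: refresh hourly
};
```

A call site would then read `await cache.set('pricing', sku, price, ttl.pricing);`, making each namespace’s freshness policy explicit and easy to audit.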
This utility abstracts away the repetitive boilerplate of JSON handling, key formatting, and TTL logic, keeping the rest of your code focused on business logic instead of cache mechanics.
This section introduces a middleware function for Express that adds route-level caching with Valkey. It enables automatic response caching for any `GET` endpoint, storing and retrieving serialized JSON payloads under keys based on the original request URL.
You’d typically use middleware-based caching like this when building a content platform that serves blog posts or articles to a large audience. For example, when users visit popular posts or browse category pages, those requests usually hit the same data repeatedly. Instead of querying the database every time someone loads a post, the middleware caches the full JSON response for that route, making repeat visits lightning-fast.
This approach eliminates repetitive cache logic in each route handler, while keeping fine-grained control via namespace segmentation and TTL configuration.
Create a file `middleware/cacheMiddleware.js`:
```js
// middleware/cacheMiddleware.js
const cache = require('../lib/cacheUtil');

function cacheMiddleware(namespace, ttl = 60) {
  return async (req, res, next) => {
    // Key the cache on the full request URL, including the query string
    const cacheKey = req.originalUrl;
    const cached = await cache.get(namespace, cacheKey);

    // Cache hit: short-circuit the route handler entirely
    if (cached) {
      return res.json({ data: cached, cached: true });
    }

    // Cache miss: wrap res.json so the handler's response is stored before sending
    res.sendJson = res.json;
    res.json = async (body) => {
      await cache.set(namespace, cacheKey, body, ttl);
      res.sendJson({ data: body, cached: false });
    };

    next();
  };
}

module.exports = cacheMiddleware;
```
This approach intercepts the response and stores it in Valkey if it’s not already cached. The `namespace` ensures isolation between different route groups.
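For instance, a second route group could pick its own namespace and TTL without colliding with product keys (this article route and `getArticleFromDB` are hypothetical, shown only to illustrate the isolation):

```js
// Hypothetical route: articles cached under the 'article' namespace for 5 minutes
router.get('/articles/:slug', cacheMiddleware('article', 300), async (req, res) => {
  const article = await getArticleFromDB(req.params.slug); // illustrative fetch
  res.json(article);
});
```

Product entries live under `product:*` and article entries under `article:*`, so clearing one namespace never touches the other.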
Assume we’re fetching product data from a database. Create the following route in `routes/products.js`:
```js
// routes/products.js
const express = require('express');
const router = express.Router();
const cacheMiddleware = require('../middleware/cacheMiddleware');

// Mock DB call
async function getProductFromDB(id) {
  await new Promise((r) => setTimeout(r, 100)); // simulate latency
  return { id, name: `Product ${id}`, price: Math.random() * 100 };
}

router.get('/:id', cacheMiddleware('product', 120), async (req, res) => {
  const product = await getProductFromDB(req.params.id);
  res.json(product);
});

module.exports = router;
```
Integrate this into the main server:
```js
// server.js
const express = require('express');
const app = express();
const productRoutes = require('./routes/products');

// Parse JSON request bodies (needed for the PUT route's req.body)
app.use(express.json());

app.use('/api/products', productRoutes);

// Read the port from the environment so multiple instances can run side by side
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Server running on port ${PORT}`));
```
Every `GET` request to `/api/products/:id` first checks Valkey. If the entry isn’t present, the handler fetches from the DB and caches the result.
Invalidate stale cache entries whenever the data source changes. Add an update route in the same file:
```js
// routes/products.js (continued)
const cache = require('../lib/cacheUtil');

router.put('/:id', async (req, res) => {
  const updated = { id: req.params.id, ...req.body };
  // Assume database update here
  await cache.del('product', `/api/products/${req.params.id}`);
  res.json({ updated, invalidated: true });
});
```
This clears only the affected product’s cache entry. Use `clearNamespace` if all entries for a model need to be reset, such as after bulk imports.
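For instance, a bulk-import endpoint (hypothetical; not one of this tutorial's routes) might reset the whole namespace after writing:

```js
// Hypothetical bulk-import route in routes/products.js
router.post('/bulk-import', async (req, res) => {
  // Assume the import writes many products to the database here
  await cache.clearNamespace('product'); // drop every cached product response
  res.json({ imported: true, cacheCleared: true });
});
```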
To run the examples from here, you should have the following structure:
```
valkey-caching-demo/
├── lib/
│   ├── cache.js
│   └── cacheUtil.js
├── middleware/
│   └── cacheMiddleware.js
├── routes/
│   └── products.js
└── server.js
```
Start the server using:
```bash
node server.js
```
You should see:
```
Server running on port 3000
```
Use [curl](https://blog.logrocket.com/curl-measure-rtt/) to test the caching behavior. Fetch the product with ID `1` (this will simulate a DB call and then cache the response):
```bash
curl http://localhost:3000/api/products/1
```
The response will look like:
{ "data": { "id": "1", "name": "Product 1", "price": 47.38 }, "cached": false }
Repeat the same request (this time it’s served from Valkey cache):
```bash
curl http://localhost:3000/api/products/1
```

```json
{
  "data": {
    "id": "1",
    "name": "Product 1",
    "price": 47.38
  },
  "cached": true
}
```
Update the product and trigger cache invalidation:
```bash
curl -X PUT http://localhost:3000/api/products/1 \
  -H "Content-Type: application/json" \
  -d '{"name": "Updated Product 1"}'
```
The next `GET` request will re-fetch and re-cache the updated data.
This setup exercises route-level caching with TTL, cache retrieval, and invalidation using Valkey in a Node.js Express application. It verifies that `GET` requests are cached, `PUT` requests clear stale data, and key namespacing isolates cache entries.
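You can also inspect the namespaced keys directly with `valkey-cli` from the running container (the exact keys you see depend on which routes you’ve hit):

```bash
docker exec -it valkey valkey-cli KEYS 'product:*'
# 1) "product:/api/products/1"
```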
In distributed deployments with multiple Node.js instances (e.g., behind a load balancer), a local cache invalidation affects only the instance where the mutation occurs. Other instances retain stale cache entries unless a coordinated invalidation mechanism is in place.
Valkey’s native publish/subscribe system provides a lightweight solution for broadcasting cache invalidation events across all running instances. Each instance subscribes to a shared channel and listens for invalidation messages. When a message is received, the instance deletes the corresponding cache entry from its local Valkey connection.
Update the Valkey client in `lib/cache.js` to include three connections:

- The main connection (`redis`) for regular cache commands
- A subscriber connection (`sub`) listening for invalidation events
- A publisher connection (`pub`) for emitting invalidation events

```js
// lib/cache.js
const Redis = require('ioredis');

const redis = new Redis(); // Main connection
const sub = new Redis(); // Subscriber connection
const pub = new Redis(); // Publisher connection

// Subscribe to invalidation events
sub.subscribe('cache-invalidate');

// Listen for messages and delete matching cache keys
sub.on('message', async (channel, message) => {
  if (channel === 'cache-invalidate') {
    try {
      const payload = JSON.parse(message);
      const { namespace, key } = payload;
      const fullKey = `${namespace}:${key}`;
      await redis.del(fullKey);
      console.log(`Cache invalidated: ${fullKey}`);
    } catch (err) {
      console.error('Invalidation message parse error:', err);
    }
  }
});

// Used in API handlers to trigger invalidation
function publishInvalidation(namespace, key) {
  const message = JSON.stringify({ namespace, key });
  pub.publish('cache-invalidate', message);
}

module.exports = { redis, publishInvalidation };
```

Because the module now exports an object instead of the client itself, update the import in `lib/cacheUtil.js` to match: `const { redis } = require('./cache');`.
Using a JSON structure `{ namespace, key }` instead of raw strings like `'product:/api/products/123'` avoids parsing ambiguity and makes it easier to extend the message format later (e.g., to include `invalidateAll: true`).
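As a sketch of that extension (hypothetical; `invalidateAll` is not implemented in this tutorial's code), the subscriber could branch on the flag:

```js
// Hypothetical extension of the subscriber handler in lib/cache.js
sub.on('message', async (channel, message) => {
  if (channel !== 'cache-invalidate') return;
  const { namespace, key, invalidateAll } = JSON.parse(message);
  if (invalidateAll) {
    // Clear the entire namespace instead of a single key
    const keys = await redis.keys(`${namespace}:*`);
    if (keys.length > 0) await redis.del(...keys);
    return;
  }
  await redis.del(`${namespace}:${key}`);
});
```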
To broadcast invalidation from the `PUT` route, update your product update handler in `routes/products.js` to notify all application instances when a product is updated:
```js
// routes/products.js (inside router.put)
const { publishInvalidation } = require('../lib/cache');

router.put('/:id', async (req, res) => {
  const updated = { id: req.params.id, ...req.body };
  // Simulate DB update here

  const cacheKey = `/api/products/${req.params.id}`;

  // Local invalidation (redundant, but fast)
  await cache.del('product', cacheKey);

  // Cross-instance invalidation
  publishInvalidation('product', cacheKey);

  res.json({ updated, invalidated: true });
});
```
Each instance will receive the `cache-invalidate` message and delete its corresponding cache entry, ensuring all environments stay in sync.
You can simulate a distributed environment using two terminal sessions:
```bash
# Terminal 1
PORT=3000 node server.js

# Terminal 2
PORT=3001 node server.js
```
Both instances will connect to the same Valkey server. When a `PUT` request is sent to one instance, both will respond to the pub/sub invalidation:
```bash
curl -X PUT http://localhost:3000/api/products/1 \
  -H "Content-Type: application/json" \
  -d '{"name": "Updated Product"}'
```
Because the `sub.on('message')` handler already calls `console.log()` when it deletes a key, both terminals will log the cache key deletion.
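Given that log statement, the output in each terminal should look something like:

```
Cache invalidated: product:/api/products/1
```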
This implementation creates a modular Valkey-based caching layer for a Node.js application. It supports route-level middleware caching with TTL, namespace-based key management, and automatic invalidation during mutations. Pub/sub support ensures consistency in horizontally scaled deployments. This structure gives fine-grained control over cache behavior while remaining adaptable to complex service topologies.