What is eventual consistency? In distributed databases, eventual consistency ensures that all replicas of a database will hold the same data – but only after a delay. This delay occurs because updates are first applied to the primary database, and then asynchronously propagated to other replicas.
For example, consider an application with two database replicas: a primary and a secondary. When a user updates data (e.g., a profile), that update is written to the primary database first, and then asynchronously propagated to the secondary database.
However, if the frontend fetches data from the secondary database during this delay, it may return outdated information, leading to confusion or inconsistent user experiences.
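To make the delay concrete, here is a small sketch that simulates the scenario above: the write lands on the primary immediately, while the replica only catches up after an artificial propagation delay. The in-memory "databases" and the two-second timeout are purely illustrative, not a real replication setup:

```js
// Simulated primary and replica stores (illustrative only -- real systems
// replicate over the network, not with setTimeout).
const primaryDb = new Map();
const replicaDb = new Map();

function writeToPrimary(key, value) {
  primaryDb.set(key, value);
  // Asynchronously propagate the update to the replica after a delay
  setTimeout(() => replicaDb.set(key, value), 2000);
}

function readFromReplica(key) {
  return replicaDb.get(key);
}

writeToPrimary('profile:user123', { name: 'Jane Doe' });
console.log(readFromReplica('profile:user123')); // undefined -- stale read
setTimeout(() => {
  console.log(readFromReplica('profile:user123')); // { name: 'Jane Doe' } -- eventually consistent
}, 2500);
```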
Eventual consistency means data across systems isn’t synchronized instantly, causing temporary discrepancies. While acceptable in the backend, this delay can confuse users in frontend systems expecting immediate updates.
This article explores the impact of eventual consistency on frontend systems and practical strategies to address it. We’ll cover real-time updates using WebSockets, mirrored databases with Docker Compose, and provide code examples to improve consistency management in the frontend.
Frontend systems are typically designed to present users with the most up-to-date data. But when eventual consistency is in play, the data might temporarily be out of sync between replicas. Key challenges include stale reads immediately after a write, UI values that flicker or change when delayed updates finally arrive, conflicting edits from different clients, and users left wondering whether their action actually succeeded.
As developers, we must balance the system’s availability with the need for data consistency, ensuring that our applications don’t leave users in the dark or confused by inconsistent data.
When building a frontend that interacts with an eventually consistent system, you need strategies to mitigate user-facing issues. Here are some techniques:
One effective method for addressing eventual consistency is leveraging WebSockets to provide real-time updates. WebSockets enable the server to push updates to the client as soon as they happen, ensuring that the frontend gets notified when data becomes consistent across replicas.
To implement WebSockets, first set up the WebSocket server. Using Node.js and the ws package, create a server that listens for connections and sends updates to the client:
```js
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  console.log('Client connected');

  ws.on('message', (message) => {
    console.log(`Received: ${message}`);
  });
});

module.exports = wss;
```
Then, implement the WebSocket client. In your frontend, open a WebSocket connection to receive real-time updates. When the server pushes new data, the frontend can immediately update the UI:
```js
const socket = new WebSocket('ws://localhost:8080');

socket.addEventListener('open', () => {
  console.log('Connected to server');
});

socket.addEventListener('message', (event) => {
  const updatedData = JSON.parse(event.data);
  // Update the UI with the new data
  updateUI(updatedData);
});
```

Finally, push data updates from the backend. Once the primary and secondary databases are synchronized, the server can notify all connected clients that the update is complete:

```js
function notifyClients(updatedData) {
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify(updatedData));
    }
  });
}
```
Using WebSockets allows the frontend to get updates immediately, reducing the inconsistency window and giving users a more accurate view of their data.
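To tie this together, the backend needs to decide when the data is actually consistent before calling notifyClients. One approach is to poll the replica until it reflects the new write. The sketch below reuses notifyClients from above; the connection details, the profiles table, and the version column are assumptions for illustration, not part of the article's code:

```js
const { Pool } = require('pg');

// Hypothetical connection pools -- one for the primary, one for the replica
// (hosts, ports, and credentials here are assumptions).
const primaryPool = new Pool({ host: 'localhost', port: 5432, user: 'user', password: 'password', database: 'app_db' });
const replicaPool = new Pool({ host: 'localhost', port: 5433, user: 'user', password: 'password', database: 'app_db' });

// Write to the primary, poll the replica until it reflects the new version,
// then push the now-consistent row to connected WebSocket clients.
async function updateProfileAndNotify(id, name) {
  const { rows } = await primaryPool.query(
    'UPDATE profiles SET name = $1, version = version + 1 WHERE id = $2 RETURNING *',
    [name, id]
  );
  const updated = rows[0];

  for (let attempt = 0; attempt < 10; attempt++) {
    const replicaResult = await replicaPool.query('SELECT * FROM profiles WHERE id = $1', [id]);
    const replicaCopy = replicaResult.rows[0];
    if (replicaCopy && replicaCopy.version >= updated.version) {
      notifyClients(replicaCopy); // data is now consistent across replicas
      return replicaCopy;
    }
    await new Promise((resolve) => setTimeout(resolve, 500)); // wait before retrying
  }

  // Give up quietly if the replica hasn't caught up in time
  return updated;
}
```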
Optimistic UI updates assume that a user action will succeed and immediately reflect that change in the UI without waiting for the backend to confirm. This provides a snappy and responsive user experience. If the operation fails, the system can roll back the UI to its previous state.
In the following example, we optimistically add a new post to a list of posts before the backend confirms the operation:
```js
function submitPost(postData) {
  const tempId = Date.now();
  const newPost = { id: tempId, content: postData.content };

  // Optimistically update the UI
  setPosts([newPost, ...posts]);

  // Send data to server
  fetch('/api/posts', {
    method: 'POST',
    body: JSON.stringify(postData),
  })
    .then((response) => response.json())
    .then((serverData) => {
      // Update post with server response
      setPosts((prevPosts) =>
        prevPosts.map((post) => (post.id === tempId ? serverData : post))
      );
    })
    .catch((error) => {
      // Rollback the optimistic UI update on failure
      setPosts((prevPosts) => prevPosts.filter((post) => post.id !== tempId));
      showError(error.message);
    });
}
```
If the server call fails, we roll back the UI to its previous state. This prevents the user from seeing stale or incorrect data for too long.
Pros of Optimistic UI: the interface feels instant and responsive, and users get immediate feedback on their actions without waiting for a backend round trip.
Cons of Optimistic UI: you have to handle rollbacks when the server rejects a change, and users may briefly see data that was never actually persisted.
In some cases, you may want to block the user from performing further actions until the update is confirmed. This approach is suitable for critical systems, such as financial applications, where consistency is more important than responsiveness.
To implement this strategy, we must first disable user actions in the frontend. In the following code, we disable a form submission button until the transaction is confirmed by the backend:
```js
function submitTransaction(transactionData) {
  // Disable submit button to prevent further actions
  disableButton('submit');

  fetch('/api/transaction', {
    method: 'POST',
    body: JSON.stringify(transactionData),
  })
    .then((response) => response.json())
    .then((result) => {
      // Enable button after successful transaction
      enableButton('submit');
      updateTransactionUI(result);
    })
    .catch((error) => {
      // Re-enable button and show error
      enableButton('submit');
      showError(error.message);
    });
}
```
During the waiting period, you can display a loading indicator to signal to the user that the system is processing their request.
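The disableButton and enableButton helpers aren't defined in the snippet above. A minimal sketch using plain DOM APIs might look like the following, where the "submit" button id and the "loading-spinner" element are assumptions used as the loading indicator:

```js
// Minimal sketch: assumes a button with id "submit" and a spinner element
// with id "loading-spinner" exist in the page (both ids are assumptions).
function disableButton(id) {
  document.getElementById(id).disabled = true;
  document.getElementById('loading-spinner').hidden = false; // show loading indicator
}

function enableButton(id) {
  document.getElementById(id).disabled = false;
  document.getElementById('loading-spinner').hidden = true; // hide loading indicator
}
```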
Pros of blocking user actions until confirmation: the user never acts on unconfirmed or inconsistent data, which is essential for high-stakes operations like payments.
Cons of blocking user actions until confirmation: the interface feels slower, and on high-latency connections users may be left waiting with nothing to do.
When multiple clients update the same data across different replicas, versioning helps track changes and detect conflicts. Each piece of data carries a version number that increments with each update. When a client submits an update, the system checks the version to see if it has changed since the client last fetched the data. If there’s a mismatch, a conflict resolution strategy is applied.
To implement this technique, we must first add version numbers to our data. When saving data, include a version number that increments with each update:
```js
const profile = {
  id: 'user123',
  name: 'John Doe',
  version: 2, // Incremented with each update
};
```
Then, before making changes, compare the current version of the data with the version on the server:
```js
function saveProfile(updatedProfile) {
  fetch(`/api/profile/${updatedProfile.id}`)
    .then((response) => response.json())
    .then((serverProfile) => {
      if (serverProfile.version === updatedProfile.version) {
        // Proceed with update
        sendUpdateToServer(updatedProfile);
      } else {
        // Handle conflict
        resolveConflict(updatedProfile, serverProfile);
      }
    });
}
```
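The sendUpdateToServer helper isn't shown in the snippet above. One possible sketch sends the expected version along with the update so the server can reject writes made against an outdated version; the PUT endpoint and the 409 Conflict response here are assumptions about the API, not something the article defines:

```js
// Sketch: send the update together with the version the client last saw.
// A server that supports optimistic concurrency can compare versions and
// answer with 409 Conflict if the record changed in the meantime (assumed API).
function sendUpdateToServer(updatedProfile) {
  return fetch(`/api/profile/${updatedProfile.id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      ...updatedProfile,
      expectedVersion: updatedProfile.version,
    }),
  }).then((response) => {
    if (response.status === 409) {
      // Someone else updated the profile first; fall back to conflict handling
      return response.json().then((serverProfile) =>
        resolveConflict(updatedProfile, serverProfile)
      );
    }
    return response.json();
  });
}
```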
Then, when a conflict is detected, common resolution strategies include keeping the most recent change (last write wins), merging non-conflicting fields from both versions, or prompting the user to choose which version to keep.
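The resolveConflict function referenced earlier could implement any of these strategies. The sketch below uses a simple field-level merge, chosen here as an example rather than taken from the article; it assumes the local object contains only the fields the user actually edited:

```js
// Sketch: merge the user's local edits onto the newer server copy, adopt the
// server's version number, then retry the save with the merged result.
function resolveConflict(localProfile, serverProfile) {
  const merged = {
    ...serverProfile,               // start from the latest server state
    ...localProfile,                // apply the user's local edits on top
    version: serverProfile.version, // adopt the newer version number
  };

  // Re-run the save flow with the merged profile
  return sendUpdateToServer(merged);
}
```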
Pros of versioning with conflict resolution: conflicting updates are detected instead of silently overwriting one another, and you can pick a resolution strategy that fits the data.
Cons of versioning with conflict resolution: it adds complexity to both the client and the API, and every update needs an extra version check (and occasionally user intervention) before it can be applied.
To simulate eventual consistency in a development environment, we can create a setup with two PostgreSQL databases — a primary and a replica — using Docker Compose. The replica will intentionally lag behind the primary to simulate the delay in propagating updates.
Here is the Docker Compose configuration:
```yaml
version: '3.8'

services:
  primary_db:
    image: postgres:13
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: app_db
    ports:
      - "5432:5432"
    networks:
      - db_network
    volumes:
      - primary_data:/var/lib/postgresql/data

  replica_db:
    image: postgres:13
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: app_db
    ports:
      - "5433:5432"
    networks:
      - db_network
    depends_on:
      - primary_db
    volumes:
      - replica_data:/var/lib/postgresql/data
    command: >
      sh -c "sleep 10 && pg_basebackup -h primary_db -D /var/lib/postgresql/data -U user -v -P --wal-method=stream"

networks:
  db_network:
    driver: bridge

volumes:
  primary_data:
  replica_data:
```
Here’s what’s happening in the code snippet above:
This Docker Compose file sets up two PostgreSQL services, both using PostgreSQL version 13:

- Primary database (primary_db): the main database that handles all writes
- Replica database (replica_db): a secondary database that receives updates from the primary after a short delay (simulated by sleep 10)

The primary database is exposed on port 5432, while the replica is on port 5433. Both services use the same environment settings for the database user, password, and name, and they share a custom network (db_network) for communication.
The primary database stores its data in a named volume (primary_data), and the replica stores its data in a separate volume (replica_data). The replica service waits for the primary to be ready before starting, and then runs a command to copy data from the primary using pg_basebackup and streaming replication to keep the data in sync. This setup provides a simple replication solution, ensuring the replica continuously mirrors the primary database for redundancy and data availability.
By introducing artificial lag, you can simulate the eventual consistency issue and test how your frontend handles stale data.
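To observe the effect from application code, you can write to the primary and immediately read from the replica. The sketch below uses the node-postgres (pg) client with the credentials and ports from the Compose file above; the profiles table, and the assumption that replication is actually streaming data to the replica, are illustrative additions rather than part of the article's setup:

```js
const { Client } = require('pg');

// Connection settings taken from the Docker Compose file above;
// the "profiles" table is a hypothetical example table.
const primary = new Client({ host: 'localhost', port: 5432, user: 'user', password: 'password', database: 'app_db' });
const replica = new Client({ host: 'localhost', port: 5433, user: 'user', password: 'password', database: 'app_db' });

async function demonstrateStaleRead() {
  await primary.connect();
  await replica.connect();

  // Write to the primary
  await primary.query("UPDATE profiles SET name = 'Jane Doe' WHERE id = 'user123'");

  // Read from the replica right away -- with replication lag, this can still
  // return the old value
  const { rows } = await replica.query("SELECT name FROM profiles WHERE id = 'user123'");
  console.log('Replica sees:', rows[0] && rows[0].name);

  await primary.end();
  await replica.end();
}

demonstrateStaleRead().catch(console.error);
```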
If you want to simulate real-world network latency between the primary and replica, you can use Linux's tc (traffic control) tool to introduce artificial delay between the two containers:
```bash
# Simulate 1000ms delay in the Docker container
docker exec -it replica_db bash
tc qdisc add dev eth0 root netem delay 1000ms
```
This command delays network traffic in the replica by 1000ms (one second), simulating the lag between updates.
Eventual consistency is an unavoidable challenge in distributed systems. While backend services can handle it gracefully, frontend systems must be designed with strategies to mitigate the effects of delayed data propagation.
Using techniques like WebSockets for real-time updates, optimistic UI updates, blocking user actions, and versioning with conflict resolution, you can build responsive frontend systems that handle eventual consistency effectively. By simulating eventual consistency in a controlled environment using Docker Compose, you can test and refine your solutions before deploying them to production.