Praveen Durairaj Application Engineer @HasuraHQ. Tech Enthusiast. You can follow me on Twitter @PraveenWeb.

Full-stack observability: LogRocket, DataDog, Honeycomb


Full-Stack Observability

In this post, we will set up an observable system for a React/Redux app and a Node.js back end deployed with Docker across the stack and see how it helps us identify and fix problems.

We will cover:

  • LogRocket with React/Redux integration to track front-end application
  • DataDog with Docker integration to monitor uptime/threshold-based alerts and Node.js APM
  • Honeycomb with back-end server logs integration for error responses

Getting started with observability

Observability is a measure of how well we can understand a system from the work that it does. It also helps to explain why a system is behaving a certain way.

Observability is often confused with monitoring. Monitoring is a way to identify known issues, while observability lets you understand why an issue occurred in the first place.

Traditionally, apps were server-side rendered and the server was more or less set up and controlled by the application developer, so monitoring was focused on the server. Now, with client-side JavaScript apps (React, Vue, Angular, etc.) and serverless back ends, how errors are identified and handled is changing.

We cannot predict when or which part of the application will fail. Therefore, at any point in time, we need a way to quickly get visibility into how the app is performing and handling errors.

The need for context

We need context across the full stack: front end, back end, and infrastructure. It’s not just the code we write that can go wrong; the underlying infrastructure can fail in many ways too, and we need a quick way to recover when it does.

To determine what went wrong with an app, we need as much data as possible about the entire stack. By design, an observable system should emit metrics, logs, and traces.
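As a toy illustration of that last point, even a plain Node.js service can emit structured, machine-parseable log events that downstream tooling can query. This is just a sketch; the field names are illustrative, not a required schema for any of the tools below.

```javascript
// Emit one structured log line per event so downstream tooling
// (DataDog, Honeycomb, etc.) can parse and query it. The field
// names here are illustrative, not a required schema.
function logEvent(level, message, fields) {
  const entry = Object.assign(
    { timestamp: new Date().toISOString(), level: level, message: message },
    fields
  );
  console.log(JSON.stringify(entry));
  return entry;
}

logEvent('info', 'request completed', { path: '/todos', status: 200, durationMs: 42 });
```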

React with LogRocket

LogRocket helps you understand user behavior by capturing logs and session recordings from your app.

Error tracking

Despite all the end-to-end testing that goes into production builds, errors will still slip through. Once an error is tracked, you need an easy way to:

  • Identify which segment of users is affected
  • Reproduce the same errors in a development setup
  • Filter out the noise to get a quick overview of what’s happening

LogRocket helps you do all the above easily.

React integration

LogRocket comes with built-in integrations for popular state management libraries like Redux, MobX, and Vuex. In this example, I’m going to use the Redux integration. The setup process is pretty simple. Start by signing up for LogRocket to get an app ID.

  • Install with npm: npm install --save logrocket
  • Initialize LogRocket in your app:
    import LogRocket from 'logrocket';
    // Initialize LogRocket with your app ID
    LogRocket.init('<your-app-id>');
  • Add the Redux middleware:
    import { applyMiddleware, createStore } from 'redux';
    const store = createStore(
      reducer, // your app reducer
      applyMiddleware(...middlewares, LogRocket.reduxMiddleware()) // ...middlewares: your existing middleware
    );

As soon as the integration is set up, LogRocket will start receiving events and you can replay them on your dashboard.

LogRocket Redux Replay

Identify users

We also need to tag each event to a user. So once a user is authenticated, it’s just a matter of triggering the identify method.

LogRocket.identify('<user_id>', {
  name: '<name>',
  email: '<email>',

  // Add your own custom user variables here, i.e.:
  <custom_type>: '<custom_value>'
});

Great! Now we have the basic system up and running to collect data from the front-end app.

Tracking bad deployment

With React applications (or any JavaScript framework, for that matter), unless they’re used with TypeScript, the likelihood of uncaught exceptions surfacing at runtime is high.

For example, let’s look at a case of unsafely accessing a list.

fetch('/todos')
  .then(function(response) {
    return response.json();
  })
  .then(function(myJson) {
    // some data processing
    const first_todo = myJson.todos[0].text;
    return first_todo;
  });
LogRocket capturing uncaught exceptions

In the above example, the code was written assuming that the back end will always return a non-empty list of todos with its corresponding metadata.

In this case, the server returned an empty array and resulted in an error.

Assumptions can always go wrong. The ability to identify exceptions post-deployment is crucial for quick recovery. Here, the fix is to add a try/catch block and handle the case of empty responses carefully.
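A minimal sketch of that fix, assuming the same todos response shape as above: guard the access and fall back gracefully instead of throwing.

```javascript
// Defensive version: never assume the server returned a non-empty list.
function getFirstTodoText(data) {
  try {
    if (data && Array.isArray(data.todos) && data.todos.length > 0) {
      return data.todos[0].text;
    }
    return null; // empty or malformed response: degrade instead of crashing
  } catch (err) {
    // report to your error tracker here, then degrade gracefully
    return null;
  }
}

getFirstTodoText({ todos: [] });                     // → null
getFirstTodoText({ todos: [{ text: 'buy milk' }] }); // → 'buy milk'
```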

Uptime monitoring and tracing with DataDog

DataDog seamlessly aggregates metrics and events across the full DevOps stack. Assuming a typical microservice architecture, the React app is Dockerized for deployment in any cloud provider.

Apart from the cloud provider’s uptime guarantee, the Docker container running inside it can have its own downtime for a number of reasons.

It could be as simple as a badly coded server causing a crash, or as complicated as high CPU/RAM usage causing slow responses. In these cases, DataDog comes to the rescue by collecting as much information as possible about your deployment.

Docker integration

Similar to LogRocket, DataDog comes with built-in integrations for the DevOps stack. In this post, we are covering a typical Docker deployment. Head over to the installation instructions for the DataDog Agent on Docker.

docker run -d --name dd-agent \
           -v /var/run/docker.sock:/var/run/docker.sock:ro \
           -v /proc/:/host/proc/:ro \
           -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
           -e DD_API_KEY=<your-api-key> \
           datadog/agent:latest

Note: replace <your-api-key> with your DataDog API key.

Verify that the agent is up by running docker ps.

Now we can set up a dashboard to observe different metrics of the underlying cloud cluster where the application’s Docker container is running.

DataDog metrics dashboard

Threshold alerts

We have to measure a few metrics to track whether the Docker container is running the way it should. Some of the metrics to collect are:

  • CPU usage
  • RAM usage
  • Docker container running status
  • System uptime

There are plenty of other metrics that can be collected, but the above give us a pretty good sense of what’s going on.

DataDog CPU RAM Usage Metrics


DataDog gives you the ability to create monitors that actively check metrics, integration availability, network endpoints, and more. Once a monitor is created, you are notified when its threshold conditions are met.

You can set up a monitor that fires when a metric (like response time or CPU/RAM usage) crosses a threshold, triggering alerts to you and your team.

The real value of these monitors kicks in when, for example, an unoptimized database query takes a lot of CPU to process and becomes a bottleneck for the other microservices in the same cluster.

DataDog Monitor Alerts

APM — dd-trace

Tracing brings the benefits of visibility when the application stack grows into a multi-container setup in a microservice architecture. A web/mobile client contacts different services for different use cases and it quickly gets complex to handle.

Let’s look at a simple Node.js express integration with dd-trace to see how it helps.

Enable APM in DataDog Agent running on the same host as your Node.js Docker container.

docker run -d -v /var/run/docker.sock:/var/run/docker.sock:ro \
              -v /proc/:/host/proc/:ro \
              -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
              -e DD_API_KEY=<YOUR_API_KEY> \
              -e DD_APM_ENABLED=true \
              datadog/agent:latest

Place the code below at the top of your Node.js server to start instrumenting it.

var tracer = require('dd-trace').init({
  hostname: '<host-name>',
  port: 8126,
  debug: true
});

With this setup, you can now trace request/response on your back-end server.

DataDog APM Trace

We can also look at graphs of requests per second, latency, errors, total time spent, etc. If there is a sudden increase in any of these metrics, we can trace an individual request to see what is going wrong.

A good example would be tracing a database call that fetches a large amount of data, or an async API call, to see how long it takes to respond.
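The essence of what a tracer records per span can be sketched in a few lines. The `timed` helper below is a hypothetical illustration of the idea, not part of the dd-trace API:

```javascript
// Hypothetical helper: wrap an async operation and record its duration,
// roughly what a tracing span does for a DB query or API call.
async function timed(name, fn) {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(name + ' took ' + (Date.now() - start) + 'ms');
  }
}

// Usage (fetchTodos is a placeholder for your own async call):
// timed('fetch-todos', function () { return fetchTodos(); });
```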

If multiple requests branch out from an endpoint, you can uniquely ID each request. Trace data can guide your development by pointing at the parts of the codebase worth optimizing. Having visibility into your processes makes a considerable difference when optimizing for those milliseconds as your application scales.

DataDog Trace Metrics

Visualize with Honeycomb

When there’s so much data to be analyzed, it’s generally good to zoom out and get a visual representation of what’s going on with the application. While LogRocket helps you capture errors in your front end, Honeycomb can be set up to capture metrics from the back end.

Honeycomb integrations are fast to set up because the tool is environment agnostic: it ingests JSON and lets you process your data as JSON.

Node.js integration

Get started by installing Honeycomb’s Node.js integration:

npm install honeycomb-beeline --save

Then, place the code snippet below at the top of your Node.js server code.

    const beeline = require("honeycomb-beeline")({
      writeKey: "<your-write-key>",
      dataset: "node-js"
      // ... additional optional configuration ...
    });

The Node app has a few endpoints. Honeycomb automatically parses the request metadata, including the payload, into a nice schema that can then be queried. We can also apply filters and break down the data by different parameters.

For example, if we want to count the requests sent to a particular URL, we can apply a breakdown on that parameter and run the query.
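Conceptually, that breakdown is just a group-and-count over the request events. The tiny sketch below (with made-up data) shows the shape of the result Honeycomb computes for you:

```javascript
// Illustrative only: a "break down by path" query over request log entries.
const requests = [
  { path: '/todos', status: 200 },
  { path: '/todos', status: 500 },
  { path: '/users', status: 200 },
];

const countByPath = requests.reduce(function (acc, r) {
  acc[r.path] = (acc[r.path] || 0) + 1;
  return acc;
}, {});

console.log(countByPath); // { '/todos': 2, '/users': 1 }
```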

Honeycomb Dashboard

Flexible schema

More often than not, tooling has a rigid schema that restricts the developer from asking more questions. Honeycomb has a flexible schema and lets you send arbitrary data, as long as it’s JSON.

Pro tip: Honeycomb lets you manage the schema. This allows you to assign specific data types to particular columns (if type conversion is allowed, of course), so you can apply comparison operators like >, <, >=, <=, and =.

Collaborative data analysis

With its team-based approach, Honeycomb lets developers on a team analyze whether their application is working fine. Anybody on the team can run a query to check that recent deployments are behaving, based on response times and status codes.

With fine-grained data, time-based graphical visualization helps developers to keep asking different questions until they pinpoint the original problem.

Honeycomb data is real-time, and aggregation happens at read time.

Recently, Honeycomb also introduced Tracing for deeper insights, like an overview of which methods are slow, the structure of an API request, and sync-vs-async decisions for parallelizing certain requests. Here’s a request tracing example taken from Honeycomb.

Honeycomb request tracing

Adding safeguards

At a bare minimum, we can follow best practices across the stack to minimize what can go wrong in production. Prudent error handling, a stable infrastructure provider, and up-to-date tooling provide a basic safeguard for your application. This is an ongoing process: libraries and tools keep getting updated, and those updates have to be deployed to your application consistently.
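As one concrete safeguard on the Node.js side, you can at least make sure unexpected failures are logged before the process dies, so they surface in your monitoring pipeline. A minimal sketch; the log format here is arbitrary:

```javascript
// Last-resort handlers: surface failures that slipped past normal
// error handling, so they appear in your log/monitoring pipeline.
process.on('unhandledRejection', function (reason) {
  console.error(JSON.stringify({
    level: 'error', type: 'unhandledRejection', reason: String(reason)
  }));
});

process.on('uncaughtException', function (err) {
  console.error(JSON.stringify({
    level: 'error', type: 'uncaughtException', message: err.message
  }));
  // The process is in an unknown state; exit and let Docker / your
  // orchestrator restart it.
  process.exit(1);
});
```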


The idea is to leverage the built-in integrations of all these tools. Plugin support for common frameworks and libraries across the stack is pretty good in the above toolset.

In a complex, distributed system, a high degree of observability is imperative. Of course, some responses can be automated, like restarting your Docker container after your cloud vendor’s downtime or handling errors in the front-end client.

True, observability comes at a cost, but it helps solve mission-critical business issues faster as the application and business scale.
