Spending hours, or even days, trying to fix an obscure bug is frustrating and unproductive. Eventually, you end up staring at the screen, waiting for a eureka moment to magically happen.
But what if, instead of waiting for the solution to come to you, you had the superpower to systematically trace any edge-case bug you're dealing with?
You can, with logging. When used correctly, logging can give you the necessary insights into your application so you can figure out exactly what happened. Proper logging can be the difference between a lousy dump of debug statements and a powerful debugging tool that helps you find bugs more easily and fix them faster.
In logging, it's important to have output that's both easy for humans to read and parseable by machines. For us developers, it's important that we can make sense of the logs when we inspect them. Machines need to be able to parse the logs so we can run advanced queries and perform fancy aggregations.
JSON is a format that fits both criteria, which is why a good logging library serializes its output as valid JSON and makes sure your logs are always formatted properly.
You may get by using `console.log` in hobby projects. In production-grade Node.js applications, however, it's often useful to be able to distinguish between different log levels.
Enter logging libraries, which also allow you to turn logging on and off at different levels. In a production environment you'd typically want errors and perhaps warnings enabled, while in a staging environment more verbose debug logs are useful as well, even though they would add too much noise in production.
Pino is a popular logging library in the Node.js ecosystem. It’s fast and has minimal overhead.
To install Pino in your Node.js project, run:

```shell
npm install pino
```
Create a file named `logger.js`. In this file, we will import Pino, configure it, and then export a logging instance to be used throughout the project:

```javascript
// logger.js
const pino = require('pino');

// Create a logging instance
const logger = pino({
  level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
});

module.exports.logger = logger;
```
Using Pino is relatively straightforward. You can log messages at different log levels (debug, info, warn, error, etc.) using the similarly named methods. You can also pass in objects and/or errors for additional context:
```javascript
const { logger } = require('./logger.js');

// Log a simple message at the "info" level
logger.info('Application started!');
// Outputs:
// {"level":30,"time":1608568334356,"pid":67017,"hostname":"Maxims-MacBook-Pro.local","msg":"Application started!"}

// Log an object in addition to a message to supply context
const user = { firstName: 'Maxim', lastName: 'Orlov' };
logger.info(user, 'User successfully authenticated');
// Outputs:
// {"level":30,"time":1608568334356,"pid":67017,"hostname":"Maxims-MacBook-Pro.local","firstName":"Maxim","lastName":"Orlov","msg":"User successfully authenticated"}

// Log an error at the "error" level
const error = new Error('Database crashed!');
logger.error(error, 'Failed to fetch user');
// Outputs:
// {"level":50,"time":1608568334356,"pid":67017,"hostname":"Maxims-MacBook-Pro.local","stack":"Error: Database crashed!\n    at Object.<anonymous> (/Users/maxim/Code/playground/test.js:19:15)\n    at Module._compile (node:internal/modules/cjs/loader:1102:14)\n    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1131:10)\n    at Module.load (node:internal/modules/cjs/loader:967:32)\n    at Function.Module._load (node:internal/modules/cjs/loader:807:14)\n    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12)\n    at node:internal/main/run_main_module:17:47","type":"Error","msg":"Failed to fetch user"}
```
To integrate Pino with an existing Node.js application, you simply import the logger instance and use it throughout your project. If your project is using `console.log`, you can do a project-wide find and replace with `logger.info`. Just make sure you import the logger instance at the top of each file.
Additionally, you can spend some time categorizing your logs into different log levels. This will allow you to distinguish log messages at different severity levels, and you can silence logs below a certain log level for specific environments, such as debug logs in production.
As a bare minimum, I use the “info” and “error” log levels in my projects so I can easily distinguish error logs from normal operation logs.
Here’s an example Node.js application running an Express web server with a single endpoint to fetch a user:
```javascript
// server.js
const express = require('express');
const { logger } = require('./logger.js');
const db = require('./db.js');

const PORT = process.env.PORT || 3000;

const app = express();

app.get('/users/:id', async (req, res) => {
  let userId = req.params.id;
  if (isNaN(userId)) {
    logger.warn({ userId }, 'Invalid user ID');
    return res.status(400).send('Invalid user ID');
  } else {
    userId = Number(userId);
  }

  try {
    logger.info({ userId }, 'Fetching user from DB');
    const user = await db.getUser({ userId });

    if (!user) {
      logger.warn({ userId }, 'User not found');
      return res.status(404).send('User not found');
    }

    logger.debug({ user }, 'User found, sending to client');
    return res.status(200).json(user);
  } catch (error) {
    logger.error(error, 'Failed to fetch user from DB');
    return res.status(500).send('An error occurred while fetching user');
  }
});

app.listen(PORT, () => {
  logger.info(`Server listening at http://localhost:${PORT}`);
});
```
Notice the different log levels (debug, info, warn, and error) used in this example. I tend to log at “warn” severity when something occurred that’s neither a normal operation nor an error, so something in between.
Debug logs are disabled by default in Pino, so I tend to use them for logs that would add too much noise in production but might be useful in staging or local development during debugging.
Node.js is single-threaded, and therefore it uses the event loop to handle simultaneous asynchronous tasks. While this makes Node.js very fast at serving web requests, the downside is that the function stack and context are lost in the process.
The AsyncLocalStorage class is part of the `async_hooks` module. It's a relatively new Node.js API that lets you store data within callback functions and asynchronous operations.
To use it, you create a new class instance and call the `run` method, passing two arguments: the store and a callback function. The store can be anything from a simple integer or string to a complex object or map. The callback function, passed in as the second argument, will be executed in the context of the store. To access the store, call the `getStore` method on the instance.
Let’s see how that works:
```javascript
const { AsyncLocalStorage } = require('async_hooks');

// Create a new context
const context = new AsyncLocalStorage();

function doSomethingAsync() {
  // Use setImmediate to mimic async execution
  setImmediate(() => {
    const store = context.getStore();
    store.get('name'); // Maxim
  });
}

function main() {
  const store = new Map();
  store.set('name', 'Maxim');

  // Run the callback function (second argument) inside a context with `store` as data
  context.run(store, () => {
    doSomethingAsync();
  });

  // This is outside of the context
  context.getStore(); // undefined
}

main();
```
Keep in mind the store is only accessible from within the callback function and any of its children. Anywhere outside, the `getStore` method will return `undefined`.
This example uses just one nesting level and both functions are in the same file, so the benefit may not seem very obvious. However, in a larger project where the function stack is several layers deep and spread across different files, you can imagine the usefulness of having access to a thread-local store from anywhere.
Storing and retrieving data at the stack level has several use cases. One use case is associating logs with web requests.
When faced with an error log in a production application, it’s extremely helpful to be able to see all other logs that were part of the same request/response cycle. This enables you to trace the request as it traveled through your application, so you can gather the conditions and variables that led to a specific bug.
In order to do that, we need to assign some sort of unique identifier to each request. A Universally Unique Identifier (UUID) is what we're looking for, and the `uuid` library gives us just that. More specifically, we'll generate a version 4 UUID for each request.
Let’s expand on our example Node.js application from above. First, we’ll create a module that exports an AsyncLocalStorage instance:
```javascript
// async-context.js
const { AsyncLocalStorage } = require('async_hooks');

const context = new AsyncLocalStorage();

module.exports = context;
```
Next, we'll expand the `logger.js` file with an Express middleware function that creates a child logger with a unique request ID and adds it to the context store:
```javascript
// logger.js
const pino = require('pino');
const uuid = require('uuid');
const context = require('./async-context.js');

// Create a logging instance
const logger = pino({
  level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
});

// Proxify logger instance to use child logger from context if it exists
module.exports.logger = new Proxy(logger, {
  get(target, property, receiver) {
    target = context.getStore()?.get('logger') || target;
    return Reflect.get(target, property, receiver);
  },
});

// Generate a unique ID for each incoming request and store a child logger
// in context to always log the request ID
module.exports.contextMiddleware = (req, res, next) => {
  const child = logger.child({ requestId: uuid.v4() });
  const store = new Map();
  store.set('logger', child);

  return context.run(store, next);
};
```
Child loggers are a way to add state to a logger. The properties you create a child logger with are included in every log line written with that child logger, which makes child loggers an excellent fit for attaching a request ID to logs.
Additionally, we use a Proxy so that the exported logging instance delegates to the child logger from the context whenever one exists.
Finally, let's use the middleware function in `server.js`:

```javascript
// server.js
const express = require('express');
const { logger, contextMiddleware } = require('./logger.js');
const db = require('./db.js');

const PORT = process.env.PORT || 3000;

const app = express();

// Attach a unique request ID to every log line
app.use(contextMiddleware);

app.get('/users/:id', async (req, res) => {
  let userId = req.params.id;
  if (isNaN(userId)) {
    logger.warn({ userId }, 'Invalid user ID');
    return res.status(400).send('Invalid user ID');
  } else {
    userId = Number(userId);
  }

  try {
    logger.info({ userId }, 'Fetching user from DB');
    const user = await db.getUser({ userId });

    if (!user) {
      logger.warn({ userId }, 'User not found');
      return res.status(404).send('User not found');
    }

    logger.debug({ user }, 'User found, sending to client');
    return res.status(200).json(user);
  } catch (error) {
    logger.error(error, 'Failed to fetch user from DB');
    return res.status(500).send('An error occurred while fetching user');
  }
});

app.listen(PORT, () => {
  logger.info(`Server listening at http://localhost:${PORT}`);
});
```
That’s it! We now have a unique request ID attached to every log line.
```
{...,"requestId":"da672623-818b-4b18-89ca-7eb073accbfe","userId":1,"msg":"Fetching user from DB"}
{...,"requestId":"da672623-818b-4b18-89ca-7eb073accbfe","user":{...},"msg":"User found, sending to client"}
{...,"requestId":"01107c17-d3c8-4e20-b1ed-165e279a9f75","userId":2,"msg":"Fetching user from DB"}
{...,"requestId":"01107c17-d3c8-4e20-b1ed-165e279a9f75","user":{...},"msg":"User found, sending to client"}
```
You can find a complete working example in this repository on GitHub.
In addition to the request ID, you might also find it useful to add other data to every log line, such as a user ID and/or email. A good place to do that is in authentication middleware, right after a request is authenticated.
Now you have everything in place to follow the breadcrumbs of every obscure bug you will encounter.