In the last few years, JavaScript has gone through some major changes. The wider adoption of ES6 and the rise of modern frameworks have shifted the front-end industry’s focus to a more declarative approach.
Imperative programming focuses on the commands for your computer to run. Declarative programming focuses on what you want from your computer. While an imperative approach can often be more performant by being closer to the metal, unless you are dealing with large datasets the advantage is likely negligible.
By manipulating and digesting your arrays in a declarative fashion, you can produce much more readable code.
Here are a few ways to do that.
.reduce

Perhaps the most powerful array method is .reduce. It works by calling a provided function against each item of the array. This callback accepts up to four arguments (although I find myself usually only using the first two):
- previousValue, often referred to as the ‘accumulator’. This is the value returned the last time the callback was called.
- currentValue, the current item in the array.
- currentIndex, the index of the current item in the array.
- array, the full array being traversed.

In addition to this callback, the method accepts an optional initial value as a second argument. If an initial value is not provided, the first value in the array will be used.
A very simple example is a reducer for getting the sum of a collection of numbers.
```javascript
const numbers = [1, 2, 3, 4, 5];
const sum = numbers.reduce(
  (accumulator, currentValue) => accumulator + currentValue
);
console.log(sum); // 15
```
The callback adds the currentValue to the accumulator. Since no initial value is provided, it begins with the first value in the array.
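If an initial value is provided, the accumulator starts there instead. This also makes reducing an empty array safe — without an initial value, .reduce throws a TypeError on an empty array. A quick sketch:

```javascript
const numbers = [1, 2, 3, 4, 5];

// Start the accumulator at 100 instead of the first element
const sumPlus100 = numbers.reduce(
  (accumulator, currentValue) => accumulator + currentValue,
  100
);
console.log(sumPlus100); // 115

// With an initial value, reducing an empty array simply returns it
const emptySum = [].reduce(
  (accumulator, currentValue) => accumulator + currentValue,
  0
);
console.log(emptySum); // 0
```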
.map

.map will similarly accept a callback to be called against each element in an array. This callback accepts three arguments: currentValue, currentIndex, and array.
Rather than keeping track of an accumulator, the map method returns an array of equal length to the original. The callback function “maps” the value of the original array into the new array.
An example of a simple map callback is one that returns the square of each number.
```javascript
const numbers = [1, 2, 3, 4, 5];
const squares = numbers.map(currentValue => currentValue * currentValue);
console.log(squares); // [1, 4, 9, 16, 25]
```
.filter

.filter accepts a callback with the same arguments as .map. Rather than ‘transforming’ each value in the array like a .map, the filter callback should return a truthy or falsy value. If the callback returns a truthy value, then that element will appear in the new array.
An example might be filtering a list of numbers down to those divisible by 3.
```javascript
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9];
const divisibleByThree = numbers.filter(
  currentValue => currentValue % 3 === 0
);
console.log(divisibleByThree); // [3, 6, 9]
```
Naming your array method callbacks is perhaps the single biggest readability improvement you can make.
Compare these two:
```javascript
const newEngland = [0, 3, 6, 19, 6];
const atlanta = [0, 21, 7, 0, 0];

const toScore = (accumulator, value) => accumulator + value;

const atlantaScore = atlanta.reduce(
  (accumulator, value) => accumulator + value
);
const newEnglandScore = newEngland.reduce(toScore);

console.log(Math.max(newEnglandScore, atlantaScore));
```
By giving your callback a name, you can immediately get a better understanding of what the code is trying to accomplish. When naming, there are a couple things to keep in mind.
Be consistent. Have a good naming convention. I like to name all of my .reduce and .map callbacks as toWhatever. If I am reducing an array of numbers to a sum, toSum. If I am mapping an array of user objects to names, toFullName. When using .filter, I like to name my callbacks as isWhatever or isNotWhatever. If I am filtering down to only items that are perfect squares, isPerfectSquare.
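Putting those conventions together in a quick sketch (the data below is just for illustration):

```javascript
const users = [
  { first: 'Leslie', last: 'Knope' },
  { first: 'Ron', last: 'Swanson' },
];

// to* for .map callbacks
const toFullName = ({ first, last }) => `${first} ${last}`;
console.log(users.map(toFullName)); // ['Leslie Knope', 'Ron Swanson']

const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9];

// is* for .filter callbacks
const isPerfectSquare = n => Number.isInteger(Math.sqrt(n));
console.log(numbers.filter(isPerfectSquare)); // [1, 4, 9]
```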
Be concise. Your callback should theoretically only be doing one job — try and capture that job with a descriptive yet brief name.
Names like accumulator and currentValue are easy to reach for when authoring code — they are so generic that they are never wrong. Because they are so generic, however, they don’t help the reader of the code.
Extending this even further — if you are manipulating an array of objects and are only using a few values, it might be more readable to use object destructuring in the parameter list.
```javascript
const cart = [
  {
    name: 'Waterloo Sparkling Water',
    quantity: 4,
    price: 1,
  },
  {
    name: 'High Brew Coffee',
    quantity: 2,
    price: 2,
  },
];

const toTotal = (totalPrice, { quantity, price }) =>
  totalPrice + quantity * price;

const total = cart.reduce(toTotal, 0);
console.log(total); // 8
```
Earlier I mentioned that .reduce was perhaps the most powerful array method. That’s because, due to its concept of an accumulator, it is infinitely flexible in what it can return. A .map must return an array of equal length to the original. A .filter must return a subset of the original. With .reduce you can do everything that .map and .filter do and more… so why not always use .reduce?
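To see that flexibility, here is a sketch of .map- and .filter-style behavior rebuilt on top of .reduce:

```javascript
const numbers = [1, 2, 3, 4, 5];

// .map via .reduce: push a transformed value for every element
const squares = numbers.reduce(
  (accumulator, n) => [...accumulator, n * n],
  []
);
console.log(squares); // [1, 4, 9, 16, 25]

// .filter via .reduce: only push elements that pass the test
const evens = numbers.reduce(
  (accumulator, n) => (n % 2 === 0 ? [...accumulator, n] : accumulator),
  []
);
console.log(evens); // [2, 4]
```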
You should use .map and .filter precisely because of their limitations. A reader of your code will know when they see a .filter that it will return a subset, but if they see a .reduce they may need to look over the callback before knowing this. Use the most specific method for the job.
Most of the examples so far have been fairly contrived to show how each of these works. Here is an example that more closely resembles a real life scenario: taking an array of objects, similar to what you might receive from an API, and formatting them for consumption on your app.
In this case, let’s say that we are receiving a selection of nearby restaurants from an API.
```javascript
const restaurants = [
  {
    name: 'Pizza Planet',
    cuisine: 'Pizza',
    hours: { open: 11, close: 22 },
  },
  {
    name: "JJ's Diner",
    cuisine: 'Breakfast',
    hours: { open: 7, close: 14 },
  },
  {
    name: "Bob's Burgers",
    cuisine: 'Burgers',
    hours: { open: 11, close: 21 },
  },
  {
    name: 'Central Perk',
    cuisine: 'Coffee',
    hours: { open: 6, close: 20 },
  },
  {
    name: 'Monks Cafe',
    cuisine: 'American',
    hours: { open: 6, close: 20 },
  },
];
```
We want to digest (pun intended) this data by creating a list on our website of all nearby restaurants that are both currently open and serve food.
One method of achieving this is through a single large reducer.
```javascript
const currentTime = 15; // 3:00 PM

const toOpenRestaurants = (openRestaurants, restaurant) => {
  const {
    name,
    cuisine,
    hours: { open, close },
  } = restaurant;

  const isOpen = currentTime > open && currentTime < close;
  const isFood = cuisine !== 'Coffee';

  return isFood && isOpen ? [...openRestaurants, name] : openRestaurants;
};

const openRestaurants = restaurants.reduce(toOpenRestaurants, []);
console.log(openRestaurants);
// ["Pizza Planet", "Bob's Burgers", "Monks Cafe"]
```
However, this reducer is doing three things: checking whether the restaurant is open, checking that it’s a valid establishment (not coffee), and mapping to the name.
Here is the same functionality written with single-purpose callbacks.
```javascript
const currentTime = 15; // 3:00 PM

const isOpen = ({ hours: { open, close } }) =>
  currentTime > open && currentTime < close;
const isFood = ({ cuisine }) => cuisine !== 'Coffee';
const toName = ({ name }) => name;

const openRestaurants = restaurants
  .filter(isOpen)
  .filter(isFood)
  .map(toName);

console.log(openRestaurants);
// ["Pizza Planet", "Bob's Burgers", "Monks Cafe"]
```
There are some other advantages to splitting up your functionality into multiple callbacks. If the logic of any of your filters changes, you can easily isolate exactly where that change needs to occur. You can also reuse individual callbacks elsewhere (for example, filtering by isOpen and isPizza).
This method also makes for easier testing — you can write unit tests for each of your building blocks, and when adding new functionality you simply reuse these blocks without worrying about anything breaking.
Imperative and declarative approaches both have their place. If you are going through large amounts of data and every millisecond counts, stick with while and for loops. That is what is happening behind the scenes anyway.
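For comparison, here is a sketch of the restaurant example written imperatively, with a trimmed-down copy of the data from above:

```javascript
const restaurants = [
  { name: 'Pizza Planet', cuisine: 'Pizza', hours: { open: 11, close: 22 } },
  { name: 'Central Perk', cuisine: 'Coffee', hours: { open: 6, close: 20 } },
  { name: "Bob's Burgers", cuisine: 'Burgers', hours: { open: 11, close: 21 } },
];
const currentTime = 15; // 3:00 PM

// One pass, no intermediate arrays — but the intent is buried in the loop body
const openRestaurants = [];
for (let i = 0; i < restaurants.length; i++) {
  const { name, cuisine, hours: { open, close } } = restaurants[i];
  if (cuisine !== 'Coffee' && currentTime > open && currentTime < close) {
    openRestaurants.push(name);
  }
}
console.log(openRestaurants); // ['Pizza Planet', "Bob's Burgers"]
```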
I would argue that in most cases, code readability (and therefore maintainability) is worth the tradeoff. By being intentional with how you use these callbacks, you can maximize that benefit.