(and how to implement them)

When working with functional programming, a good rule of thumb is to always create new objects instead of changing old ones. In doing so we can be sure that our meddling with an object’s structure won’t affect some seemingly unrelated part of the application, which in turn makes the entire codebase more predictable.
How exactly can we be sure that the changes we make to an object do not affect the code elsewhere? Removing the unwanted references altogether seems like a good idea. To get rid of a reference we need to copy all of the object’s properties to a new object. There are many ways to do this and each of them yields a slightly different result. We are going to take a look at the most popular ones: shallow copy, deep copy, merging and assigning.
For every method we analyze, we will look at two different variations — each having a mildly different outcome. Also, on top of listing the pros and cons of every approach, we are going to compare these variations in terms of their performance. I am also going to provide links to the production-ready equivalents to use in an actual, real-life application.
If you wish to see the entire code of a given solution, just click on its title. The link will redirect you to the GitHub repository.

1. Shallow copy
To shallow copy an object means to simply create a new object with the exact same set of properties. We call the copy shallow because the properties in the target object can still hold references to those in the source object.
Before we get going with the implementation, however, let’s first write some tests, so that later we can check if everything is working as expected.
Tests
const testShallow = (shallowFn: ShallowFn) => {
  const obj1 = {
    prop1: true,
    prop2: {
      prop3: true,
    },
  }

  const copiedObj1 = shallowFn(obj1)

  expect(copiedObj1).not.toBe(obj1)
  expect(copiedObj1.prop2).toBe(obj1.prop2)
  expect(copiedObj1).toEqual(obj1)
}

describe('shallow v1 (spread operator)', () => {
  it('copies an object shallowly', () => {
    return testShallow(shallowv1)
  })
})

describe('shallow v2 (copy props)', () => {
  it('copies an object shallowly', () => {
    return testShallow(shallowv2)
  })
})
Version 1
In this version, we are going to copy the object using the spread operator.
function shallow<T extends object>(source: T): T {
  return {
    ...source,
  }
}
Version 2
Here we create a new object and copy every property from the source object.
function shallow<T extends object>(source: T): T {
  const copy = {} as T

  Object.keys(source).forEach((key) => {
    copy[key] = source[key]
  })

  return copy
}
Performance test
As we can see, the first version with the spread operator is faster. This is likely due to the spread operator having been optimized for this use specifically.
Click here to run the tests yourself.
When to use
Shallow copying should be used whenever we want to lose the reference to the source object but hardly care about references to any nested properties, e.g. when returning from a function.
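To illustrate, here is a quick sketch (using the spread-based version from above) showing exactly which references survive a shallow copy; the user object is just made-up example data:

```typescript
function shallow<T extends object>(source: T): T {
  return { ...source }
}

const user = { name: 'Alice', settings: { theme: 'dark' } }
const copy = shallow(user)

console.log(copy === user)                   // false: the top-level reference is new
console.log(copy.settings === user.settings) // true: nested objects are still shared

copy.settings.theme = 'light'
console.log(user.settings.theme)             // 'light': the source was affected too
```

This is precisely why mutating a nested property of a shallow copy is unsafe.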
Production-ready equivalent
Object.assign() or lodash.clone().
2. Deep copy
When we make a deep copy we create a completely new object which holds no references to the original.
Tests
const testDeep = (deepFn: DeepFn) => {
  const obj1 = { one: true }

  expect(deepFn(obj1)).not.toBe(obj1)

  const obj2 = {
    prop1: {
      prop2: {
        prop3: {
          prop: true,
        },
        prop4: [1, 2, 3, 4, 5],
      },
    },
  }

  const copiedObj2 = deepFn(obj2)

  expect(copiedObj2).not.toBe(obj2)
  expect(copiedObj2.prop1.prop2.prop4).not.toBe(obj2.prop1.prop2.prop4)
  expect(copiedObj2).toEqual(obj2)
}

describe('deep v1 (recursively)', () => {
  it('copies an object completely', () => {
    return testDeep(deepv1)
  })
})

describe('deep v2 (JSON.parse/JSON.stringify)', () => {
  it('copies an object completely', () => {
    return testDeep(deepv2)
  })
})
Version 1
Our first implementation works recursively. We write a deep function, which checks the type of the argument sent to it and either calls an appropriate function (for an array or an object) or simply returns the value of the argument (if it is neither an array nor an object).
function deep<T>(value: T): T {
  if (typeof value !== 'object' || value === null) {
    return value
  }

  if (Array.isArray(value)) {
    return deepArray(value)
  }

  return deepObject(value)
}
The deepObject function takes all of the keys of an object and iterates over them, recursively calling the deep function for each value.
function deepObject<T>(source: T) {
  const result = {}

  Object.keys(source).forEach((key) => {
    const value = source[key]

    result[key] = deep(value)
  })

  return result as T
}
Similarly, deepArray iterates over the provided array, calling deep for every value in it.
function deepArray<T extends any[]>(collection: T) {
  return collection.map((value) => {
    return deep(value)
  })
}
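Putting the three functions together, a small sanity check might look like the following; the code is a condensed restatement of the trio above, and the source object is made-up example data:

```typescript
function deep<T>(value: T): T {
  if (typeof value !== 'object' || value === null) {
    return value
  }

  if (Array.isArray(value)) {
    return deepArray(value) as any
  }

  return deepObject(value)
}

function deepObject<T>(source: T): T {
  const result = {} as any

  Object.keys(source as object).forEach((key) => {
    result[key] = deep((source as any)[key])
  })

  return result
}

function deepArray<T extends any[]>(collection: T) {
  return collection.map((value) => deep(value))
}

const source = { list: [1, 2, 3], nested: { flag: true } }
const copy = deep(source)

console.log(copy === source)               // false
console.log(copy.nested === source.nested) // false: nested references are gone too

copy.list.push(4)
console.log(source.list.length)            // 3: the original is untouched
```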
Version 2
Now, let’s take a different approach. Our goal is to create a new object without any reference to the previous one, right? Why don’t we use the JSON object then? First, we stringify the object, then parse the resulting string. What we get is a new object totally unaware of its origin.
Note: In the previous solution the methods of the object are retained, but here they are not. The JSON format does not support functions, so they are removed altogether.
function deep<T extends object>(source: T): T {
  return JSON.parse(JSON.stringify(source))
}
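A quick sketch of that caveat in action; the object below is made-up example data, and it also shows that Dates are serialized to strings along the way:

```typescript
function deep<T extends object>(source: T): T {
  return JSON.parse(JSON.stringify(source))
}

const original = {
  count: 1,
  greet() {
    return 'hi'
  },
  createdAt: new Date(0),
}

const copy = deep(original)

console.log('greet' in copy)       // false: functions are dropped entirely
console.log(typeof copy.createdAt) // 'string': Dates become ISO strings
console.log(copy.count)            // 1
```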
Performance test
We can see that the first version is faster.
Click here to run the tests yourself.
When to use
Deep copying should be used whenever we feel like there might be a need to change a given object on a deeper level (nested objects/arrays). I would, however, recommend trying to use it only when absolutely necessary since it can often slow the program down when working with big collections of objects.
Production-ready equivalent
lodash.cloneDeep().
3. Assign
Here, we will take multiple sources
and shallow copy their respective properties to a single target, therefore this is going to look very much like an implementation of Object.assign
.
Tests
describe('assign v1 (copy props)', () => {
  it('assigns objects properties correctly', () => {
    const obj1 = { one: true }
    const obj2 = { two: true }

    expect(assignv1(obj1, obj2)).toEqual({ one: true, two: true })
  })

  it('mutates the target', () => {
    const obj1 = { one: true }
    const obj2 = { two: true }

    assignv1(obj1, obj2)

    expect(obj1).toEqual({ one: true, two: true })

    const obj3 = { three: true }
    const obj4 = { four: true }
    const obj5 = assignv1({}, obj3, obj4)

    expect(obj5).not.toBe(obj3)
    expect(obj5).not.toBe(obj4)
    expect(obj5).toEqual({ three: true, four: true })
  })
})

describe('assign v2 (spread operator)', () => {
  it('assigns objects properties correctly', () => {
    const obj1 = { one: true }
    const obj2 = { two: true }

    expect(assignv2(obj1, obj2)).toEqual({ one: true, two: true })
  })

  it('does not mutate the target', () => {
    const obj1 = { one: true }
    const obj2 = { two: true }
    const obj3 = assignv2(obj1, obj2)

    expect(obj1).not.toEqual({ one: true, two: true })
    expect(obj3).not.toBe(obj1)
    expect(obj3).toEqual({ one: true, two: true })
  })
})
Version 1
Here, we just take each source object and copy its properties to the target, which we normally pass as {} in order to prevent mutation.
const assign = (target: object, ...sources: object[]) => {
  sources.forEach((source) => {
    return Object.keys(source).forEach((key) => {
      target[key] = source[key]
    })
  })

  return target
}
Version 2
This is a safe version in which, instead of mutating the target object, we create an entirely new one which we later assign to a variable. This means we don’t need to pass the target argument at all. Unfortunately, this version does not work with the keyword this, because this can’t be reassigned.
const assign = (...sources: object[]) => {
  return sources.reduce((result, current) => {
    return {
      ...result,
      ...current,
    }
  }, {})
}
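For example, here is how the spread-based version behaves with some made-up config objects; note that none of the sources are mutated:

```typescript
const assign = (...sources: object[]) => {
  return sources.reduce((result, current) => {
    return {
      ...result,
      ...current,
    }
  }, {})
}

const defaults = { theme: 'dark', lang: 'en' }
const overrides = { lang: 'pl' }

const settings = assign(defaults, overrides)

console.log(settings)      // { theme: 'dark', lang: 'pl' }
console.log(defaults.lang) // 'en': the first source was left untouched
```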
Performance test
The first version is much faster because it directly alters ("mutates") the target object, whereas the second one creates a new object for each source.
Click here to run the tests yourself.
When to use
Version 1 is the standard implementation of an assign function. By passing {} as the target we can be sure that no object is mutated. We would like to use assign whenever there is a need to assign some new properties to an existing object, for example:
// safe
const props = Object.assign({}, defaultProps, passedProps)

// with mutations
const props = {}
Object.assign(props, defaultProps, passedProps)
Production-ready equivalent
Object.assign() or lodash.assign().
4. Merge
This function works like assign, but instead of replacing properties in the target it actually adjoins them. If a value is either an array or an object, the function proceeds to merge the properties recursively as well. Non-object-like properties (neither arrays nor objects) are simply assigned, and undefined properties are omitted altogether.
Tests
const testMerge = (mergeFn: MergeFn) => {
  const obj1 = {
    prop1: {
      prop2: {
        prop3: [1, 2, 6],
        prop4: true,
        prop5: false,
        prop6: [{ abc: true, abcd: true }],
      },
    },
  }

  const obj2 = {
    prop1: {
      prop2: {
        prop3: [1, 2, undefined, 4, 5],
        prop4: false,
        prop6: [{ abc: false }],
      },
      prop7: true,
    },
  }

  expect(mergeFn({}, obj1, obj2)).toEqual({
    prop1: {
      prop2: {
        prop3: [1, 2, 6, 4, 5],
        prop4: false,
        prop5: false,
        prop6: [{ abc: false, abcd: true }],
      },
      prop7: true,
    },
  })
}

describe('merge v1 (recursively)', () => {
  it('merges provided objects into one', () => {
    return testMerge(mergev1)
  })
})

describe('merge v2 (flatten props)', () => {
  it('merges provided objects into one', () => {
    return testMerge(mergev2)
  })
})
Version 1
What we are going to look at now bears some resemblance to the first version of our deep copy function, because we are once again going to rely on recursion.
The mergeValues function accepts two arguments: target and source. If both values are objects, we call and return mergeObjects with the aforementioned target and source as arguments. Analogously, when both values are arrays, we call and return mergeArrays. If the source is undefined, we keep whatever value was previously there, which means we return the target argument. If none of the above applies, we just return the source argument.
// isObject is a small helper: true for plain objects (not null, not an array)
function isObject(value: any): value is object {
  return typeof value === 'object' && value !== null && !Array.isArray(value)
}

function mergeValues(target: any, source: any) {
  if (isObject(target) && isObject(source)) {
    return mergeObjects(target, source)
  }

  if (Array.isArray(target) && Array.isArray(source)) {
    return mergeArrays(target, source)
  }

  if (source === undefined) {
    return target
  }

  return source
}
Both mergeArrays and mergeObjects work the same way: we take the source properties and set them under the same key/index in the target.
function mergeObjects(target: object, source: object) {
  Object.keys(source).forEach((key) => {
    const sourceValue = source[key]
    const targetValue = target[key]

    target[key] = mergeValues(targetValue, sourceValue)
  })

  return target
}

function mergeArrays(target: any[], source: any[]) {
  source.forEach((value, index) => {
    target[index] = mergeValues(target[index], value)
  })

  return target
}
Now all that is left to do is to create a merge function:
const merge = (target: object, ...sources: object[]) => {
  sources.forEach((source) => {
    return mergeValues(target, source)
  })

  return target
}
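With all the pieces in place, a usage sketch might look like this. The code is a condensed restatement of the functions above, so the example is self-contained; isObject is a small helper (not shown in the article's snippets) that we assume checks for plain objects, and the merged objects are made-up example data:

```typescript
// Assumed helper: a plain object is not null and not an array
const isObject = (value: any) =>
  typeof value === 'object' && value !== null && !Array.isArray(value)

function mergeValues(target: any, source: any): any {
  if (isObject(target) && isObject(source)) return mergeObjects(target, source)
  if (Array.isArray(target) && Array.isArray(source)) return mergeArrays(target, source)
  if (source === undefined) return target
  return source
}

function mergeObjects(target: any, source: any) {
  Object.keys(source).forEach((key) => {
    target[key] = mergeValues(target[key], source[key])
  })
  return target
}

function mergeArrays(target: any[], source: any[]) {
  source.forEach((value, index) => {
    target[index] = mergeValues(target[index], value)
  })
  return target
}

const merge = (target: object, ...sources: object[]) => {
  sources.forEach((source) => mergeValues(target, source))
  return target
}

const result = merge(
  {},
  { a: { x: 1 }, list: [1, 2] },
  { a: { y: 2 }, list: [9] },
)

console.log(result) // { a: { x: 1, y: 2 }, list: [9, 2] }
```

Note how the arrays are merged index by index rather than replaced wholesale.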
Version 2
This approach may actually seem odd to you because we can easily predict that it is going to be slower. It is, however, worthwhile to take a look at different angles from which we can tackle the same problem.
The idea here is that we first want to get all the properties of the source object (even if they are nested three objects deep) and save a path to them. This will later allow us to set each value at the proper path inside the target object.
A path is an array of strings and numbers that looks something like this: ['firstObject', 'secondObject', 'propertyName'].
Here is an example of how this works:
const source = {
  firstObject: {
    secondObject: {
      property: 5,
    },
  },
}

console.log(getValue(source))
// [[[{ value: 5, path: ['firstObject', 'secondObject', 'property'] }]]]
We call the getValue function to get an array of objects that contain the paths and values of the properties. Let’s take a look at how this function works. If the argument value is null or is not object-like, we can’t go any deeper, so we return an object containing the argument value and its path. Otherwise, if the argument is object-like and not null, we can be sure it is either an array or an object. If it is an array, we call getArrayValues; if it is an object, we call getObjectValues.
function getValue(value: any, path: (number | string)[] = []) {
  if (value === null || typeof value !== 'object') {
    return {
      value,
      path: [...path],
    }
  }

  if (Array.isArray(value)) {
    return getArrayValues(value, path)
  }

  return getObjectValues(value, path)
}
Both getArrayValues and getObjectValues iterate over the properties, calling getValue for each with the current index/key appended to the path.
function getArrayValues(collection: any[], path: (number | string)[] = []) {
  return collection.map((value, index) => {
    return getValue(value, [...path, index])
  })
}

function getObjectValues(source: object, path: (number | string)[] = []) {
  return Object.keys(source).map((key) => {
    const value = source[key]

    return getValue(value, [...path, key])
  })
}
After getting the paths and values of the entire source object, we can see that they are deeply nested. We would, however, like to keep all of them in a single array, which means we need to flatten it.
Flattening an array boils down to iterating over each item and checking whether it is an array. If it is, we flatten it and then concat the value to the result array.
function flatten(collection: any[]) {
  return collection.reduce((result, current) => {
    let value = current

    if (Array.isArray(current)) {
      value = flatten(current)
    }

    return result.concat(value)
  }, [])
}
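A quick example of what flatten does with arbitrarily nested arrays (the input is made-up data):

```typescript
function flatten(collection: any[]): any[] {
  return collection.reduce((result, current) => {
    const value = Array.isArray(current) ? flatten(current) : current

    return result.concat(value)
  }, [])
}

console.log(flatten([1, [2, [3, [4]]], 5])) // [1, 2, 3, 4, 5]
```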
Now that we have covered how to get the paths, let’s consider how to set all these properties in the target object.
Let’s talk about the setAtPath function that we are going to use to set the values at their respective paths. We want to get access to the last property of the path in order to set the value. To do so, we need to walk over the path’s items, that is, the properties’ names, and each time get the property’s value.
We start the reduce with the target object, which is then available as the result argument. Each time we return the value under result[key], it becomes the result argument in the next iteration. This way, when we get to the last item of the path, the result argument is the object or array in which we set the value.
In our example the result argument, for each iteration, would be: target -> firstObject -> secondObject.
We have to keep in mind that the target might be an empty object, whereas the sources can be many levels deep. This means we might have to recreate an object’s or an array’s structure ourselves before setting a value.
function setAtPath(target: object, path: (string | number)[], value: any): any {
  return path.reduce((result, key, index) => {
    if (index === path.length - 1) {
      result[key] = value

      return target
    }

    if (!result[key]) {
      const nextKey = path[index + 1]

      result[key] = typeof nextKey === 'number' ? [] : {}
    }

    return result[key]
  }, target)
}
We set the value at the last item of the path and return the object we started with.
if (index === path.length - 1) {
  result[key] = value

  return target
}
If inside firstObject there were no secondObject, we would get undefined, and then an error if we tried to set undefined['property']. To prevent this, we first check whether result[key] even exists to begin with. If it doesn’t, we need to create it, either as an object or as an array. But how can we know which? Well, the next item in the path is the answer: if its type is 'number' (so effectively an index), we need to create an array; if it is a string, we create an object.
if (!result[key]) {
  const nextKey = path[index + 1]

  result[key] = typeof nextKey === 'number' ? [] : {}
}
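Here is the structure-recreation logic in action, with a made-up path that mixes object keys and an array index:

```typescript
function setAtPath(target: any, path: (string | number)[], value: any): any {
  return path.reduce((result, key, index) => {
    if (index === path.length - 1) {
      result[key] = value

      return target
    }

    if (!result[key]) {
      const nextKey = path[index + 1]

      // A numeric next key means the missing level must be an array
      result[key] = typeof nextKey === 'number' ? [] : {}
    }

    return result[key]
  }, target)
}

const target: any = {}

setAtPath(target, ['firstObject', 'items', 0], 'a')

console.log(JSON.stringify(target)) // {"firstObject":{"items":["a"]}}
```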
All that is left to do is to create the merge function, which ties everything together.
function merge(target: object, ...sources: object[]) {
  return flatten(
    sources.map((source) => {
      return getValue(source)
    }),
  ).reduce((result, { path, value }) => {
    if (value === undefined) {
      return result
    }

    return setAtPath(result, path, value)
  }, target)
}
Performance test
We see that, as expected, the first version runs much faster.
Click here to run the tests yourself.
When to use
Merging objects is not very common. We might, however, find ourselves in a situation where we want to, for example, merge configs with a lot of deep properties in order to set some nested default values.
Note: Merging actually doesn’t lose references to sources. If we wanted to lose them we could create a deep copy of a merged object.
Production-ready equivalent
lodash.merge().
Conclusion
To sum up, we use shallow copy when we need to get rid of a reference to an object but we care little about references to any of its deeper properties, for example when returning from a function. Deep copy ensures that there are no references to the source object or any of its properties but comes at a cost of slowing down the application. Assign is a great way to merge properties of objects together or just to assign some new values to an existing object. Finally, merge, albeit not very popular, allows us to merge properties of objects no matter how deeply nested the objects are.