Note: This is part of the “Javascript and Functional Programming” series on learning functional programming techniques in JavaScript ES6+. Check out the previous post on function currying (Part 4), or start from the beginning here.
Let’s get practical! Previously, we examined higher order functions. As a quick reminder, a higher order function is a function that accepts another function as a parameter.
JavaScript arrays have several built-in methods that are higher order functions.
This post will discuss the 3 most popular ones: filter, map and reduce. 🙂🙂
The filter() method creates a new array with all elements that pass the test implemented by the provided function.
That was straight out of the docs. In friendlier terms, filter is a method that runs on a given collection/array and filters its items based on a function that returns a boolean (true or false) value.
Let’s jump into an example and then step through what is actually happening. We will assume the following collection for our example.
const iceCreams = [
  { flavor: 'pineapple', color: 'white' },
  { flavor: 'strawberry', color: 'red' },
  { flavor: 'watermelon', color: 'red' },
  { flavor: 'kiwi', color: 'green' },
  { flavor: 'mango', color: 'yellow' },
  { flavor: 'pear', color: 'green' }
];
Let’s use the filter method to create a new array with only red colored ice cream. Remember, filter creates a new array, so we need to save the output into a variable in order to log it later.
const favoriteFlavors = iceCreams.filter(iceCream => iceCream.color === 'red');
console.log(favoriteFlavors);
Running this snippet in the console will result in the following output:
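[
  { flavor: 'strawberry', color: 'red' },
  { flavor: 'watermelon', color: 'red' }
]
(Your console’s exact formatting may differ slightly, but those are the two red ice creams.)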
If you’re feeling a little confused that’s fine. Let’s walk through it :)
The .filter method accepts a callback function. That callback is invoked with up to three arguments, in the following order: the current element, the element’s index, and the array filter was called on.
Altogether, we need to supply a function with the following signature:
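Here is a rough sketch (the names callback, element, index and array are only illustrative; most of the time you will only need the first parameter):
// Return true to keep the element, false to drop it
const callback = (element, index, array) => {
  return true;
};
const everything = iceCreams.filter(callback); // keeps every element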
Notice that in our example we passed an anonymous function (it is not named!). We could have passed a named function as well, like so:
const getRed = icecream => icecream.color === 'red';
const favoriteFlavors = iceCreams.filter(getRed);
console.log(favoriteFlavors);
The output will look like this:
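[
  { flavor: 'strawberry', color: 'red' },
  { flavor: 'watermelon', color: 'red' }
]
Exactly the same two red ice creams as before.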
Please note that we passed the getRed function itself; we never call it ourselves. filter() invokes it for us, handing it each element in turn.
filter is a great way to quickly parse data, leaving only what is relevant to us. The idea of passing a function that adheres to a certain structure may seem a bit awkward if you’re reading this for the first time, but it ends up being quite powerful. First of all, this structure makes it easier for developers to read each other’s code. Second of all, we are going to use this same pattern right now, when examining the .map array method 😎😎😎😎😎
The **map()** method creates a new array with the results of calling a provided function on every element in the calling array. Essentially, the map method creates a new array based on the initial array. A quick look at the map() signature:
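Roughly, it looks like this (the callback below simply returns each element unchanged; in real code you would return a transformed value):
const newArray = [1, 2, 3].map((element, index, array) => {
  // whatever we return here becomes the element in the new array
  return element;
});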
Just like filter(), map() is another higher order function, and just like with filter() we must pass a function to it. But instead of filtering items out of the original array, we transform the data.
🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔
Let’s look at an example to clarify! We will use the array from the previous example.
const iceCreams = [
  { flavor: 'pineapple', color: 'white' },
  { flavor: 'strawberry', color: 'red' },
  { flavor: 'watermelon', color: 'red' },
  { flavor: 'kiwi', color: 'green' },
  { flavor: 'mango', color: 'yellow' },
  { flavor: 'pear', color: 'green' }
];
Let’s suppose we want to create a new array of strings, with all the flavors of ice cream. Before we use map() let’s try doing it the old school way with a classic for loop.
let flavors = [];
for (let i = 0; i < iceCreams.length; i++) {
  flavors.push(iceCreams[i].flavor);
}
console.log(flavors);
Our output from the for loop code snippet
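["pineapple", "strawberry", "watermelon", "kiwi", "mango", "pear"]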
Technically, this works well for this trivial example. But do you see some of the problems that creep up when using for loops like this? My pessimism sees 3 opportunities to mess up here 😳😳😳
let i = 0;
i < iceCreams.length
i++
In different scenarios, and depending on our data, it is easy to make mistakes here. It could be a typo as simple as forgetting a semicolon, or mistakenly initializing the iterator to the wrong value.
Let’s try the same thing with the map() method.
const iceCreams = [
  { flavor: 'pineapple', color: 'white' },
  { flavor: 'strawberry', color: 'red' },
  { flavor: 'watermelon', color: 'red' },
  { flavor: 'kiwi', color: 'green' },
  { flavor: 'mango', color: 'yellow' },
  { flavor: 'pear', color: 'green' }
];
const flavors = iceCreams.map(icecream => icecream.flavor);
console.log(flavors);
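// Logs: ["pineapple", "strawberry", "watermelon", "kiwi", "mango", "pear"]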
OMG?!!
Our output is identical, but notice how clean and terse the code is. No indexes, no semicolons, no fiddling with the array length! Since I started programming with map() and filter(), I’ve noticed that their advantage over plain loops grows with the complexity of our collections/arrays. These methods are definitely worth integrating into your day-to-day programming workflow.
Finally! As Christian Sakai mentioned in a previous comment, reduce is the granddad / grandma of all of these methods 👵🏻👵🏻👵🏻👵🏻👵🏻👵🏻👵🏻
According to the documentation:
The **reduce()** method applies a function against an accumulator and each element in the array (from left to right) to reduce it to a single value.
This is pretty cryptic! Let’s “reduce” the meaning of this to something simple. Let’s circle back to the filter() and map() methods. What do they have in common? Essentially, they transform a collection/array into a different collection/array. But these methods are specific in how they transform the data. In comparison, reduce() is like the Swiss Army knife of list transformations. It can be used to express any transformation! In fact, we can even use reduce() to implement map() and filter(). Enough talking! Let’s take a look at the classic reduce example of summing up an array 🙃 🙃 🙃 🙃
First, with a for loop:
const arr = [10,20,30]
let total = 0;
for (let i = 0; i < arr.length; i++) {
  total += arr[i];
}
console.log(total);
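// Logs: 60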
Now with reduce():
const arr = [10, 20, 30];
const reducerFunction = (acc, currentItem) => acc + currentItem;
const sum = arr.reduce(reducerFunction, 0);
console.log(sum);
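// Logs: 60 (the same result as the for loop)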
It is clear that we got the same result. Let’s break it down. The built-in array method reduce() requires a callback function as its first parameter. This callback has a predetermined shape: it accepts up to four arguments, similar in spirit to the callbacks that filter() and map() expect. Let’s look at the expected reducer function signature:
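A rough sketch (in most cases you only ever need the first two parameters):
const reducerSketch = (accumulator, currentValue, currentIndex, array) => {
  // whatever we return becomes the accumulator on the next iteration
  return accumulator;
};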
reduce()’s second parameter is optional: the initialValue. When we iterate over an array and reduce it to a single value, it is a good idea to supply that initial value explicitly. In our array summation example, we set the initialValue to zero. What would happen had we not supplied the initial value?
const arr = [10, 20, 30];
const reducerFunction = (acc, currentItem) => acc + currentItem;
// Not instantiating the initial value!
const sum = arr.reduce(reducerFunction);
console.log(sum);
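// Still logs: 60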
We get the same value! Why is that? According to the documentation, when no initialValue is supplied, the first element of the array is used as the initial accumulator and iteration starts from the second element. Although this example works out without supplying an initial value, I recommend getting used to always supplying one. It will prevent future bugs, and it also forces you to think about whether the reduction you’re trying to do on your array actually makes sense.
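For example, if the array ever turns out to be empty and no initial value is supplied, reduce() simply throws:
const emptyArr = [];

// Throws a TypeError ("Reduce of empty array with no initial value"):
// emptyArr.reduce(reducerFunction);

// With an initial value we get a sensible result instead:
console.log(emptyArr.reduce(reducerFunction, 0)); // 0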
Earlier I claimed that reduce() is the grandfather of list transformation methods, because we can use it to implement all of them. Let’s prove this!
map with reduce()
// ************* Map with Reduce *************
const data = [10, 20, 30];
const tripledWithMap = data.map(item => {
  return item * 3;
});
const tripledWithReduce = data.reduce((acc, value) => {
  acc.push(value * 3);
  return acc;
}, []);
console.log(tripledWithMap, tripledWithReduce);
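// Logs: [ 30, 60, 90 ] [ 30, 60, 90 ]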
filter with reduce()
// ************* Filter with Reduce *************
const data2 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const evenWithFilter = data2.filter(item => {
  return item % 2 === 0;
});
const evenWithReduce = data2.reduce((acc, value) => {
  if (value % 2 === 0) {
    acc.push(value);
  }
  return acc;
}, []);
console.log(evenWithFilter, evenWithReduce);
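// Logs: [ 2, 4, 6, 8, 10 ] [ 2, 4, 6, 8, 10 ]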
Let’s do something a bit more useful with our reduce() method. Let’s calculate the results of a vote for the best ice cream flavor 🍦🍦🍦🍦🍓🍋 🍌 🍉 🍇
const flavours = ["strawberry", "strawberry", "kiwi", "kiwi", "kiwi", "strawberry", "mango", "kiwi", "banana"];

const votes = {};
const reducer = (votes, vote) => {
  votes[vote] = !votes[vote] ? 1 : votes[vote] + 1;
  return votes;
};
const outcome = flavours.reduce(reducer, votes);

// Output
console.log("Strawberry: ", outcome.strawberry);
console.log("Kiwi: ", outcome.kiwi);
console.log("Mango: ", outcome.mango);
console.log("Banana: ", outcome.banana);
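Tallying up the votes above, the output should be:
// Strawberry: 3
// Kiwi: 4
// Mango: 1
// Banana: 1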
As with every use of reduce(), we call the reduce() method on an array and supply a callback and an initialValue. Notice that in this example, we set the initial value to an empty object. Without that initialization this would have failed!
One more handy use of reduce() is flattening data. First of all, let’s define what flattening means for our example. Flattening looks like this:
[[a, b, c], [d, e, f], [g, h, i]] -> [a, b, c, d, e, f, g, h, i]
Essentially we want to merge all the arrays in the order in which they appear. reduce() solves this elegantly 🤗🤗
const letterArr = [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']];
const flattened = letterArr.reduce((acc, val) => {
  return acc.concat(val);
}, []);
console.log(flattened);
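// Logs: [ 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i' ]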
A common pattern is to chain list transformations. Although chaining makes complicated transformations easier to read, it can be slow when dealing with very large arrays. Let’s look at an example.
let bigData = [];
for (let i = 0; i < 1000000; i++) {
  bigData[i] = i;
}
// Slow
let filterBegin = Date.now();
const filterMappedBigData = bigData
  .filter(value => value % 2 === 0)
  .map(value => value * 2);
let filterEnd = Date.now();
let filtertimeSpent = (filterEnd - filterBegin) / 1000 + " secs";

// Fast
let reducedBegin = Date.now();
const reducedBigData = bigData.reduce((acc, value) => {
  if (value % 2 === 0) {
    acc.push(value * 2);
  }
  return acc;
}, []);
let reducedEnd = Date.now();
let reducedtimeSpent = (reducedEnd - reducedBegin) / 1000 + " secs";

console.log("filtered Big Data:", filtertimeSpent);
console.log("reduced Big Data:", reducedtimeSpent);
Why is the chaining of filter and map so much slower? First, filter() needs to iterate through the whole array (1,000,000 elements) and keeps half of them. Then map() iterates through the remaining 500,000 elements and creates yet another new array. In comparison, reduce() iterates through the array only once! Avoiding repeated passes over the same data set is what makes the reduce() version more performant.
If you’re interested in more tech and startup related content you can follow me on Medium, Instagram, GitHub and LinkedIn.