When you hear “sorting algorithm,” what comes to mind? Quick sort? Merge sort? Counting sort? Bubble sort? The Sorting Hat from Harry Potter and its algorithm for placing students in a house?
All jokes aside, I think it’s important to recognize these algorithms because, well, where would we be without sorting? After all, in order to employ Binary Search, we must have our elements sorted. How would we look up words in a dictionary, or names in a phonebook? A lot of our mundane day-to-day tasks are much simpler because of sorting.
By sorting, we mean putting the elements of a list in a certain order. That could be numerical order or lexicographical (alphabetical) order.
Even though most programming languages include some kind of sort function out of the box, it’s good to take a step back and pick these algorithms apart so we actually know what’s going on behind the scenes. In this week’s blog, we will explore what’s happening under the hood with various sorting algorithms in JavaScript. We will start with less performant algorithms and work our way up to more performant ones.
Time Complexity: Best Case Ω(n), Average Case Θ(n²), Worst Case O(n²)
Space Complexity: Worst Case O(1)
Back in 2007, during his campaign run, Obama visited Google and had this little exchange with then-CEO Eric Schmidt during a Q&A session.
Eric Schmidt: What is the most efficient way to sort a million 32-bit integers?
President Obama: Well…
Eric Schmidt: Maybe — I’m sorry…
President Obama: No, no, no, no. I think — I think the bubble sort would be the wrong way to go.
Eric Schmidt: [facepalm] Come on. Who told him this? Okay. I didn’t see computer science in your background.
President Obama: We’ve got our spies in there.
Bubble sort is not the greatest; apparently even Obama was briefed on this before the interview. The reason is that we now have many more efficient alternatives for handling sorting.
Let’s go to the code!
two implementations of bubble sort
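In rough strokes, the two versions might look something like this (the function names here are illustrative):

```javascript
// Version 1: classic nested loops
function bubbleSort(arr) {
  for (let i = 0; i < arr.length - 1; i++) {
    for (let j = 0; j < arr.length - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        const temp = arr[j]; // temp is just a placeholder for the swap
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
  return arr;
}

// Version 2: keep making passes until a full pass produces no swaps
function bubbleSortWhile(arr) {
  let swapped = true;
  while (swapped) {
    swapped = false; // assume sorted until proven otherwise
    for (let i = 0; i < arr.length - 1; i++) {
      if (arr[i] > arr[i + 1]) {
        const temp = arr[i];
        arr[i] = arr[i + 1];
        arr[i + 1] = temp;
        swapped = true; // a swap happened, so we need another pass
      }
    }
  }
  return arr;
}
```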
If you are not familiar with bubble sort at all, or have forgotten how it works, I recommend searching for a visualization of it online. When I first learned about sorting, visualizing the steps really helped my comprehension.
The way bubble sort works is that, for each element we iterate through, we check whether the next element in the array is less than it. If it is, we swap the two elements. In both implementations, you can see that I am using a temp variable as a placeholder so that I can properly swap the elements in the array.
In the second implementation, I am using a while loop that runs as long as swapped is true. At the beginning of each pass we immediately set swapped to false. If we detect an element that requires a swap, we set swapped back to true, which triggers another pass; once a full pass completes without any swaps, the array is sorted and the loop exits.
Time Complexity: Best Case Ω(n²), Average Case Θ(n²), Worst Case O(n²)
Space Complexity: Worst Case O(1)
Selection sort is known for its simplicity. Like bubble sort, selection sort is inefficient when it comes to large lists or arrays. However, because its space complexity is constant, it still has some applications and has not completely fallen into obscurity.
selection sort
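Roughly, it might look like this (the body of minAndRemove is a reconstruction of that idea):

```javascript
// Helper: find the minimum element, splice it out of the array, and return it
function minAndRemove(arr) {
  let minIndex = 0;
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] < arr[minIndex]) minIndex = i;
  }
  return arr.splice(minIndex, 1)[0]; // splice shortens arr and returns the removed element
}

// Note: this version consumes (mutates) the input array
function selectionSort(arr) {
  const sorted = [];
  while (arr.length > 0) {
    sorted.push(minAndRemove(arr)); // repeatedly pull out the current minimum
  }
  return sorted;
}
```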
In this implementation of selection sort, you can see we have a helper function, minAndRemove, which is responsible for finding the minimum element in the array, removing it, and returning it.
We are slowly chopping down the given array. Each time we find the minimum element, we note it, use splice to shorten the array, then push the noted element into a new array. We continue to do this until there are no elements left. Our new array is guaranteed to be sorted.
Time Complexity: Best Case Ω(n log(n)), Average Case Θ(n log(n)), Worst Case O(n log(n))
Space Complexity: Worst Case O(n)
Pretty solid speed on merge sort. However, Uncle Ben (Spider-Man reference) once said, “…with great time complexity comes not so great space complexity.” I’m just kidding, Uncle Ben never said that.
merge sort
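A compact sketch of that idea (the helper name merge is illustrative):

```javascript
// Recursively split the array in half until single elements remain
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // base case: one element is already sorted
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  return merge(left, right);
}

// Helper: compare elements one by one and build up a new sorted array
function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i++]);
    } else {
      result.push(right[j++]);
    }
  }
  // one side may still have leftover elements; they are already sorted
  return result.concat(left.slice(i), right.slice(j));
}
```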
The idea behind merge sort is interesting: we divide the sorting job into smaller sub-problems. Whenever I hear the term sub-problem, I immediately think of recursion, and this “divide and conquer” style of algorithm is a great place to use it.
In our implementation of merge sort, the first thing we do is split our array into two halves, and we want each of those halves to be sorted; those halves are then split in half again, and so on. The main function uses recursion to keep cutting the array in half until it hits the base case of a single element. It then sends both sorted halves as arguments to our helper function, which compares their elements one by one and returns a new sorted array. As the recursive call stack unwinds, we merge larger and larger pieces together until…voila! Sorted!
Time Complexity: Best Case Ω(n log(n)), Average Case Θ(n log(n)), Worst Case O(n²)
Space Complexity: Worst Case O(n log(n))
Many standard libraries have a sort function built in; Ruby and JavaScript both come with one out of the box. Under the hood, these functions have traditionally been implemented with the quick sort algorithm (or some variation of it).
What’s awesome about quick sort is that it uses very little space, and it is quite fast. Sure, the worst case can end up taking quadratic time, but for the most part this is a performant algorithm.
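As a quick aside, JavaScript’s built-in sort compares elements as strings by default, so when sorting numbers you typically pass in a comparator:

```javascript
[1, 5, 10, 25].sort();                // [1, 10, 25, 5] (compared as strings!)
[1, 5, 10, 25].sort((a, b) => a - b); // [1, 5, 10, 25] (numeric comparator)
```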
Let’s take a look at an implementation.
quick sort
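A minimal sketch, arbitrarily choosing the last element as the pivot (the pivot choice here is illustrative, not the only option):

```javascript
function quickSort(arr) {
  if (arr.length <= 1) return arr; // base case: nothing left to partition
  const pivot = arr[arr.length - 1];
  const lesser = [];
  const greater = [];
  // place every non-pivot element on one side of the pivot
  for (let i = 0; i < arr.length - 1; i++) {
    if (arr[i] < pivot) {
      lesser.push(arr[i]);
    } else {
      greater.push(arr[i]);
    }
  }
  // recursively sort both partitions, with the pivot in between
  return [...quickSort(lesser), pivot, ...quickSort(greater)];
}
```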
Similar to our implementation of merge sort, our implementation of quick sort will employ “divide and conquer.” In addition, we will utilize a “pivot.” The pivot is the deciding factor in determining whether a number should be placed in the lesser array or the greater array. Once we have split the array, we recursively call quickSort on both the lesser and greater arrays. Since we are dividing this problem into sub-problems, we can use recursion to help us.
Time Complexity: Best Case Ω(n+k), Average Case Θ(n+k), Worst Case O(n+k)
Space Complexity: Worst Case O(k)
Counting sort is a very fast and somewhat space-efficient algorithm, but it really only applies to sorting integers. Unlike all of the algorithms we have discussed so far, it completely avoids making comparisons between elements.
The technique is based on keys falling within a specific range (the k in the complexities above). It works by counting the number of objects having each distinct key value (kind of like hashing), then doing some arithmetic to calculate the position of each object in the output sequence.
counting sort
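A sketch, assuming the input holds non-negative integers and we know the maximum value up front:

```javascript
function countingSort(arr, maxValue) {
  // Step 1: one counter per possible number, all starting at zero
  const numCounts = new Array(maxValue + 1).fill(0);

  // Step 2: one pass over the input to tally each number
  for (const num of arr) {
    numCounts[num]++;
  }

  // Step 3: the array that will hold the sorted output
  const sortedArray = [];

  // Step 4: one pass over the counts, emitting each number count times
  for (let num = 0; num < numCounts.length; num++) {
    for (let count = 0; count < numCounts[num]; count++) {
      sortedArray.push(num);
    }
  }
  return sortedArray;
}
```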
Let’s walk through the steps of this implementation. We need to know the max number before going in.
Step 1 — allocate an array numCounts where the indices represent numbers from our array and the values represent how many times that number appears. Each value starts at zero.
Step 2 — in one pass of the input array, update numCounts as we go.
Step 3 — allocate another array called sortedArray where we will store elements once they are sorted.
Step 4 — in one pass of numCounts, put each number, the correct number of times, into sortedArray.
There are many other interesting sorting algorithms that we have not discussed here such as Radix sort, Heap sort, Cube sort and so on.
My belief is that the more we know the fundamentals, be it data structures or sorting algorithms, the better we’ll become at understanding various problems and discovering their solutions.
Get cracking!