March 30th 2020

Simply put, Big O notation is used to describe the performance of an algorithm. Big O specifically describes the worst-case scenario; there are other notations, such as Big Theta (Θ) and Big Omega (Ω), used to describe the average-case and best-case scenarios of an algorithm.

But we as programmers are mostly concerned with the worst-case scenario, and so we will be focusing on Big O notation. It is my hope that this article will help you understand the basics of the craziness that is Big O.

Before we start with understanding Big O, we need to know why we actually need Big O.

It’s very important for interviews, and you need it to get that job! Beyond that, it gives us a vocabulary to talk about how our code is performing and to decide which approach to take to solve a problem.

Imagine we have multiple people working on a problem. Everyone has their own individual way of solving it, and the amazing thing is they all work. But programming is not just about solving problems, it’s about solving problems efficiently. So how do we determine which way of solving the problem is the “**best**”?

This is where Big O helps: it tells us how quickly our runtime grows. We do this by taking the size of the input, which by convention we call ‘n’, and looking at how the number of operations grows as ‘n’ increases.

And what does ‘best’ even mean? Wouldn’t it be nice to have a scale or a chart to describe what’s good and what’s bad? I’ve got just the thing for you.

Image credits:https://i0.wp.com/www.jessicayung.com/wp-content/uploads/2016/08/screenshot-5.png?fit=846%2C591

And if this is the first time you are seeing the crazy syntax of O(n^2) or O(n log n) and you have no idea what is going on, don’t worry, I will get to it and you will understand everything just fine. For now, the sections in red are bad and the sections in green are great!

P.S. I will be using JavaScript throughout the article, but you can use any language of your choice.

The first Big O notation that we have is O(1). It simply means that the operation will always execute in a constant amount of time, regardless of how big the input size is.
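The embedded snippet hasn’t survived in this copy of the article, but based on the description that follows, it would have looked something like this (the function name here is my own):

```javascript
// O(1): grabbing a single element of an array takes the same
// amount of time no matter how large the array is.
function logFirstElement(arr) {
  console.log(arr[0]); // a single operation, regardless of arr.length
  return arr[0];
}

logFirstElement([7, 2, 9]);                 // logs 7
logFirstElement(new Array(100000).fill(1)); // still just one operation
```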

In the above example, the function is simply logging an element of the given array to the console. How many operations does this function take if the size of the array is increased to 1,000, or let’s say 100,000?

Since we are just grabbing a single element inside the array, no matter how big the array gets, the time it takes always remains constant, or O(1).

Let me introduce you to the most common Big O notation, O(n). Let us see it with the help of an example.
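The original gist isn’t visible here; going by the function name used below, ‘*sumOfArray*’ presumably looked something like this sketch:

```javascript
// O(n): we touch every element exactly once, so the number of
// operations grows linearly with the array's length.
function sumOfArray(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i]; // one operation per element
  }
  return total;
}

console.log(sumOfArray([1, 2, 3, 4])); // 10
```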

What we want to find out here is how the runtime of the function ‘*sumOfArray*’ grows as the input size increases. Maybe the array has 1 item, maybe it has 100, 1,000, or even a million items. How does the efficiency of this function change?

In the code above, let’s say we have 10 elements in the array, then for an array of 10 elements we are looping 10 times to get our desired solution. If the array held 100 items, we would loop through the array 100 times.

As the number of inputs increases, the number of operations increases linearly. We say that the above code has a time complexity of O(n), or that it takes linear time to solve this problem. Here **n** simply means that the Big O depends on the number of inputs.

Remember that with O(n), the number of operations performed scales in *direct* proportion to the input.

In the case of O(n^2), the number of operations it performs scales in proportion to the *square* of the input. This is common with algorithms that involve nested loops or iterations.

Let’s say we want to find the sum of every pair of elements in a given array.
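The code for this example is missing from this copy; a minimal sketch of what it likely showed (the function name is my assumption) could be:

```javascript
// O(n^2): for each element of the outer loop we run the
// full inner loop, giving n * n operations in total.
function sumOfPairs(arr) {
  const pairs = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      pairs.push(arr[i] + arr[j]); // runs n * n times
    }
  }
  return pairs;
}

console.log(sumOfPairs([1, 2]).length); // 4 sums for 2 elements
```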

In the above example, for every iteration of ‘*i*’ (the outer loop), we perform n operations in the inner loop, and the outer loop itself runs n times.

Or, simply put, we perform the inner loop n times for each of the n outer iterations. Since both loops run over the same input, that is *n × n*, or n^2, operations in total.

We are entering dangerous waters here. Both in interviews and in real implementations, we want to avoid situations where the time complexity of our algorithm reaches O(2^n), as it denotes an algorithm whose number of operations **doubles** with each addition to the input data set.

The growth of this algorithm is exponential: with n = 2 inputs it performs 4 operations, with n = 3 it performs 8 operations, and so on.
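Again, the embedded example is missing here; a function that behaves exactly as described (each call branches into two more, so the work doubles with every increment of n — the name is mine) might look like:

```javascript
// O(2^n): every call spawns two more recursive calls, so the
// total number of calls doubles each time n grows by one.
function countCalls(n) {
  if (n === 0) return 1;                        // base case: one call
  return countCalls(n - 1) + countCalls(n - 1); // two branches per level
}

console.log(countCalls(10)); // 1024, i.e. 2^10
```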

In the above example, if the input parameter n is given the value of 10, the function performs operations 2^10 times, which is 1,024 times.

Now, we haven’t touched logarithms yet, and that’s because they are slightly tricky to understand. We usually encounter log when we are searching through data sets.

The most common example to demonstrate O(log n) is the ever classic Binary Search.

Now, binary search works on a sorted set of data, and it is used to find the location of an element by dividing the input in half with each iteration.

Instead of going straight into code, let’s take a simple example to understand the concept. Let’s say we have a phonebook (yes, I know they are obsolete, but let’s say we went to a museum for this example).

So let’s say we have a phonebook, and we need to find the contact details of Agent Smith.

Image Credits:https://3.bp.blogspot.com/_h-QV7eIsayI/TUjTyGeMN9I/AAAAAAAABiY/bmTXD97rFQY/s1600/lunapic_129514895144669_4.jpg

Matrix references aside, Smith starts with ‘S’, so it’s somewhere in the second half of the phonebook. And why not, let’s say that our phonebook is 1,000 pages long.

So we open up to the middle of the phonebook, see that Smith lies somewhere on the right side of the middle, and tear away the left half.

We just halved our input! From 1,000 to 500 pages.

We again open the book from the middle, figure out that Smith lies somewhere on the left side of the middle, and tear away the right side, halving our input again from 500 to 250 pages.

In just 2 iterations, we have removed three-quarters of the input and are left with n/4 of it; in 3 iterations, we are left with just n/8.

We keep going in this fashion until we find our element. So, in roughly 10-11 iterations, we have searched through a data set of 1,000 elements and found our element. That is what we call logarithmic time complexity.

Let’s say we have an updated phonebook that now has 2,000 pages. The total number of operations increases by just 1, which is why the time complexity of binary search is O(log n).
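The phonebook trick maps directly to code. The article stops short of an implementation, but a sketch of binary search over a sorted array (the function name is my own) could look like:

```javascript
// O(log n): binary search halves the remaining range on every
// iteration, just like tearing the phonebook in half.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;  // found it
    if (sorted[mid] < target) low = mid + 1; // tear away the left half
    else high = mid - 1;                     // tear away the right half
  }
  return -1; // not in the book
}

console.log(binarySearch([2, 5, 8, 12, 16, 23, 38], 23)); // 5
```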

Well, that was certainly a lot to go through, so do take your time to digest everything, and go get yourself some well-earned treats because you deserve it!

Now, this article covers only the basics of Big O notation, and there is certainly a lot more to understand, but it was a good start: you now know just enough to go deeper if you want to 😄

(Originally published here)