
Big O for Beginners

by Ruairidh Wynne-McHardy, November 26th, 2018

Photo by apoorv mittal on Unsplash

Big O Notation allows us to measure the time and space complexity of our code.

Think of a for loop. You can run it over an array of 5 items and it will finish almost instantly, but run it over an array of 10,000 items and the execution time will be far longer. See an example:
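
Something like this minimal sketch (the `items` array here is hypothetical) shows the idea:

```typescript
// Logs every element: the amount of work grows with the length of `items`.
function logAll(items: string[]): void {
  for (let i = 0; i < items.length; i++) {
    console.log(items[i]);
  }
}

logAll(["a", "b", "c", "d", "e"]); // 5 iterations finish almost instantly
// With an array of 10,000 items, the same loop performs 10,000 iterations.
```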

Big O notation allows us to work out how the running time of an algorithm grows as its input grows. This lets us understand how a piece of code will scale. It measures algorithmic efficiency.

O(1)

This is known as constant time: the time is consistent for each execution. Imagine you are a gas station attendant. It takes you exactly 25 seconds to fill up a car, and then that particular task is complete. It doesn't matter whether you fill one car or a hundred cars that day; each fill-up takes the same, predictable amount of time.

So, as in our example above, we don't distinguish between O(1), O(2), and so on. We simplify them all to O(1), which is to say that our operation is a flat line in terms of scalability. It will take the same amount of time regardless of the input. This is predictable and very scalable. See an example:
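
A minimal sketch of a constant-time operation, assuming a plain array lookup by index:

```typescript
// Looking up an element by index takes the same time
// whether the array holds five items or ten thousand: O(1).
function firstItem(items: number[]): number | undefined {
  return items[0];
}
```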

O(n)

Our loop example is O(n) as it runs once for every value in our input. The operations increase in a linear fashion according to the size of the input, where n represents the number of items. The algorithm runs in linear time.

O(n²)

Say we wanted to log a series of pairs from an array of items. We can do it like so:
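
A minimal sketch (the three-item array is just an illustration):

```typescript
// Nested loops: for each of the n items we loop over all n items again.
function logAllPairs(items: string[]): void {
  for (let i = 0; i < items.length; i++) {
    for (let j = 0; j < items.length; j++) {
      console.log(items[i], items[j]);
    }
  }
}

logAllPairs(["a", "b", "c"]); // 3 items produce 3 * 3 = 9 pairs
```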

A good rule of thumb is that if you see nested loops, you use multiplication to work out the notation. So in our example above, we are doing O(n * n), which becomes O(n²) (O of n squared). This is known as quadratic time, which means that as the number of elements increases, the number of operations grows quadratically.

You actually want to avoid code which runs in O(n²) as the number of operations increases significantly when you introduce more elements.

Calculating Big O

To calculate Big O, you can go through each line of code, establish whether it's O(1), O(n), etc., and then add up your findings at the end. For example, it may be O(4 + 5n), where the 4 represents four instances of O(1) and 5n represents five instances of O(n).
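
For instance, a made-up function annotated in that style might look like the sketch below, where the operations are chosen purely so that the counts match:

```typescript
// Hypothetical function: counting line by line gives O(4 + 5n).
function summarise(items: number[]): number {
  let total = 0;                             // O(1)
  let count = 0;                             // O(1)
  let max = 0;                               // O(1)

  for (let i = 0; i < items.length; i++) {
    const value = items[i];                  // O(n)
    total += value;                          // O(n)
    count++;                                 // O(n)
    if (value > max) max = value;            // O(n)
    console.log(value);                      // O(n)
  }

  return total + count + max;                // O(1)
}
```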

There’s an easier way to calculate this however, and that’s done with the following four rules:

  1. Assume the worst
  2. Remove constants
  3. Use different terms for inputs
  4. Drop any non-dominant terms

Taking each of these in turn:

1. Assume the worst

When calculating Big O, you always think about the worst case. For example, if you were to loop over an array looking for an item, it could be the first item or it could be the last. As we can't be certain, we must assume O(n) in this instance.
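
A minimal sketch of that kind of search (the `contains` function here is illustrative):

```typescript
// We might get lucky and match on the first element, but Big O assumes
// the worst case: the target is last or missing entirely, so this is O(n).
function contains(items: number[], target: number): boolean {
  for (let i = 0; i < items.length; i++) {
    if (items[i] === target) {
      return true; // best case: found immediately
    }
  }
  return false;
}
```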

2. Remove constants

This is a little bit trickier but bear with me. Consider this code:
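
Something along these lines (the function name and the exact operations here are illustrative): a little constant-time work, a loop over half the input, and a fixed block of one hundred logs:

```typescript
function funChallenge(items: string[]): void {
  console.log(items[0]);                             // O(1)

  const middleIndex = Math.floor(items.length / 2);  // O(1)
  let index = 0;                                     // O(1)

  while (index < middleIndex) {                      // the loop runs n/2 times
    console.log(items[index]);                       // O(n/2)
    index++;
  }

  for (let i = 0; i < 100; i++) {                    // always exactly 100 iterations
    console.log("hi");                               // 100 constant-time operations
  }
}
```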

We have a function which does several things. Some operations are O(1), such as logging the first item and setting up the index, whereas others are O(n), such as the work inside the loop over half the input.

We could express the Big O notation of this as O(1 + n/2 + 100), but it's really too specific. We can remove our O(1) operations because, as a rule of thumb, they are likely to be insignificant compared to our O(n) operations. If we are providing a million items in our input array, then an extra 100 constant-time operations aren't our main concern.

So then we have O(n / 2). As n gets larger and larger, dividing it by two matters less and less to the overall picture, so ultimately our Big O notation for this function becomes O(n).

3. Use different terms for inputs

If we were to loop twice over the same array, our Big O is technically O(2n), but let's drop the constants so we are left with O(n). But now we need to think about different terms for inputs. What does that mean?
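
For example, a function like this sketch (the parameter names `boxes` and `letters` are illustrative) takes two separate arrays and loops over each in turn:

```typescript
// Two independent inputs: the first loop depends on the length of `boxes`,
// the second on the length of `letters`, so they must be counted separately.
function logBoxesAndLetters(boxes: string[], letters: string[]): void {
  for (let i = 0; i < boxes.length; i++) {
    console.log(boxes[i]);     // runs a times
  }
  for (let j = 0; j < letters.length; j++) {
    console.log(letters[j]);   // runs b times
  }
}
```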

If we look at the above code, we can see we now have two inputs. One could be one item long; the other could contain a million items. So our function is no longer O(n), it is O(a + b). The n we use in our notation is arbitrary, so we need to reflect both inputs in our notation.

In this instance our loops aren’t nested. If they were then our code would be O(n²) which would be a potential target for refactoring. Typically if there are nested loops then you are multiplying rather than adding the variables.

4. Drop non-dominant terms

Take a look at this code:
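
A sketch of that shape (the function and array are illustrative): one simple loop followed by a pair of nested loops:

```typescript
function logItemsThenPairs(items: string[]): void {
  // Single loop: O(n)
  for (let i = 0; i < items.length; i++) {
    console.log(items[i]);
  }

  // Nested loops: O(n²)
  for (let i = 0; i < items.length; i++) {
    for (let j = 0; j < items.length; j++) {
      console.log(items[i], items[j]);
    }
  }
}
```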

So we have a single loop at the top which is O(n) and then a nested loop which is O(n²). So our total is O(n + n²). But, just as we simplified in rule #2, we don't want to keep every term. So which of the two is more important to us?

In this case we drop O(n) and retain O(n²) as this is more important. It is the dominant term as it has a much heavier impact on performance.

Conclusion

So what’s the point?

If we know we will only deal with small inputs then Big O doesn't really matter too much. But when writing your code, you should consider its scalability.

By paying attention to Big O you keep an eye on the future, and you are more likely to write code which can be scaled effectively. You can start looking at the cost of your code and make informed choices about how you write it.

Resources

http://bigocheatsheet.com/


Big-O notation (Khan Academy): www.khanacademy.org