
Understanding Time Complexity Of Algorithms - Part 1

by Pratik Singhal, March 18th, 2024

Too Long; Didn't Read

Time complexity analysis is crucial for coding interviews and algorithm design. It helps in understanding how algorithms perform as input size increases. Learn about its basics, importance, measurement methods like Big O Notation, and practical examples to master algorithm analysis.


Analysing the time and space complexity of algorithms is one of the fundamental skills you need if you are preparing for coding interviews.


When given a problem to solve, if you are good at time complexity analysis, you can quickly weed out bad ideas and avoid wasting time thinking through suboptimal approaches. This way, you can quickly try out multiple approaches in your head, reason about their time complexity, and present only the optimal algorithm once you arrive at it.


This saves a lot of time during the interview, as you don’t have to talk through the suboptimal approaches.


This is going to be the first article in a three-part series on analyzing time and space complexities of any algorithm. We will start from the basics and go over the different cases of time complexity in this article.


In the 2nd part, we will go over the time complexity and Big O Notation for more complex use cases.


In the 3rd part, we will go over the space complexity use cases.


What exactly is time complexity?

The time complexity of an algorithm describes how its runtime grows with respect to its input. Rather than focusing on the actual runtime of an algorithm (which can vary depending on the system on which you run it), we focus on how fast the algorithm’s runtime grows w.r.t. its input.


For example, let’s say a hypothetical algorithm takes X seconds to run on an array of size N, and we expect its runtime to double every time we double the size of the input array. Its growth rate can then be expressed as O(N) (pronounced Big O of N).


More generally, if we expect the algorithm’s runtime to increase in proportion to N, we can say that its complexity is O(N).
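

For instance, here is a minimal sketch of a method whose runtime grows linearly with its input (this is not taken from the article; the method name sumArray and the use of a plain int array are just illustrative assumptions):


// Illustrative sketch: sum the elements of an array.
// The loop body runs once per element, so doubling the array's length
// roughly doubles the work -- the runtime grows as O(N).
static long sumArray(int[] arr) {
   long sum = 0;
   for (int i = 0; i < arr.length; i++) {
      sum += arr[i];
   }
   return sum;
}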


Why do we use time complexity to analyze algorithms?

We use time complexity to analyze any algorithm for two main reasons:


  1. Growth Rate matters more than absolute time: A suboptimal algorithm can perform better on a smaller input but take too much time on a bigger one. Say we have two algorithms, A and B, which search for a word in N documents. If A takes only 0.1 seconds to search among 100 open documents while B takes 1 second for the same 100 documents, does that mean A is better than B?


    Not necessarily: the growth rate in A’s runtime may be much higher than B’s. For example, A’s runtime may double each time the number of open documents doubles, while B’s runtime may increase by only 5%. In that case, even though algorithm A is much faster in the beginning, it will eventually start taking more time than algorithm B.
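
    To make this concrete with the hypothetical numbers above: at 100 documents A takes 0.1 s and B takes 1 s; at 200 documents A takes 0.2 s while B takes about 1.05 s; at 400 it is 0.4 s vs. roughly 1.10 s; at 800 it is 0.8 s vs. roughly 1.16 s; and at 1,600 documents A already takes about 1.6 s while B is still at roughly 1.22 s. From that point on, A keeps doubling while B barely grows.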





  2. Growth Rate is easier to measure and does not vary: The growth rate of an algorithm can be determined purely through theoretical calculations and doesn’t require running the algorithm on a real system (which takes significantly more time).

Time complexity measures the growth rate in the runtime of the algorithm


How exactly is time complexity measured?

The time complexity of an algorithm is measured by approximating the number of operations the algorithm performs.


This is because, on any machine, the time taken by an algorithm is proportional to the number of operations it performs. Therefore, the growth in runtime is also proportional to the growth in the number of operations the algorithm performs.


When you analyze an algorithm, you don’t need the exact number of operations it performs. You can simply approximate the number of operations and calculate the time complexity from that. For example, for a hypothetical algorithm that operates on an array of size N, it doesn’t matter whether the exact number of operations is 4N + 12 or just 3N; the time complexity in both cases is simply O(N).


  • Measure the time complexity of an algorithm by approximating the number of operations that the algorithm performs.
  • The time complexity of an algorithm is expressed in terms of Big O Notation.


Time Complexity Analysis: Example 1

Let’s understand how to measure the time complexity of an algorithm by taking a simple example to start with. As we move forward in the article, we will go over more complex examples.


Let’s say we have a simple for loop, as given below:


int[] arr = new int[n];

// Print each of the n elements of the array.
for (int i = 0; i < n; i++) {
   System.out.println(arr[i]);
}


How many operations are we doing in the above for loop?


In the above for loop, there are two types of operations :


  1. Operations performed for every iteration of the loop: These are the operations that are performed on every iteration of the for loop. For example, printing is done for each array element, so it happens once per iteration; checking whether i < n and doing i++ also happen on every iteration. Each of the print, comparison, and increment operations takes a fixed amount of time, so together they take O(1) (constant) time per iteration. Since the loop runs N times, the overall time due to these operations is O(N).

  2. One-time operations: These are the operations that happen a fixed number of times no matter how many times the loop runs. For example, the initialization int i = 0 executes only once, no matter how many iterations the loop performs.


We sum the contributions of these two types of operations, O(N) + O(1), which comes out to O(N). This is because, compared to the operations that run N times, the contribution of the fixed one-time operations is so small that it can be ignored.
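

To put hypothetical numbers on this: if each iteration costs about 3 basic operations (the comparison i < n, the print, and the increment i++) and the one-time initialization int i = 0 costs 1, the loop performs roughly 3N + 1 operations. As N grows, both the fixed 1 and the constant factor 3 stop mattering, and the complexity is simply O(N).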


We can apply the same analysis to other types of loops, as shown below.


Time Complexity Analysis: Example 2

Let’s take another loop as an example now:


// The extra condition i % 2 == 0 is checked on every iteration.
for (int i = 0; i < N && i % 2 == 0; i++) {
   System.out.println(i);
}


In this example, if you look closely, the loop body runs only once irrespective of the value of N (when i = 0). This is because as soon as i becomes 1, the condition i % 2 == 0 fails and the loop exits.


Therefore, the number of operations this loop performs is the same irrespective of the value of N, so it has constant time complexity: O(1).


When analyzing the time complexity of any loop (for, while, and do-while) two things matter:

  • How many times the loop runs
  • How much time each iteration of the loop takes
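

Applying this checklist to the two examples above: in Example 1 the loop runs N times and each iteration does O(1) work, so the total is O(N); in Example 2 the loop body runs at most once, so the total is O(1).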


What exactly is Big-O Notation?

The formal mathematical definition of Big O Notation, as provided by Wikipedia, is pretty complex and not very useful for us here. Let’s understand Big O Notation in simpler terms.


Big O Notation is just a mathematical way of representing the growth rate in time or space of any algorithm.


For example, if we say that the time complexity of an algorithm is O(N^2), it means that the algorithm’s runtime grows in proportion to the square of the input size. If the input size triples, we can expect the algorithm’s runtime to grow by roughly a factor of 9.
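

As a small illustrative sketch (not from the original article; the array arr and its size n are assumed to be defined elsewhere), a pair of nested loops over the same array does roughly N * N units of work:


// Hypothetical sketch: count how many pairs of positions hold equal values.
// The inner loop runs n times for each of the n outer iterations,
// so the total is about n * n operations -- O(N^2).
int count = 0;
for (int i = 0; i < n; i++) {
   for (int j = 0; j < n; j++) {
      if (i != j && arr[i] == arr[j]) {
         count++;
      }
   }
}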


Big O Notation can be used for expressing both time and space complexity.


If you have made it this far, Congratulations! In the 2nd part of this series, we will go over the math involving Big-O Notation and see some more complex examples of time complexity.


[Affiliate link] If you are interested in a deeper dive into other patterns and Leetcode problems, you should definitely check out my course, __Mastering Leetcode in Java: Top 130 Most Asked Problems__. It’s available on Udemy and is one of the highest-rated courses on problem-solving.

You would definitely enjoy it!