Algorithms show up in pretty much every program larger than a college project, and they often power functionality the program cannot work without. That's why it is essential to make sure they run efficiently. Even in situations where execution speed doesn't matter much, it feels good to know that you are not shipping a highly inefficient implementation.
Before we begin this article, it is essential to note that a program's memory usage is just as crucial as its runtime. I think memory bites new programmers more often than speed does, because the consequences of exhausting system memory, or of using so much that there isn't a reasonable amount left for other running processes, can be fatal. By contrast, the only setback from a slow program is time wasted waiting.
Runtime and memory usage are measured using a metric called complexity, written in Big-O notation: the letter "O" followed by an expression in parentheses. There are several related notations, but Big-O is by far the most commonly used. Inside the parentheses is a variable such as n, representing the size of the input to the algorithm. It is also possible to use more than one variable to account for other factors.
Specifically, O(n) means that as the input size n grows toward infinity, the algorithm's runtime or memory usage grows no faster than n, up to a constant factor. The wording here matters. Big-O is an upper bound: the measured time or space is equal to or smaller than the given expression for all sufficiently large inputs. Little-o o(n) means the time or space grows strictly slower than the expression, while Big-Theta Θ(n) means it grows at exactly the same rate. You may also see people use Big-O exclusively but give three different expressions, one each for the best-case, average-case, and worst-case complexities. I think that's an easier way to understand this concept.
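For readers who want the precise statements, here are the standard formal definitions, where f is the measured running time or space and g is the complexity expression:

```latex
% Big-O: f grows no faster than g, up to a constant factor.
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \le c \, g(n) \text{ for all } n \ge n_0

% Little-o: f grows strictly slower than g.
f(n) = o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0

% Big-Theta: f grows at exactly the same rate as g.
f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 : c_1 \, g(n) \le f(n) \le c_2 \, g(n) \text{ for all } n \ge n_0
```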
For all of the above notations, the statement remains valid if the complexity expression is multiplied by an unknown constant. In other words, O-notation doesn't care whether the complexity is n or 30n, because both grow linearly.
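As a quick illustration (a minimal sketch; the function names are mine, not from the article), both of the following run in O(n), even though the second does roughly thirty times as much work:

```python
def sum_once(items):
    # One pass over the input: roughly n operations.
    total = 0
    for x in items:
        total += x
    return total

def sum_thirty_times(items):
    # Thirty passes over the input: roughly 30n operations,
    # but still O(n) because the constant factor is ignored.
    total = 0
    for _ in range(30):
        for x in items:
            total += x
    return total
```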
This brings us to the complexity expressions themselves.
Here are the complexities you will see most often with Big-O notation, with an example of each (short code sketches follow the list):

- O(1), constant: accessing an array element by index.
- O(log n), logarithmic: binary search in a sorted array.
- O(n), linear: scanning a list for a value.
- O(n log n), linearithmic: efficient comparison sorts such as merge sort.
- O(n²), quadratic: comparing every pair of elements with nested loops.
- O(2ⁿ), exponential: naively computing Fibonacci numbers by recursion.
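To ground these, here are minimal Python sketches, one per class. These are illustrative stand-ins I wrote for this list, not canonical implementations, and binary_search assumes its input is already sorted:

```python
from typing import List

def get_first(items: List[int]) -> int:
    # O(1): one array access, regardless of input size.
    return items[0]

def binary_search(items: List[int], target: int) -> int:
    # O(log n): each step halves the remaining search range.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def linear_search(items: List[int], target: int) -> int:
    # O(n): may inspect every element once.
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def sort_items(items: List[int]) -> List[int]:
    # O(n log n): Python's built-in sort is a comparison sort.
    return sorted(items)

def has_duplicate_pair(items: List[int]) -> bool:
    # O(n^2): the nested loops compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def fib(n: int) -> int:
    # Roughly O(2^n): naive recursion recomputes the same subproblems.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```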
From the above list, you can see that exponential complexity is the worst runtime or space usage shown, while constant is the best. Everyone wants algorithms with constant complexity, but in most cases that is not possible, because no known algorithm for the problem is that fast.
Generally speaking, each problem has a minimum complexity, which corresponds to the most efficient known algorithm for solving it. Comparison-based sorting, for instance, provably cannot beat O(n log n) in the worst case.
So, that’s all about algorithmic complexity. Thanks for reading.
Also published at https://medium.com/@zenul_abidin/algorithmic-complexity-basics-8c6a2aef54c2