# What does the time complexity O(log n) actually mean?

Knowing the complexity of an algorithm is one thing; understanding *why* it has that complexity is another.

Whether you’re a CS graduate or someone who simply wants to tackle optimization problems effectively, this is something you must understand if you want to apply your knowledge to real problems.

Complexities like O(1) and O(n) are simple and straightforward. O(1) means we can reach an element directly in a single operation (as in a dictionary or hash table), and O(n) means we may have to check all n elements to find it. But what could O(log n) possibly mean?
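To make the contrast concrete, here is a minimal Python sketch (the values are made up for illustration) showing the two access patterns side by side:

```python
data_list = [5, 3, 9, 1, 7]            # unsorted list
data_dict = {5: "a", 3: "b", 9: "c"}   # hash table

# O(n): a membership test on a list walks it element by element,
# so in the worst case all n elements are checked.
found = 7 in data_list

# O(1) on average: a dict lookup hashes the key and jumps
# straight to the matching slot, regardless of size.
value = data_dict.get(9)

print(found)   # → True
print(value)   # → c
```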

One place where you might have first heard about O(log n) time complexity is the binary search algorithm. There must be some behavior of that algorithm that earns it a complexity of log n. Let us see how it works.

Binary search has a best-case efficiency of O(1) and a worst-case (and average-case) efficiency of O(log n), so we will look at an example of the worst case. Consider a sorted array of 16 elements.

For the worst case, let us say we want to search for the number 13.

You can see that after every comparison with the middle element, our search range is cut in half.
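The halving behavior is easy to observe in code. Below is a standard binary search sketch that also counts comparisons; since the article’s figure isn’t reproduced here, the sorted array of 16 elements is assumed to be simply 1 through 16:

```python
def binary_search(arr, target):
    """Return (index, comparisons). Each comparison halves the search range."""
    lo, hi = 0, len(arr) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2       # middle of the current range
        comparisons += 1
        if arr[mid] == target:
            return mid, comparisons
        elif arr[mid] < target:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1, comparisons

# Assumed sorted array of 16 elements: [1, 2, ..., 16]
arr = list(range(1, 17))
print(binary_search(arr, 13))   # → (12, 4): found at index 12 in 4 comparisons
```

Four comparisons for 16 elements is no coincidence, as the next step shows.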

So, to narrow a set of 16 elements down to a single element, we had to halve the array 4 times.

We can say that

2⁴ = 16

Similarly, for n elements, if x halvings are needed,

2ˣ = n

Now, let us look at the definition of a logarithm:

> A quantity representing the power to which a fixed number (the base) must be raised to produce a given number.

Which turns our equation into

x = log₂ n

So the number of steps binary search takes grows as log n, and that is exactly what O(log n) means: doubling the input size adds only one more step.