Why are Logarithms used in Time Complexity?

Sara.Sharma
3 min readJan 3, 2022

Wait, why logarithms at all? Most of you have probably come across logarithms only to realise that you don’t really understand how they work, or why they are used in the first place.

Prerequisites:

  1. A basic understanding of exponents
  2. A basic understanding of Big O notation

As daunting as they might seem, logarithms can be very helpful.

  • A logarithmic scale is used to measure the apparent brightness of a star when observed from Earth.
  • Another scale called the Richter Scale is a base-ten logarithmic scale that is used to measure an earthquake’s magnitude.
  • For all the football fans out there: say you want to analyse the correlation between revenue generated (which we know has risen tremendously over the last decade or so) and league position per season for a certain EPL club over the last 50 seasons. It is advisable to plot both measures on a logarithmic scale before drawing a final conclusion from your analysis.

Logarithms are widely used when we want to represent data spanning a huge range of values. Remember: large numbers can be easily plotted on a number line using logarithms!

The logarithm of a positive real number x with respect to base b is the exponent by which b must be raised to yield x. In other words, the logarithm of x to base b is the unique real number y such that bʸ = x. (Wikipedia)

Okay, let’s take a pause to understand what the definition quoted above really means.

Mathematically, logₓ N = y iff (if and only if) N = xʸ
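The definition above can be checked numerically. This is just an illustrative sketch; the values 81 and 3 are arbitrary examples, not from the article:

```python
import math

# log_x(N) = y  <=>  N = x**y
N, x = 81, 3
y = math.log(N, x)        # logarithm of 81 to base 3
print(round(y))           # 4, because 3**4 == 81
print(x ** round(y) == N)  # True, confirming the definition
```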

When talking about time complexity, the base of the logarithm is usually assumed to be 2, but not always! This is because most divide-and-conquer algorithms split an input of length n into two halves at each step.

Let’s try to understand why it is so important to use logarithms w.r.t. time complexity.

Big O Time Complexity Graph. Credit: Huang, D. (2018, January 1). Javascript — Algorithm

The idea behind time complexity (and Big O notation) is to estimate how long an algorithm takes to perform an operation/task in the worst-case scenario.

We will use the classic example of the Binary Search algorithm to make sense of this concept.

Consider a sorted array of length 16. Every time you search for a particular element in this array, you divide the search interval into two equal halves. If the element can only be in the left half, you disregard the right half completely, and vice versa. You keep repeating the same process until either you’ve found your element (voilà!) or the search interval is empty, in which case the element does not exist in the array.
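The procedure described above can be sketched as a minimal Python implementation (the function name and the 16-element test array are my own illustrative choices):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.

    Each iteration halves the search interval, so the loop runs
    at most about log2(len(arr)) + 1 times -- O(log N)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

# A sorted array of length 16, matching the worked example
arr = list(range(16))
print(binary_search(arr, 11))  # 11
print(binary_search(arr, 99))  # -1 (not present)
```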

Mathematically,

log₂ 16 = y

=> 2⁴ = 2ʸ

=> y = 4

i.e., you can find your element from the 16 elements in the array in just 4 steps/operations.

Looking at the above example, with the base of the logarithm being 2, we are essentially saying that if the size of the array/input (N) is doubled, i.e., N = 32, the number of steps/operations (y) increases by only 1.

log₂ 32 = y

=> 2⁵ = 2ʸ

=> y = 5

And when, say, the size of the input (N) = 1024, y becomes 10, i.e., 2¹⁰ = 1024.

Similarly, when N = 1048576, y increases by only another 10, i.e., 2²⁰ = 1048576 (talk about big numbers!)
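This doubling behaviour is easy to verify by repeatedly halving N, which mirrors the worst case of binary search. The helper name below is my own:

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1 --
    the worst-case number of binary-search interval splits."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

# The inputs from the article: 16 -> 4, 32 -> 5, 1024 -> 10, 1048576 -> 20
for n in (16, 32, 1024, 1048576):
    print(n, halving_steps(n))
```

Note that doubling N from 16 to 32 adds just one step, and going all the way from 1024 to 1048576 (a 1024× increase) adds only ten.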

Conclusion: With a time complexity of O(log N), every time you double the size of your input/data, you add only one more operation, which is what makes these algorithms scale so well.

Thanks for reading!
