Big O notation and time complexity with NumPy



Reading time : 8 min

Last update : February 11th, 2022

Big O notation describes how an algorithm's execution time grows with the size of its input. It can express complexity in time but also in space. One may be tempted to code a function any way one likes, but the processing can become extremely slow once it handles a lot of data. The goal is therefore to have an approximation of the worst-case performance. Here we will look at time complexity.

We will illustrate the different complexities using loops.

On the chart, the faster a function's curve climbs, the slower the corresponding execution (shown in red).

To find the appropriate notation, count the number of operations. Constants are dropped because they do not influence the growth rate; the expression must be as simplified as possible.
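As a hypothetical illustration of counting operations and dropping constants (the helper name and the exact constants here are invented for the sketch, not taken from the article):

```python
# Hypothetical helper: count the elementary operations a loop performs.
def count_operations(n):
    ops = 1                # one operation before the loop
    for _ in range(n):
        ops += 2           # two operations per iteration
    return ops

# Doubling the input roughly doubles the count; the constant factor 2
# and the "+ 1" do not change the growth class, so we simplify
# 2n + 1 down to O(n).
print(count_operations(10))  # 21
print(count_operations(20))  # 41
```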

There are many functions, but we will see the main ones:

  • constant complexity
  • logarithmic complexity
  • linear complexity
  • linearithmic complexity
  • quadratic complexity (polynomial)
  • exponential complexity
  • factorial complexity

They will be presented from the fastest to the slowest.


$$O(1)$$ : constant complexity

With constant complexity, the execution time does not grow with the input at all: the growth rate is zero, so on the chart the function is a horizontal straight line. In code terms, this means the function performs the same fixed number of operations regardless of the size of the input. The execution is therefore very fast because it is a single step.

The input size does not matter: we could pass in the number 3000, and there would still be only one execution, the one that returns the number 3000.

Here we create a function that takes a number as a parameter (ten in the example) and returns it immediately.

In the comic strip, the character needs a watch right away (a single execution), so the salesman sells him one already in his store. It is a fast, inexpensive service.


import matplotlib.pyplot as plt

def constant_o(num):
    return num

print(constant_o(10)) # 10

# ============== #

plt.axhline(0, label='$O(1)$')
plt.axis([-1, 6, -1, 6])
plt.legend(loc='upper left')
plt.title('Constant', fontsize=12)
plt.savefig('constant.png', bbox_inches='tight')

Time complexity : constant

Constant function plot


$$O(log(n))$$ : logarithmic complexity

Logarithmic complexity corresponds to execution time that grows with the logarithm of the input size. The logarithm climbs quickly at first, while the numbers are small, then slows down and flattens out for large numbers. It is the inverse of the exponential function.

The benefit is that you can use large inputs while keeping the number of operations low, and therefore execute very fast.

Here we implement a for loop whose increment multiplies the counter by two, so the execution skips most values, as we can see on the graph. For an input of 256, the loop runs only 9 times (log2(256) = 8 doublings, plus the initial iteration).


function logarithmic(num) {
  for (let i = 1; i <= num; i *= 2) {
    console.log(i);
  }
}

logarithmic(256); // 1 2 4 8 16 32 64 128 256

Time complexity : logarithmic

Logarithmic function
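A classic real-world O(log(n)) algorithm, not shown in the article but worth sketching, is binary search: each comparison halves the remaining search space, so a sorted list of n items needs at most about log2(n) comparisons.

```python
# Binary search on a sorted list: halve the search interval each step.
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid, steps           # found: index and comparison count
        if sorted_items[mid] < target:
            low = mid + 1               # discard the lower half
        else:
            high = mid - 1              # discard the upper half
    return -1, steps                    # not found

print(binary_search(list(range(256)), 200))  # (200, 8): found in 8 comparisons
```

Compare with the loop above: 256 elements, and still only a handful of steps.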


$$O(n)$$ : linear complexity

Linear complexity corresponds to execution time that grows in proportion to the size of the input. The curve is a straight line with constant slope; on the chart it is the identity function. In code, it is a simple loop.

Here, for example, we want to execute the loop ten times.

In the comic strip, the repair time is proportional to the type of repair: a simple repair is done in a few days, say two or three, while a more complex repair takes several weeks.


function linear(num) {
  for (let i = 0; i < num; i++) {
    console.log(i);
  }
}

linear(10); // 0 1 2 3 4 5 6 7 8 9

Time complexity : linear

Linear function
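A typical O(n) operation is a linear search, sketched here as an extra illustration: in the worst case it must visit every element once, so the work grows in direct proportion to the input size.

```python
# Linear search: scan the list front to back until the target appears.
def linear_search(items, target):
    for i, value in enumerate(items):
        if value == target:
            return i    # index of the first match
    return -1           # target absent: the whole list was visited

print(linear_search([4, 8, 15, 16, 23, 42], 23))  # 4
```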


$$O(n log(n))$$ : linearithmic complexity

Linearithmic complexity corresponds to execution time that grows with the input parameter multiplied by the logarithm (base 2) of that same parameter. It can be represented by two nested loops: the parent loop is linear and the child loop is logarithmic. The function grows only slightly faster than linearly, because of the parent.

Here we run it for 4. The logarithm base 2 of 4 is 2 because 2 × 2 = 4, so the loops execute 4 × 2 = 8 times.


function linearithmic(num) {
  for (let i = 1; i <= num; i++) {
    for (let j = 1; j < num; j *= 2) {
      console.log(i, j)
    }
  }
}

linearithmic(4); // 8 executions

Time complexity : linearithmic

Linearithmic function
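The textbook O(n log(n)) algorithm is merge sort, sketched here as an additional illustration: the list is halved about log2(n) times, and each level of halving does O(n) work merging sorted halves back together.

```python
# Merge sort: O(n log(n)) -- log2(n) levels of splitting, O(n) merging each.
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```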


$$O(n^2)$$ : quadratic complexity / polynomial complexity $$O(n^p)$$

From this section onwards, keep in mind that you can only use small inputs, because these functions perform a very large number of operations and therefore become very slow.

Quadratic complexity (n²) corresponds to execution time that grows with the square of the input size. The function grows very fast because the input size is multiplied by itself (power of 2), so execution is slow. It corresponds to a nested loop. It belongs to the polynomial functions, together with the other powers: cubic (n³), quartic (n⁴), and so on. The higher the power, the slower the function, because more operations are performed. Each additional level of nesting adds one to the power.

Here we run the loops for an input of 4 with different levels of nesting, each corresponding to a power.

In the comic strip, the character needs a custom-made clock, but it will take time, two months (several executions for one object).


function quadratic(num) {
  for (let i = 1; i <= num; i++) {
    for (let j = 1; j <= num; j++) {
      console.log(i, j)
    }
  }
}

quadratic(4); // 16 executions

function cubic(num) {
  for (let i = 1; i <= num; i++) {
    for (let j = 1; j <= num; j++) {
      for (let k = 1; k <= num; k++) {
        console.log(i, j, k)
      }
    }
  }
}

cubic(4); // 64 executions

function quartic(num) {
  for (let i = 1; i <= num; i++) {
    for (let j = 1; j <= num; j++) {
      for (let k = 1; k <= num; k++) {
        for (let l = 1; l <= num; l++) {
          console.log(i, j, k, l)
        }
      }
    }
  }
}

quartic(4); // 256 executions

Time complexity : quadratic

Quadratic function
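A classic O(n²) algorithm, added here as an extra illustration, is bubble sort: two nested passes over the data, roughly n × n comparisons in the worst case.

```python
# Bubble sort: nested loops give O(n^2) comparisons in the worst case.
def bubble_sort(items):
    items = list(items)          # work on a copy
    n = len(items)
    comparisons = 0
    for i in range(n):
        # After each pass, the largest remaining element has bubbled
        # to the end, so the inner range shrinks by one.
        for j in range(n - 1 - i):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items, comparisons

print(bubble_sort([4, 3, 2, 1]))  # ([1, 2, 3, 4], 6): 6 comparisons for n = 4
```

For n = 4 the comparison count is 3 + 2 + 1 = 6, i.e. n(n − 1)/2, which grows like n².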


$$O(2^n)$$ : exponential complexity

Exponential complexity corresponds to execution time that grows as a constant factor raised to the power of the input size. Execution is even slower because the number of operations is multiplied by that factor at each step. This can correspond to a recursive function. As with polynomial functions, the constant can change, for example $$3^n$$, $$4^n$$, etc., and the larger the base, the slower the execution, because there are more operations.

To illustrate this complexity, we can use a recursive function that calculates the power of a number: its result, base^exponent, grows exponentially with the input.


function power(base, exponent) {
  if (exponent === 0) {
    return 1
  }

  return base * power(base, exponent - 1)
}

console.log(power(2, 6)); // 64
console.log(power(3, 6)); // 729

Time complexity : exponential

Exponential function
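Note that the power function above makes only one recursive call per step, so its number of calls grows linearly; it is the computed value that grows exponentially. A function whose call count is itself roughly O(2^n) is the naive recursive Fibonacci, sketched here for comparison (the call-counting style is our own addition):

```python
# Naive recursive Fibonacci: each call spawns two more calls, so the
# number of calls roughly doubles as n grows -- exponential time.
def fib(n, counter):
    counter[0] += 1          # count this call
    if n < 2:
        return n
    return fib(n - 1, counter) + fib(n - 2, counter)

calls = [0]
print(fib(10, calls))   # 55
print(calls[0])         # 177 calls just for n = 10
```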


$$O(n!)$$ : factorial complexity

Finally, factorial complexity is growth in execution time relative to the factorial of the input size. It is the slowest complexity seen here. In mathematics, the factorial is a recursive multiplication: for example 6! = 6 × 5 × 4 × 3 × 2 × 1 = 720, and that result corresponds to the total number of executions.

720 is already large; one can imagine that it takes time. And if we merely double the input to 12, we reach hundreds of millions (479,001,600 to be exact). This is huge! It is therefore wise to choose a function with a faster complexity whenever possible.
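The figures above can be checked directly with Python's standard library:

```python
import math

# Verify the factorial values quoted in the text.
print(math.factorial(6))   # 720
print(math.factorial(12))  # 479001600
```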

To illustrate this complexity, we create a factorial function that we call in the condition of a for loop.

This is the traveling salesman problem. The character answers "three paths" because, first, any city can be the starting point, so it is removed from the count (4 − 1 = 3). The number of orderings of the remaining cities is a factorial (3! = 3 × 2 × 1 = 6). Finally, he divides by two, because travelling a route in one direction or in the opposite direction covers the same length, leaving 6 / 2 = 3 distinct paths.


function factorial(num) {
  if (num === 1) {
    return 1;
  }

  return num * factorial(num - 1);
}

function factorial_loop(num) {
  for (let i = 1; i <= factorial(num); i++) {
    console.log(i)
  }
}

factorial_loop(6); // 720 executions
Time complexity : factorial

Factorial function



We can compare the different complexities with Matplotlib, from the best to the worst (fastest to slowest). In red tones we find the slowest complexities (factorial in dark red), and in green the fastest (constant).


import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gamma  # continuous extension of the factorial

# Define each curve explicitly so the script is self-contained;
# x starts at 1 so the logarithms are defined.
x = np.linspace(1, 6, 100)

plt.axhline(0, label='$O(1)$', color='g')
plt.plot(x, np.log2(x), label='$O(log(n))$', color='yellow')
plt.plot(x, x, label='$O(n)$', color='orange') # identity
plt.plot(x, x**2, label='$O(n^2)$', color='red')
plt.plot(x, x * np.log2(x), label='$O(nlog(n))$', color='brown')
plt.plot(x, 2**x, label='$O(2^n)$', color='maroon')
plt.plot(x, gamma(x + 1), label='$O(n!)$', color='darkred')
plt.legend(loc='upper left')
plt.title('Big O', fontsize=12)
plt.savefig('bigo.png', bbox_inches='tight')

Different complexities of Big O plot

Big O notation is helpful for estimating the execution cost of a function. It characterizes complexity in time and in space, and it is essential for making the best choice with respect to the size of the data.

In the end, there are three broad categories: functions whose number of executions is much smaller than the input size, those whose number of executions follows the input size more or less, and finally those whose number of executions is much larger than the input size.


Source code