What is Big O?

Prathamesh More
5 min read · Mar 8, 2020


This is such an important concept. Big O time is the language and metric we use to describe the efficiency of algorithms. Master this concept — it matters both in interviews and in day-to-day programming.

Photo by Nicole Wolf on Unsplash

Let's imagine

You've got a file on a hard drive and you need to send it to your friend who lives in a different city. You want the file to reach your friend as fast as possible. How should you send it?

Most people's first thought would be email, FTP, or some other electronic transfer. That's reasonable, but not entirely correct.

If it's a small file, electronic transfer is fine — it would take 5–10 hours to get to an airport, hop on a flight, and then deliver the file to your friend.

What if the file is really large? Is it possible that it's faster to physically ship it via plane?

Yes, actually it is. A 1 TB file could take more than a day to transfer electronically. It would be much faster to send it via plane.

And what if there were no flights or cars available, but transferring the file was still important to you?

Time complexity

This is the concept of asymptotic runtime or big O time.

  1. Electronic transfer: O(s), where s is the size of the file. As the file size increases, the transfer time increases linearly.
  2. Airplane transfer: O(1). As the file size increases, it won't take any longer. The time is the same for all file sizes.
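To make the comparison concrete, here is a tiny sketch in JavaScript. The bandwidth and flight-time numbers are made-up assumptions, purely for illustration:

```javascript
// Rough model: electronic transfer grows with file size, a flight does not.
const MBITS_PER_SECOND = 100; // assumed upload bandwidth
const FLIGHT_HOURS = 10;      // assumed door-to-door flight time

// O(s): time grows linearly with the file size s
function electronicTransferHours(fileSizeGB) {
  const seconds = (fileSizeGB * 8 * 1000) / MBITS_PER_SECOND;
  return seconds / 3600;
}

// O(1): same time no matter how large the file is
function airplaneTransferHours(fileSizeGB) {
  return FLIGHT_HOURS;
}

console.log(electronicTransferHours(1));    // small file: well under an hour
console.log(electronicTransferHours(1000)); // 1 TB: ~22 hours, slower than the plane
```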
Big O Notation

Some common runtimes are O(1), O(log N), O(N log N), O(N), O(N²), and O(2^N).

In academia, big O, big Θ, and big Ω are all used to describe runtimes.

In industry (and in interviews), only big O is typically used.

Best Case, Worst Case, and Expected Case

The runtime of an algorithm can be described in three different ways: the best case, the worst case, and the expected case.

Suppose we run quicksort on an array of numbers. Quicksort picks a pivot, partitions the array around it, and then recursively sorts the two partitions.

Best case: if all the elements are equal, quicksort will, on average, just traverse through the array once. This is O(N).

Expected case: usually the pivot splits the array reasonably evenly. We expect a runtime of O(N log N).

Worst case: what if, unluckily, the pivot is repeatedly the biggest element in the array? Then each recursive call only shrinks the subarray by one element, and we get a runtime of O(N²).
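Quicksort is the classic example of an algorithm whose best, expected, and worst cases differ — O(N), O(N log N), and O(N²) respectively. A minimal sketch in JavaScript (using the first element as the pivot for simplicity; production implementations choose pivots more carefully):

```javascript
// Quicksort: expected O(N log N); worst case O(N^2) when the pivot
// is repeatedly the smallest or largest remaining element.
function quicksort(arr) {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr;
  const smaller = rest.filter(x => x < pivot);   // elements before the pivot
  const larger = rest.filter(x => x >= pivot);   // elements after the pivot
  return [...quicksort(smaller), pivot, ...quicksort(larger)];
}

console.log(quicksort([5, 2, 8, 1, 9])); // [1, 2, 5, 8, 9]
```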

We rarely discuss the best case time complexity, because it's not a very useful concept.

There is no particular relationship between best/worst/expected case and big O/Θ/Ω — they are separate concepts.

Space Complexity

Time is not the only thing that matters in an algorithm; the amount of memory — or space — it uses matters too. I won't go deep into space complexity here.

If we need to create an array of size n, this will require O(N) space. If the array is two-dimensional (n × n), this will require O(N * N), i.e. O(N²), space.
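For example, in JavaScript:

```javascript
// O(N) space: one array of length n
function makeArray(n) {
  return new Array(n).fill(0);
}

// O(N^2) space: an n-by-n grid (n rows, each of length n)
function makeGrid(n) {
  return Array.from({ length: n }, () => new Array(n).fill(0));
}

console.log(makeArray(3).length); // 3
console.log(makeGrid(3).length);  // 3 rows of 3 elements each
```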

Drop the Constants

It is very possible for O(N) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase as the input grows.

We drop constants in runtime. An algorithm that one might have described as O (2N) is actually O (N).

Examples :

  1. O(500) = O(1)
  2. O(13N²) = O(N²)
  3. O(N + 10) = O(N)
  4. O(1000N + 50) = O(N)
  5. O(N² + 5N + 8) = O(N²)
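For instance, code that walks the same array twice does 2N work, but we still call it O(N) — the constant factor is dropped:

```javascript
// Two separate passes over the array: 2N steps total.
// That is O(2N) = O(N) — we drop the constant factor.
function minAndMax(nums) {
  let min = Infinity;
  for (const x of nums) if (x < min) min = x; // first pass: N steps
  let max = -Infinity;
  for (const x of nums) if (x > max) max = x; // second pass: N steps
  return { min, max };
}

console.log(minAndMax([3, 7, 1, 9])); // { min: 1, max: 9 }
```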

Always remember,

Arithmetic operations are constant time.

e.g. C = A + B

O(1)

Variable assignments are constant time.

e.g. int a = 2;

O(1)

Accessing an element in an array or object is constant time.

In a loop, the complexity is the length of the loop times the complexity of whatever happens inside the loop.

for (var i = 0; i < N; i++) {
  console.log(i);
}

Time complexity: O(N)

Drop the Non-dominant Terms

What do you do about an expression such as O(N² + N)? That second N isn't exactly a constant, but it's not especially important.

We already said that we drop constants. Therefore, O(N² + N²) would be O(N²).

You should drop the non-dominant terms.

O(N² + N) becomes O(N²).

O(N + log N) becomes O(N).

O(5 * 2^N + 1000N¹⁰⁰) becomes O(2^N).
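As a sketch, here is code whose exact step count is N² + N. The N² term dominates, so the runtime is O(N²):

```javascript
// The nested loop does N^2 steps and the single loop does N steps:
// N^2 + N steps overall, but the non-dominant N term is dropped — O(N^2).
function pairsThenItems(items) {
  let steps = 0;
  for (const a of items) {
    for (const b of items) {
      steps++; // N^2 iterations
    }
  }
  for (const a of items) {
    steps++;   // N iterations
  }
  return steps;
}

console.log(pairsThenItems([1, 2, 3])); // 12 (9 + 3)
```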

Multi-Part Algorithms

Suppose we have an algorithm that has two steps. When do we add the runtimes, and when do we multiply them?

Add the runtimes: O(A + B)

for (int a : arrA) {
  print(a);
}

for (int b : arrB) {
  print(b);
}

Multiply the runtimes: O(A * B)

for (int a : arrA) {
  for (int b : arrB) {
    print(a + "," + b);
  }
}

Tips:

If the algorithm is in the form "do this, then when you're all done, do that", then you add the runtimes.

If the algorithm is in the form "do this for each time you do that", then you multiply the runtimes.

Log N Runtimes

We commonly see O (log N) in runtimes. Where does this come from?

When you see a problem where the number of elements in the problem space gets halved (or doubled) each step, it will likely have an O(log N) runtime.

e.g.

for (int i = 1; i < N; i = i * 2) {
  //do something
}

Here i doubles on each iteration:
1 * 2 = 2
2 * 2 = 2^2
4 * 2 = 2^3
...
After K iterations, i = 2^K.
The loop stops when i >= N, i.e. when 2^K >= N,
so 2^K = N gives K = log N iterations: O(log N).
for (int i = N; i > 1; i = i / 2) {
  //do something
}

Here, assume N = 16. On each iteration, i is divided by 2:

i = 16
i = 8 /* divide by 2 */
i = 4 /* divide by 2 */
i = 2 /* divide by 2 */
i = 1 /* divide by 2 — loop stops */

That is log 16 = 4 halvings, so the runtime is O(log N).
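Binary search is the classic example of this pattern: it halves the search range on every step, so it runs in O(log N). A minimal sketch:

```javascript
// Binary search over a sorted array: the range [lo, hi] halves each
// iteration, so at most about log2(N) iterations run — O(log N).
function binarySearch(sorted, target) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid; // found: return its index
    if (sorted[mid] < target) lo = mid + 1; // discard the lower half
    else hi = mid - 1;                      // discard the upper half
  }
  return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
```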

More learning resources online:

https://www.geeksforgeeks.org/understanding-time-complexity-simple-examples/

https://www.youtube.com/watch?v=lj3E24nnPjI

https://www.youtube.com/watch?v=9TlHvipP5yA
