Big-O: Algorithm analysis explained

Mindo Ndirangu
3 min read · Jul 23, 2021

Algorithms are an essential part of computer science because they can be studied and understood in a machine-independent manner. As a developer, understanding how to analyse algorithms is just as important as writing them, and as Steve Yegge aptly puts it in "Get that job at Google":

Algorithm Complexity: you need to know Big-O. It’s a must. If you struggle with basic big-O complexity analysis, then you are almost guaranteed not to get hired.

What is Big-O analysis in Computer Science?

Big-O notation describes the complexity of an algorithm when measuring its efficiency: how well the algorithm scales as the size of the dataset grows. It is written with the syntax O(n), where n is the size of the input.

Why is Big-O used in algorithm analysis?

If we instead asked "how much time does it take to run this function?", the answer would depend on several factors, such as the choice of programming language, the type of processor, the speed of memory access, and so on. We therefore need a standard form of notation that is simple enough to describe complexity without the nitty-gritty details, yet still captures how the algorithm's running time grows with its input. This is why we say algorithms can be studied and understood in a machine-independent manner.

How is Big-O analysis accomplished?

A great model for thinking about Big-O analysis is the RAM model described by Steven S. Skiena in The Algorithm Design Manual.

R.A.M. in this case stands for Random Access Machine. It is a model for thinking about how computers operate, and it has the following characteristics:

  1. Simple operations such as assignment (=), addition (+), and subtraction (-) take one step each.
  2. Loops and subroutines are not simple steps; their cost depends on the number of iterations and on what the subroutine does.
  3. Each memory access is exactly one step.

Once again, this is just a model, and there are plenty of arguments for why it cannot accurately represent how computer systems work. But it is perfectly good for our analysis, in the same way an architect can assume the earth is flat when planning a new building. The earth is in fact (roughly) spherical, but accounting for that curvature would not improve the building.
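To make this concrete, here is a rough sketch of counting steps under the R.A.M. model. The snippet and the exact step counts are illustrative only, not taken from the book:

function sumFirstTwo(arr) {
  let a = arr[0]; // one memory access plus one assignment: 2 steps
  let b = arr[1]; // another 2 steps
  return a + b;   // one addition plus one return: 2 steps
}

Roughly six simple steps in total, no matter how long the array is. This is exactly the kind of reasoning the examples below rely on.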

Examples

Using the properties of our R.A.M. model, consider some small functions that each take an array of items as input.

Constant Complexity

Consider getting the length of the array. Here the method takes the same amount of time to complete irrespective of the input size. The array could have 100k items or just one, but the function still requires only one step, i.e. returning the length of the array. This is what is termed constant complexity, written O(1).

function someFunction(arr) {
  return arr.length; // a single step regardless of the size of arr: O(1)
}

Linear Complexity

Staying with the array, we get linear runtime when we perform some action on each of the items in the array, for example with a for loop or via recursion. The function below visits every element once, so its running time grows in direct proportion to the length of the array: O(n).

function someOtherFunction(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]); // touches each element exactly once: O(n)
  }
}

Quadratic Complexity

This is best explained with two nested for loops: for every iteration of the outer loop, the inner loop in turn runs over every item in the array, so the total work grows with the square of the input size, O(n²).

function yetAnotherFunction(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i] * arr[j]); // runs once for every pair of items: O(n²)
    }
  }
}

Logarithmic Complexity

This complexity is considered fairly efficient. The time taken grows with the size of the data, but not proportionally: as the dataset gets larger, the extra time needed for each additional item shrinks, so the algorithm spends more time per item on small datasets than on large ones.
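A classic example of logarithmic complexity is binary search on a sorted array: each comparison discards half of the remaining elements, so only about log₂(n) steps are needed. A minimal sketch, assuming the input array is already sorted in ascending order:

function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) return mid;  // found the target
    if (sortedArr[mid] < target) low = mid + 1; // discard the lower half
    else high = mid - 1;                        // discard the upper half
  }
  return -1; // target is not in the array
}

For an array of a million items this needs at most around 20 comparisons, which is why O(log n) algorithms scale so well.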

Conclusion

That summarizes Big-O analysis. Thank you for reading this far.

