By now, you’ve probably heard that Big O notation is used to measure how an algorithm’s time and space requirements grow with input size. It’s a handy approach for optimizing resource utilization. But why is that still important? Modern hardware hosts powerful CPUs with many times more RAM and disk space than 10 or even 5 years ago. Any decent system configuration, especially a production environment, will surely have enough resources to run any algorithm.

So why do we need to be concerned with CPU and Memory usage today?

Setting aside the obvious fact that hardware costs money, the answer is quite simple: modern hardware may be supercharged in comparison to 10 years ago, but it now has to deal with much bigger data sets than before. Time is an expensive requirement, and one way to satisfy it is to write efficient algorithms that won’t leave the user hanging on the other side of the app.

OK, but how do we measure algorithm efficiency?

Timing the execution may seem like the answer, but seconds and minutes will vary depending on external factors like hardware, external services, and connection speed, which makes timing a good approach only when used against the same environment. That is doable, of course, but it has one big downside: it requires an actual environment. There has to be a way to measure efficiency that doesn’t require hardware.
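To see why wall-clock timing is fragile, here is a minimal timing sketch (illustrative only; the class and method names are mine). The printed duration will differ between machines and even between runs on the same machine, because of JIT warm-up, garbage collection, and CPU load:

```java
public class TimingExample {

    // Sum the elements of an array -- the operation we want to time.
    static long sum( int[] array ) {
        long total = 0;
        for ( int value : array ) {
            total += value;
        }
        return total;
    }

    public static void main( String[] args ) {
        int[] array = new int[1_000_000];
        java.util.Arrays.fill( array, 1 );

        long start = System.nanoTime();
        long result = sum( array );
        long elapsed = System.nanoTime() - start;

        // The elapsed time depends on hardware, JIT warm-up, GC pauses, etc.
        System.out.println( "Sum: " + result + ", took " + elapsed + " ns" );
    }
}
```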

**First example:**

```java
public void firstExample( Integer[] intArray ) {
    int additionFactor = 1;
    int multiplyFactor = 2;
    int divisionFactor = 3;
    int subtractionFactor = 4;

    additionFactor += Math.random();
    multiplyFactor += Math.random();
    divisionFactor += Math.random();
    subtractionFactor += Math.random();

    Integer multiplyResult = null;
    Integer additionResult = null;
    Integer divisionResult = null;
    Integer subtractionResult = null;

    for ( int i = 0; i < intArray.length; i++ ) {
        Integer currentElement = intArray[i];

        System.out.println( String.format( "Value before multiplication: %d", currentElement.intValue() ) );
        multiplyResult = currentElement * multiplyFactor;
        System.out.println( String.format( "Value after multiplication: %d", multiplyResult.intValue() ) );

        System.out.println( String.format( "Value before addition: %d", currentElement.intValue() ) );
        additionResult = currentElement + additionFactor;
        System.out.println( String.format( "Value after addition: %d", additionResult.intValue() ) );

        System.out.println( String.format( "Value before division: %d", currentElement.intValue() ) );
        divisionResult = currentElement / divisionFactor;
        System.out.println( String.format( "Value after division: %d", divisionResult.intValue() ) );

        System.out.println( String.format( "Value before subtraction: %d", currentElement.intValue() ) );
        subtractionResult = currentElement - subtractionFactor;
        System.out.println( String.format( "Value after subtraction: %d", subtractionResult.intValue() ) );
    }

    // Carry the running maximum forward instead of discarding it on each call.
    Integer maxValue = Math.max( additionResult, subtractionResult );
    maxValue = Math.max( maxValue, multiplyResult );
    maxValue = Math.max( maxValue, divisionResult );
    System.out.println( String.format( "Max value is: %d", maxValue ) );
}
```

**Second example:**

```java
public void secondExample( Integer[] intArray ) {
    for ( int i = 0; i < intArray.length; i++ ) {
        for ( int j = 0; j < intArray.length; j++ ) {
            System.out.println( intArray[i] );
        }
    }
}
```

And now the big question: which of the two examples above will run more efficiently?

Don’t worry if you don’t have the answer now; we will return to it later.

That is where Big O notation comes in. It measures how long, in terms of processing time or working space, it will take to complete an algorithm based on the size of its input. In other words, it’s a way to measure how well a computer algorithm scales as the amount of processed data increases. And since this is a theoretical approach, the measuring is done just by looking at the code, which elegantly removes all external distractions like hardware, connection speed, etc. So Big O notation is a *nice-to-have* tool in your arsenal.

#### Big O definition

from xlinux.nist.gov: *A theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items…*

What that means is that Big O can be used to measure two things:

- time complexity: the CPU time needed to complete the function;
- space complexity: the memory needed to complete the function;

Also, the measurement is referred to as theoretical because it will not give an exact time estimate for all operations that will be performed. Instead, it relates the number of operations to the size of the input.

**O( 1 )**

If the algorithm always performs the same number of operations regardless of input size, we say that it takes **constant** time.

```java
public void printFirstElement( int[] array ) {
    System.out.println( array[0] );
}
```

**O( log ( n ) )**

If the algorithm performs operations on only some of the input elements rather than all of them, we say that it takes **logarithmic** time, where **n** is the size of the input. Binary search is a great example, because it halves the search range on each iteration, thus skipping most elements.

```java
public void binarySearchForValue( int[] array, int value ) {
    int lowIndex = 0;
    int highIndex = array.length - 1;

    while ( lowIndex <= highIndex ) {
        // Written this way to avoid integer overflow for very large indices.
        int middleIndex = lowIndex + ( highIndex - lowIndex ) / 2;

        if ( array[middleIndex] < value ) {
            lowIndex = middleIndex + 1;
        } else if ( array[middleIndex] > value ) {
            highIndex = middleIndex - 1;
        } else {
            System.out.println( "\nFound a Match for " + value + " at Index " + middleIndex );
            break;
        }
    }
}
```

**O( n )**

If the algorithm performs an operation on every element of the input, we say that it takes **linear** time, where **n** is the size of the input. A very good example here is any linear search algorithm, because it iterates through all elements in the array exactly once.

```java
public void linearSearchForValue( int[] array, int value ) {
    boolean valueInArray = false;
    String indexesWithValue = "";

    for ( int i = 0; i < array.length; i++ ) {
        if ( array[i] == value ) {
            valueInArray = true;
            indexesWithValue += i + " ";
        }
    }

    if ( valueInArray ) {
        System.out.println( "Value found at: " + indexesWithValue );
    }
}
```

**O( n² )**

If the algorithm performs **n** operations on every element of the input, resulting in **n²** operations, we say that it takes **quadratic** time, where **n** is the size of the input. This is generally considered **bad** performance! Bubble sort is the right example here because it contains two nested loops. If each loop has time complexity O(n), the total is O(n²), since bubble sort always goes through all elements of its array.

```java
public void bubbleSort( int[] array ) {
    // Note the loop condition: i > 0, so the outer loop actually executes.
    for ( int i = array.length - 1; i > 0; i-- ) {
        for ( int j = 0; j < i; j++ ) {
            // Swaps when the left value is smaller, so this sorts in descending order.
            if ( array[j] < array[j + 1] ) {
                swapValues( array, j, j + 1 );
            }
        }
    }
}

public void swapValues( int[] array, int indexOne, int indexTwo ) {
    int temp = array[indexOne];
    array[indexOne] = array[indexTwo];
    array[indexTwo] = temp;
}
```

Big O notation has more classes, although most of them are rarely encountered in mainstream programming. Here is a somewhat extensive list:

- O(1) – constant
- O(log(n)) – logarithmic
- O((log(n))^c) – poly-logarithmic
- O(n) – linear
- O(n²) – quadratic
- O(n^c) – polynomial
- O(c^n) – exponential

You should keep your code away from O(n²) or anything further down that list, like O(n^c) and O(c^n), because they produce extremely bad efficiency.
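To get a feel for how quickly these classes diverge, here is a small sketch (illustrative; the class and method names are mine) that returns the number of "basic operations" each class implies for a given input size:

```java
public class GrowthRates {

    // Number of "basic operations" for each complexity class at input size n.
    static double operations( String complexityClass, int n ) {
        switch ( complexityClass ) {
            case "constant":    return 1;
            case "logarithmic": return Math.log( n ) / Math.log( 2 );
            case "linear":      return n;
            case "quadratic":   return (double) n * n;
            case "exponential": return Math.pow( 2, n );
            default: throw new IllegalArgumentException( complexityClass );
        }
    }

    public static void main( String[] args ) {
        for ( int n : new int[]{ 1, 10, 100, 1000 } ) {
            System.out.printf( "n=%-5d log2(n)=%-8.1f linear=%-6.0f n^2=%-10.0f%n",
                    n,
                    operations( "logarithmic", n ),
                    operations( "linear", n ),
                    operations( "quadratic", n ) );
        }
    }
}
```

Even at n = 1000, the quadratic class already implies a million operations while the logarithmic one implies about ten.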

#### Ignore the constants

When it comes to measuring CPU efficiency, Big O notation is the right tool for the job because of the way it calculates complexity: the time required by a function is proportional to the number of “basic operations” that it performs.

Example basic operations would be:

- arithmetic operation (e.g., +, *)
- assignment (e.g. x := 0)
- test operator (e.g., x = 0)
- read (of a primitive type: integer, float, character, boolean)
- write (of a primitive type: integer, float, character, boolean)

Let’s say **n** is the length of the input collection and your code performs 50 operations on **n**, plus 30 more on **n²**, and finally 20 simple operations. The arithmetic looks like this:

50n + 30n² + 20 = ?

if **n == 1**

50 + 30 + 20 = 100; so far so good, the constant 20 operations still leave an impact on the overall performance.

if **n == 2**

100 + 120 + 20 = 240; the constant 20 operations are no longer a major factor.

if **n == 5**

250 + 750 + 20 = 1020; at this point you can see that the constant operations become insignificant and can safely be ignored, while the n² term already dominates.
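The arithmetic above can be checked with a quick sketch (the 50/30/20 coefficients are just the hypothetical operation counts from the text):

```java
public class IgnoreConstants {

    // Total operations for the hypothetical function: 50n + 30n^2 + 20.
    static long totalOperations( long n ) {
        return 50 * n + 30 * n * n + 20;
    }

    public static void main( String[] args ) {
        for ( long n : new long[]{ 1, 2, 5, 1000 } ) {
            long total = totalOperations( n );
            // How much of the total the constant 20 operations account for.
            double constantShare = 100.0 * 20 / total;
            System.out.printf( "n=%-5d total=%-12d constant part: %.4f%%%n", n, total, constantShare );
        }
    }
}
```

By n = 1000 the constant term accounts for well under a thousandth of a percent of the total, which is exactly why Big O drops it.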

Ignoring the constants is an important rule when using Big O notation!

#### Measuring Time Complexity

### In the case of a sequence of **statements**:

```
statement 1;
statement 2;
…
statement k;
```

The total time is found by adding the times for all statements:

total time = time(statement 1) + time(statement 2) + … + time(statement k).

If each statement is “simple” (only involves basic operations) then the time for each statement is constant and the total time is also constant: O(1).

### In the case of **if-then-else**:

```
if (condition) then
    block 1 (sequence of statements)
else
    block 2 (sequence of statements)
end if;
```

Here, either block 1 will execute, or block 2 will execute. Therefore, the worst-case time is the slower of the two possibilities:

- max(time(block 1), time(block 2))
- If block 1 takes O(1) and block 2 takes O(N), the if-then-else statement would be O(N).
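A sketch of such a branch in Java (illustrative; the method name is mine) might look like this:

```java
public class BranchComplexity {

    // Worst case is the O(n) branch, so the method as a whole is O(n).
    public static int firstOrSum( int[] array, boolean takeFirst ) {
        if ( takeFirst ) {
            // O(1): a single array access.
            return array[0];
        } else {
            // O(n): touches every element once.
            int sum = 0;
            for ( int value : array ) {
                sum += value;
            }
            return sum;
        }
    }
}
```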

### In the case of **loops**:

```
for I in 1 .. N loop
    sequence of statements
end loop;
```

The loop executes N times, so the sequence of statements also executes N times. If we assume the statements are O(1), the total time for the for loop is N * O(1), which is O(N) overall.

### In the case of **nested loops**:

```
for I in 1 .. N loop
    for J in 1 .. M loop
        sequence of statements
    end loop;
end loop;
```

The outer loop executes N times. Every time the outer loop executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N * M times. Thus, the complexity is O(N * M).
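The same nested loops in Java (a sketch; the counter just makes the N * M total visible):

```java
public class NestedLoops {

    // Returns how many times the innermost statement executes: exactly n * m.
    public static int countInnerExecutions( int n, int m ) {
        int executions = 0;
        for ( int i = 0; i < n; i++ ) {
            for ( int j = 0; j < m; j++ ) {
                executions++; // the "sequence of statements"
            }
        }
        return executions;
    }
}
```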

In the common special case where the stopping condition of the inner loop is J < N instead of J < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N²).

#### Measuring space complexity

Space complexity means memory efficiency and is measured the same way as time complexity, except that instead of counting how many operations are performed, space complexity counts how many objects are allocated.

Only newly created objects are counted, so any already existing objects, like input parameters or global variables, are not included.

Usually time and space complexity are locked in a trade-off, so concentrating on one will probably undermine the other. The best approach is a balance between both, though I personally prefer saving time over saving memory, for two reasons:

- Modern hardware provides enough memory;
- Fewer variables mean less readable code;

Let me know what you think about that in the comments.

**O( 1 )**

No matter the size of the input, the number of newly created objects stays constant:

```java
public void printFirstElement( Integer[] intArray ) {
    Integer currentElement = intArray[0];
    System.out.println( currentElement );
}
```

**O( n )**

The count of newly created objects grows in proportion to the input size:

```java
public void printAllElements( Integer[] intArray ) {
    for ( int i = 0; i < intArray.length; i++ ) {
        Integer currentElement = intArray[i];
        System.out.println( currentElement );
    }
}
```

I am not going to cover all the notation classes here, because by now you can probably guess how they work.

#### Answer

Finally, let’s get back to the initial question about the first and second examples at the start of this page.

Same question as before: which example will run more efficient?

This time, however, you can cleverly answer: “In terms of time or space complexity?” Well, let’s have both.

Now that you know how to calculate time and space complexity, it’s easy to see that the first example is more efficient from a CPU-time point of view, because it has only one loop. No matter how big an array is passed in, it will be iterated only once, giving O(n); and when using Big O, all the simple operations are ignored. The second example, by contrast, iterates over the passed-in array inside another iteration over the same array, resulting in O(n²). As already discussed, that may not make much difference for 5 elements, but it will for 5000, or even 5000000000, and so on.

On the other hand, the second example is more memory efficient, because it creates fewer new objects than the first example and thus requires less memory to execute.
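If you want to convince yourself about the time side, here is a stripped-down sketch (illustrative; the print statements from the original examples are left out) that counts loop iterations for both shapes:

```java
public class ExampleComparison {

    // Mirrors the first example's single loop: n iterations.
    public static long singleLoopIterations( int n ) {
        long count = 0;
        for ( int i = 0; i < n; i++ ) {
            count++;
        }
        return count;
    }

    // Mirrors the second example's nested loops: n * n iterations.
    public static long nestedLoopIterations( int n ) {
        long count = 0;
        for ( int i = 0; i < n; i++ ) {
            for ( int j = 0; j < n; j++ ) {
                count++;
            }
        }
        return count;
    }
}
```

For n = 5000 the single loop performs 5000 iterations while the nested loops perform 25 million, which is the O(n) versus O(n²) gap in concrete numbers.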

#### Actual Usage

From my point of view, Big O is not used much in actual development work, which is a big shame. I have 8 years of Java development behind me, and the only place Big O ever came in handy, unfortunately, was interview questions. That is why I earlier called it a *nice-to-have* tool in your arsenal.

As a matter of fact, I will ask you to participate in the poll below, so everyone can get an idea of the actual usage of Big O notation in everyday development work.

#### Links & Sources

- Useful MIT document on Big O basics;
- Excellent explanation from InterviewCake;
- Great video with examples from Derek Banas’s YouTube channel;