Below are some examples of two nested loops where i and j are the counter variables for the outer and inner loop respectively. To help you develop some intuition for the time complexity of these examples, we graphically represent the operation of the loops on a 2D array. i and j correspond to the coordinates of a cell on the array: (i,j) = (1,1) is the upper left cell, and i increases to the right, j increases down.
As the body of the inner loop "visits" different cells on the array, we shade those cells. If each visit takes a constant amount of time, the time complexity of the loops corresponds to the area of the array that is covered.
Suggested reading: Shaffer, section 3.5
For each code fragment below we give its time complexity. (The original figures showing the cells visited on an n x n array, with the outer/inner loops corresponding to columns/rows respectively, are omitted here.)
```c
for ( i = 1; i <= n; ++i )
    for ( j = 1; j <= n; ++j )
        printf(...);
```

Time complexity: Theta(n^2)
```c
for ( i = 1; i <= n; ++i )
    if ( i <= m ) {
        for ( j = 1; j <= n; ++j )
            printf(...);
    }
```

Time complexity: Theta(n*min{n,m})

This is one example of how complexity can be a function of more than one variable.
```c
b = n;
for ( i = 1; i <= n; ++i ) {
    for ( j = 1; j <= n; ++j )
        --n;
    n = b;
}
```

Time complexity: Theta(n*(n/2)) = Theta(n^2)

In each pass of the outer loop, j climbs while n falls, so they meet in the middle and the inner loop stops after about n/2 iterations; n is then restored to b before the next outer test.
```c
for ( i = 1; i <= n; ++i )
    for ( j = 1; j <= i; ++j )
        printf(...);
```

Time complexity: Theta(1+2+3+...+n) = Theta(n*(n+1)/2) = Theta(n^2)
```c
b = n;
for ( i = 1; i <= b; ++i )
    for ( j = 1; j <= n; ++j )
        --n;
```

Here we have the area under a decaying exponential curve! Each pass of the outer loop roughly halves n, so the pass lengths form a geometric series.

Time complexity: Theta(n/2+n/4+n/8+n/16+...) = Theta(n)
```c
for ( i = 1; i <= n; ++i )
    for ( j = 1; j <= n; ++j )
        --n;
```

Unlike the previous example, here the exponential curve is cut off early, because the outer loop's bound also shrinks as n decreases.

Time complexity: Theta(n/2+n/4+n/8+n/16+...) = O(n)

Can a better bound be found? I suspect the tightest bound you can get is something like O((1-0.5^k)*n), where k is such that n = k*2^k.
```c
for ( i = 1; i <= n; i <<= 1 )
    for ( j = 1; j <= n; ++j )
        printf(...);
```

Note that "i <<= 1" is just another way of writing "i *= 2". Which way of writing it is more efficient? And which way should we write it, assuming we have a good optimizing compiler?

Time complexity: Theta(n*log(n))
```c
for ( i = 1; i <= n; i *= 2 )
    for ( j = 1; j <= i; ++j )
        printf(...);
```

Time complexity: Theta(1+2+4+8+...+n) = Theta(2*n-1) = Theta(n)
```c
for ( i = 1; i <= n; ++i )
    for ( j = i; j > 0; j &= j - 1 )
        printf(...);
```

What is the effect of "j &= j-1"? This produces a self-similar, fractal pattern!

Time complexity: Theta(n*log(n))