Why we are happy with \Theta(n \log(n)) sorting.

Sorting is a fundamental problem in computer science. A well-known result is that good sorting algorithms run with a time complexity of \Theta(n \log(n)) — that is, if the sorting is based on pair-wise comparison of objects. The neat thing is that we can prove that any sorting algorithm based on pair-wise comparisons requires \Omega(n \log(n)) comparisons.
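As an illustration (my own sketch, not part of the original argument), here is a minimal merge sort in Python. It touches elements only through the `<` comparison and performs O(n \log(n)) comparisons, so it attains the bound discussed below:

```python
# A minimal merge sort sketch: elements are inspected only via `<`,
# never via their values directly.
def merge_sort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if right[j] < left[i]:        # the only pair-wise comparison
            out.append(right[j]); j += 1
        else:
            out.append(left[i]); i += 1
    return out + left[i:] + right[j:]
```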

When we say “based only on pair-wise comparison of objects”, we mean that we do not use the values of the objects directly. We only use a comparator, a function of the form f_<(a,b) = \begin{cases} +1 & \text{if } a \ge b \\ -1 & \text{otherwise}\end{cases}. The same conclusions hold if the comparator returns a special value when a = b.
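Concretely, such a comparator might look like this in Python (a sketch; the names `compare` and `comparison_sort` are my own). The sort only ever calls the comparator and never inspects the values themselves:

```python
from functools import cmp_to_key

def compare(a, b):
    """The comparator from the text: +1 if a >= b, -1 otherwise."""
    return 1 if a >= b else -1

def comparison_sort(items, cmp=compare):
    # cmp_to_key wraps the comparator so that sorted() orders elements
    # using only its -1/+1 answers.
    return sorted(items, key=cmp_to_key(cmp))
```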

## Proof

Consider an arbitrary list a = [a_1, a_2, \dots, a_n] in which every a_i is comparable to every a_j; assume for simplicity that all elements are distinct. We know that there are n! possible permutations of the list.

A program that sorts arbitrary lists should be able to apply any of the n! permutations to its input. If this were not the case, one could easily construct a list that the algorithm can’t sort: namely, the result of applying the inverse of a missing permutation to [1, 2, \dots, n].
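To make the construction concrete, a small sketch (helper names are my own): if a sorter cannot realize some permutation p, it fails on the list obtained by applying the inverse of p to [1, 2, \dots, n], because sorting that list requires applying exactly p.

```python
def apply_permutation(p, xs):
    # Place xs[i] at position p[i].
    out = [None] * len(xs)
    for i, pi in enumerate(p):
        out[pi] = xs[i]
    return out

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return inv

p = [2, 0, 1]                                  # a permutation the sorter would need
lst = apply_permutation(inverse(p), [1, 2, 3])  # the adversarial input
# Sorting lst amounts to applying p to it:
assert apply_permutation(p, lst) == [1, 2, 3]
```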

Note that the permutation the algorithm applies depends only on the values in the list, since the list is its only input, and our program can only gather information about the list by doing pair-wise comparisons. Let’s call the number of comparisons the algorithm does n_c. Given n_c comparisons, there are at most 2^{n_c} different outcomes, and hence at most 2^{n_c} different permutations the algorithm can apply. Since it should be able to produce all n! permutations, we need:

2^{n_c} \ge n!

or, by taking \log_2 of both sides (both sides are positive integers, and \log_2 is increasing, so the inequality is preserved):

n_c \ge \log_2(n!)
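We can watch this bound in action with a small experiment (an illustrative sketch; `count_comparisons` is a hypothetical helper). The bound is a worst-case statement, so we take the maximum comparison count over all n! input orders:

```python
import math
from itertools import permutations

def count_comparisons(items):
    """Sort `items` with the built-in sort and count the pair-wise
    comparisons it performs."""
    count = 0

    class Counted:
        def __init__(self, v):
            self.v = v

        def __lt__(self, other):
            nonlocal count
            count += 1
            return self.v < other.v

    sorted(Counted(v) for v in items)
    return count

# Any correct comparison sort must, on at least one of the n! input
# orders, spend at least ceil(log2(n!)) comparisons.
n = 6
worst = max(count_comparisons(p) for p in permutations(range(n)))
bound = math.ceil(math.log2(math.factorial(n)))  # ceil(log2(720)) = 10
```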

Let’s look at \log_2(n!) a little closer:

\log_2(n!) = \log_2(n) + \log_2(n-1) + \dots + \log_2(2) + \log_2(1)

We make a lower bound by cutting off the terms after \log_2(\lceil n/2\rceil); each of the at least n/2 remaining terms is itself at least \log_2(n/2):

\log_2(n!) \ge \log_2(n) + \log_2(n-1) + \dots + \log_2(\lceil n/2\rceil) \ge \frac{n}{2}\log_2\left(\frac{n}{2}\right)

Thus we have:

n_c \ge \log_2(n!) \ge \frac{n}{2}\log_2\left(\frac{n}{2}\right)

Or, since \frac{n}{2}\log_2\left(\frac{n}{2}\right) = \frac{n}{2}\log_2(n) - \frac{n}{2}, we conclude that n_c = \Omega(n\log_2(n)).
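As a final sanity check (my own sketch), the inequality \log_2(n!) \ge \frac{n}{2}\log_2(\frac{n}{2}) can be verified numerically, using `math.lgamma` to evaluate \log(n!) without computing the huge factorial:

```python
import math

# Verify log2(n!) >= (n/2) * log2(n/2) for a range of n.
# lgamma(n + 1) = ln(n!), so dividing by ln(2) gives log2(n!).
for n in range(2, 10_001):
    log2_factorial = math.lgamma(n + 1) / math.log(2)
    lower_bound = (n / 2) * math.log2(n / 2)
    assert log2_factorial >= lower_bound
print("inequality holds for n = 2 .. 10000")
```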