How do we know that sorting is intrinsically an n lg n problem? How do we know that we can't do better with a sufficiently ingenious algorithm? We are working under the assumption that the only mechanism for putting elements in order is the comparison of two elements.

Imagine sorting to be selecting the one and only correct permutation from all possible permutations of n elements. How many ways are there to arrange the elements?

    n! = n(n-1)(n-2)(n-3)...(3)(2)(1)

For example, 5! = 5 x 4 x 3 x 2 x 1 = 120, and 6! = 720. At best, a single comparison cuts the number of remaining candidates in half. So after k comparisons:

    n! / 2^k <= 1
    2^k >= n!
    k >= lg(n!)

We need lg(n!) comparisons, minimum, to sort n items. But what is lg(n!)?

    lg(n!) = lg( n(n-1)(n-2)(n-3)...(3)(2)(1) )
           = lg n + lg(n-1) + lg(n-2) + lg(n-3) + ... + lg 3 + lg 2 + lg 1

The largest n/2 of those terms are each at least lg(n/2), so the sum is Theta(n lg n): no comparison-based sort can beat n lg n.

If you have a recursive algorithm, and the running time looks something like

    T(n) = aT(n/b) + f(n)

this means that the non-recursive part of the algorithm runs in f(n) time, the algorithm makes a recursive calls at each step, and each recursive call is on 1/b of the elements. For example, merge sort:

    T(n) = 2T(n/2) + n    <== recurrence relation

Compare n^(log_b(a)) and f(n). The running time of your algorithm is whichever is slower. If they're the same, then the running time is that thing multiplied by lg n.

Looking at merge sort: a = 2 and b = 2, so log_2(2) = 1. Compare n^1 and n to see which is slower. They are the same, so the running time is Theta(n lg n).

Another example:

    T(n) = 3T(n/2) + n

Compare n^(log_2(3)) and n, that is, n^1.58 vs. n. We take what is slower and decide that the running time is Theta(n^(log_2(3))), or Theta(n^1.58).

Look at binary search. Binary search uses one recursive call, and it uses recursion on half the data. The non-recursive part runs in constant time:

    T(n) = 1T(n/2) + 1

Compare n^(log_2(1)) = n^0 = 1 with f(n) = 1. They are the same, so the running time is 1 multiplied by lg n. The runtime of binary search is Theta(lg n).
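To see the lg(n!) lower bound numerically, here is a small Python sketch (the lg_factorial name is just for illustration) that sums lg k for k = 1..n and compares the result against n lg n:

    import math

    def lg_factorial(n):
        # lg(n!) = lg n + lg(n-1) + ... + lg 2 + lg 1
        return sum(math.log2(k) for k in range(1, n + 1))

    for n in (5, 6, 100, 1000):
        # lg(n!) is the minimum number of comparisons; n lg n grows the same way.
        print(n, round(lg_factorial(n), 1), round(n * math.log2(n), 1))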
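As one concrete instance of the T(n) = 2T(n/2) + n recurrence above, here is a minimal merge sort sketch in Python; the merge step is the linear, non-recursive f(n) = n work:

    def merge_sort(items):
        # Two recursive calls (a = 2), each on half the input (b = 2).
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))

    def merge(left, right):
        # The non-recursive work: merging two sorted halves takes linear time, f(n) = n.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged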
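When f(n) is a plain power n^d, the comparison of n^(log_b(a)) against f(n) can be mechanized. The helper below is a hypothetical sketch (not a library function) of that simplified case analysis:

    import math

    def master_theorem(a, b, d):
        # Solves T(n) = a*T(n/b) + n^d for constants a >= 1, b > 1, d >= 0.
        critical = math.log(a, b)          # exponent of n^(log_b(a))
        if math.isclose(critical, d):
            return f"Theta(n^{d} * lg n)"
        if critical > d:
            return f"Theta(n^{critical:.2f})"
        return f"Theta(n^{d})"

    print(master_theorem(2, 2, 1))   # merge sort:        Theta(n^1 * lg n)
    print(master_theorem(3, 2, 1))   # T(n) = 3T(n/2)+n:  Theta(n^1.58)
    print(master_theorem(1, 2, 0))   # binary search:     Theta(n^0 * lg n) = Theta(lg n)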
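Finally, a minimal recursive binary search sketch matching T(n) = 1T(n/2) + 1: one recursive call on half of a sorted list, plus constant non-recursive work:

    def binary_search(items, target, lo=0, hi=None):
        # items must already be sorted. One recursive call (a = 1) on half the
        # data (b = 2); the comparisons are the constant non-recursive part, f(n) = 1.
        if hi is None:
            hi = len(items)
        if lo >= hi:
            return -1                      # not found
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            return binary_search(items, target, mid + 1, hi)
        return binary_search(items, target, lo, mid)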