Asymptotic Notation: We tend to rate algorithms based on run time (as opposed to space complexity). Some computers are faster than others, and some computers perform certain operations faster than others. "flops": a multiplication/addition operation of the form ax + b. Trying to minimize the number of flops and using that as a basis for comparing algorithms is no longer in favor; the analysis is too complicated and too mathematical to be practical. Instead we use O-notation, Θ-notation, and Ω-notation.

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c g(n) for all n >= n0 }

"for all n >= n0" means "after a certain point."
"0 <= f(n) <= c g(n)" means that f(n) is less than or equal to some multiple of g(n).

Example: let g(n) = n. Then O(g(n)) includes n/2, 10n, 2n - 5, 2n + 1000000, ...
When is 2n + 1000000 < 3n? When n > 1000000. So 2n + 1000000 <= 3n "after a certain point" (c = 3, n0 = 1000000), and 2n + 1000000 is O(n). (A small numeric check of this appears at the end of these notes.)

The practical upshot of this is that if you compare algorithms, you only have to look at the highest-order (dominant) terms of the run-time functions and compare those. AND, you also get to drop the constant coefficients of these dominant terms. So, if f(n) = 2n^2 + 3n + 5 and g(n) = n^3 + 12n^2 + 7, then f(n) = O(g(n)); the dominant term of f(n) is n^2, so f(n) = Θ(n^2).

BAD THINGS.
- Eliminating constant coefficients suggests that two algorithms run at about the same speed, when they could be quite different in practice. n^2 and 10000n^2 are both Θ(n^2), but they are not really the same speed. We do this because the constant coefficient is not easy to compute and depends on the underlying chip of the computer itself.
- Focusing our analysis on what happens for very large inputs may distract us from the inputs our program will actually use.

GOOD THINGS.
- Analysis is easier because we are not mired in detailed mathematics. "I have counted the steps and have determined them to be 3n^2 + 6n + 7, so it is O(n^2)." <== DON'T DO THIS.

The three notations:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c g(n) for all n >= n0 }
Ω(g(n)) (big-Omega, the "inverted horseshoe") = { f(n) : there exist positive constants c and n0 such that 0 <= c g(n) <= f(n) for all n >= n0 }
Θ(g(n)) (big-Theta) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 <= c1 g(n) <= f(n) <= c2 g(n) for all n >= n0 }

Roughly speaking (up to a constant multiple):
f(n) = O(g(n)) means that f(n) <= g(n)
f(n) = Ω(g(n)) means that f(n) >= g(n)
f(n) = Θ(g(n)) means that f(n) = g(n)
f(n) = o(g(n)) (little-o) means that f(n) < g(n)
f(n) = ω(g(n)) (little-omega) means that f(n) > g(n)

Informally (you can come up with crazy functions for which this isn't true):
If lim n->inf f(n)/g(n) = 0, then f(n) = o(g(n)).
If lim n->inf f(n)/g(n) = inf, then f(n) = ω(g(n)).
If lim n->inf f(n)/g(n) = k, where 0 < k < inf, then f(n) = Θ(g(n)).
(A sketch of this limit heuristic appears at the end of these notes.)

If f(n) is both O(g(n)) and Ω(g(n)), then it is Θ(g(n)).

When you deal with run-time analysis, there are certain assumptions you can make (that might not be true in mathematics in general!):
1. Run time is never negative.
2. GENERALLY assume that run-time functions are monotonically non-decreasing. (They might remain constant, but they'll never decrease.)

Practical examples: SORTING
Θ(n lg n): Merge, Quick (average case), Tree, Heap (but the constant is high)
Θ(n^2): Bubble, Selection, Shaker, Insertion
???: Shell, which is O(n lg^2 n)
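
To make the O-notation definition concrete, here is a minimal sketch (in Python, which these notes do not specify) that spot-checks the witness constants from the 2n + 1000000 example above, c = 3 and n0 = 1000000. The function names f and g are just for illustration.

```python
# Spot-check the Big-O witness from the notes: f(n) = 2n + 1000000 is O(n)
# with c = 3 and n0 = 1000000, i.e. 0 <= f(n) <= c*g(n) for all n >= n0.
def f(n):
    return 2 * n + 1_000_000

def g(n):
    return n

c, n0 = 3, 1_000_000

# Check a handful of n values at and beyond n0 (not a proof, just a sanity check).
for n in (n0, 2 * n0, 10 * n0, 1000 * n0):
    assert 0 <= f(n) <= c * g(n), f"bound fails at n={n}"
print("0 <= f(n) <= 3*g(n) held for every sampled n >= n0")
```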
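
Here is a rough sketch of the limit heuristic above, using the f(n) = 2n^2 + 3n + 5 and g(n) = n^3 + 12n^2 + 7 example from these notes. Watching the ratio f(n)/g(n) as n grows suggests it tends to 0, so f(n) = o(g(n)) (and hence O(g(n))).

```python
# Numerically eyeball lim n->inf f(n)/g(n) for the example above.
def f(n):
    return 2 * n**2 + 3 * n + 5

def g(n):
    return n**3 + 12 * n**2 + 7

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7}:  f(n)/g(n) = {f(n) / g(n):.6f}")

# The ratio shrinks toward 0, which (informally) says f(n) = o(g(n));
# a ratio blowing up would suggest little-omega, and a nonzero constant, Theta.
```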
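
Finally, a rough timing sketch behind the sorting table above: insertion sort (Θ(n^2)) against merge sort (Θ(n lg n)) on the same random input. The input sizes and the use of Python are my choices, not the notes', and absolute times will vary by machine; the point is only the relative gap as n doubles.

```python
import random
import time

def insertion_sort(a):
    # Theta(n^2) on average and in the worst case.
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    # Theta(n lg n) in all cases.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

# Same random input for both sorts; double n to watch the gap widen.
for n in (2_000, 4_000, 8_000):
    data = [random.randint(0, 1_000_000) for _ in range(n)]
    for sort in (insertion_sort, merge_sort):
        start = time.perf_counter()
        result = sort(data)
        assert result == sorted(data)
        print(f"{sort.__name__:>14}  n={n:>5}  {time.perf_counter() - start:.3f}s")
```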