Analyzing Algorithms: o, omega, O, OMEGA, THETA

People talk about n^2 algorithms, or n lg n algorithms, or n, or lg n.

POE SORT
________

    done = false
    while (!done) {                               [on average, the loop runs THETA(n!) times]
        put elements in array in random order     [n time]
        check whether array is sorted             [n time]
        if array is sorted, done = true
    }

What is the asymptotic average case runtime of this algorithm? THETA(n!) loop iterations, so roughly n! time.
Hint: We're dealing with algorithms so slow that our standards of analysis become too silly to be relevant.

n+1 = ___ (n)

    o     : <
    O     : <=
    omega : >
    OMEGA : >=
    THETA : ==

lim[n->inf] (n+1)/n = 1. Since 0 < 1 < inf, what is the symbol we should use? THETA.
n+1 = THETA(n)

If the limit falls strictly between 0 and inf, then it's THETA. If the limit is 0, then it's o. If the limit is inf, then it's omega.

10,000,000 n^2 = ___ (n^2) : THETA
    lim[n->inf] 10,000,000 n^2 / n^2 = 10,000,000, which is between 0 and inf.

0.000000000000000001 n^2 = ___ (n) : omega
    lim[n->inf] 0.000000000000000001 n^2 / n = lim[n->inf] 0.000000000000000001 n = inf.

(n+1000)^2 = ___ (n^2) : THETA
    (n+1000)^2 / n^2 = ((n+1000)/n)^2 = (1 + 1000/n)^2. As n -> inf, 1000/n -> 0, so the limit is 1: THETA.

(n+googol)^15 = ___ (n^15) : THETA

(n+1)! = ___ (n!) : omega
    (n+1)! / n! = [(n+1) n!] / n! = n+1 -> inf.

More precisely, POE SORT runs in n * n! time: THETA(n!) iterations, each doing THETA(n) work. This is omega(n!). In fact n * n! = (n+1)! - n!, which is THETA((n+1)!), so POE SORT effectively runs in (n+1)! time, significantly slower than n! time. (A runnable sketch of POE SORT and of the limit test appears at the end of these notes.)

Classifying algorithms with THETA notation is grouping algorithms together with a broad brush. But in some situations, this broad brush is still not broad enough. One of these situations is the ridiculously slow algorithms. So some classify algorithms as being n^k, where k is a nonnegative real number (n^0 is constant time, n^1 is linear time, etc.), with a class called EXPONENTIAL for those algorithms, such as POE SORT, which are slower than ALL n^k.

Under this system, n lg n would fall under n, because n lg n is faster than n^k for any k > 1; n lg n is faster than even n^1.00000001. Under this system, adding factors of lg n does not change the classification of the algorithm: n, n lg n, n lg^2 n, and n lg^10 n all fall in the same category as each other, since all of these are faster than n^1.00000001.

Another system says that the above is a little too broad, since it does not distinguish between the lg factors. That system classifies algorithms by n^k lg^j n. In other words, classify algorithms by the power of n AND by the power of the log. (Again with an EXPONENTIAL class for the slow stuff.)
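To make this concrete, here is a minimal Python sketch (not from the original notes; the names poe_sort, is_sorted, and ratio_table are just illustrative). It gives a runnable version of the POE SORT pseudocode, plus a small ratio table that mirrors the limit test: compute f(n)/g(n) for growing n and watch whether it heads to 0 (o), a nonzero constant (THETA), or infinity (omega).

    import random

    def is_sorted(a):
        # check whether the array is sorted  [n time]
        return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

    def poe_sort(a):
        # put the elements in random order, check, repeat
        # [expected THETA(n!) iterations, THETA(n) work per iteration]
        done = False
        while not done:
            random.shuffle(a)       # [n time]
            done = is_sorted(a)     # [n time]
        return a

    def ratio_table(f, g, ns=(10, 100, 1000, 10000, 100000)):
        # limit test by inspection: f(n)/g(n) heading to 0 suggests o,
        # a nonzero constant suggests THETA, infinity suggests omega
        for n in ns:
            print(n, f(n) / g(n))

    print(poe_sort([3, 1, 2]))                                  # keep the input tiny
    ratio_table(lambda n: (n + 1000) ** 2, lambda n: n ** 2)    # settles toward 1: THETA(n^2)
    ratio_table(lambda n: n * (n + 1) / 2, lambda n: n ** 3)    # heads to 0: o(n^3)

Only run poe_sort on very small arrays: with THETA(n!) expected shuffles, even 12 elements means hundreds of millions of iterations on average.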