Ways to fix the size of an array in C++:

    int sz = 100;      // a constant set in the code
    #define sz 100     // a preprocessor macro

Or read the size at run time:

    cout << "Enter the size of the array: ";
    int sz;
    cin >> sz;
    // double temps[sz];   // NOT legal standard C++: sz is not a compile-time constant
    double * temps = new double[sz];   // allocate on the heap instead

Allocating a two-dimensional array in C++ (r rows and c columns):

    double ** twodarray = new double*[r];
    for (int i = 0; i < r; i++)
        twodarray[i] = new double[c];

In Java, it would be:

    double[][] twodarray = new double[r][c];

In C++, you would have to deallocate as well:

    for (int i = 0; i < r; i++)
        delete[] twodarray[i];
    delete[] twodarray;    // note: delete[], since it was allocated with new[]

In C:

    int i;
    double ** twodarray = malloc(r * sizeof(double*));
    for (i = 0; i < r; i++)
        twodarray[i] = malloc(c * sizeof(double));
    ...
    for (i = 0; i < r; i++)
        free(twodarray[i]);
    free(twodarray);

Algorithms: What is an algorithm? The study of algorithms predates the study of computer science. Not "Al Gore-ithm": the word comes from the Latinized name of the ninth-century mathematician al-Khwarizmi. His treatise on "al-jabr" also gave us the word "algebra." (He is the reason Americans hate Arabs. JOKE.)

An algorithm is a detailed, unambiguous description of how to solve a certain problem.

What are famous algorithms? Long division. The classic multiplication algorithm: work right to left, stagger the partial products. The "classic" pencil-and-paper algorithms for square roots and cube roots. Even older ones: doing multiplication with pebbles. The Latin word for a small stone is "calculus."

Now, in modern times, what are the classic algorithms we've learned in computer science classes? QuickSort. BubbleSort. Binary Search. Linear Search.

"Sorting" is not an algorithm. Why not? It is not a well-defined procedure. Sorting may be a "problem," but in and of itself it is not an algorithm, because it does not in and of itself describe a clear-cut way to do it.

Algorithms: BubbleSort, QuickSort, MergeSort, InsertionSort, ShellSort, ShakerSort, TreeSort, HeapSort.

QuickSort and MergeSort are very fast, but why didn't I learn these algorithms when I first studied programming? Because these algorithms are naturally recursive, and when I started programming (???) the language EVERYONE was using was BASIC, which is not a recursive language.
TreeSort: almost as fast, but not recursive. (In fact, TreeSort makes the same number of comparisons as MergeSort, but not in the same order. This number is "almost" the mathematical minimum.)

BubbleSort: its only advantage is that it is easy to learn and code for beginning students. Similar with ShakerSort and InsertionSort.

HeapSort: fast, but not as fast as the others. You can do it in place in a simple array; no need for pointers or recursion. TreeSort also doesn't need recursion or pointers, BUT it needs several copies of the array (extra storage space).

When we categorize these sorts:

    n lg n:    QuickSort, TreeSort, HeapSort, MergeSort
    n^2:       BubbleSort, InsertionSort, ShakerSort, SelectionSort
    n lg^2 n:  ShellSort

Big-Oh notation is a VERY CRUDE way to analyze an algorithm. We do the analysis in a crude way because the universe doesn't allow us to be precise: different pieces of hardware run differently. In fact, some very sophisticated systems have the compiler check what the underlying hardware is, so it can compile the same methods differently, optimized for different equipment.

I was told that if I can develop an algorithm that solves a problem without taking a square root, I should use it, because square roots are notoriously slow. Actually, some machines have chip-level square root.

Way back, I took a class on matrix algorithms, and we analyzed our algorithms in terms of "flops": z = ax + y, a multiplication-addition combination. A later edition of the textbook redefined a flop as a + x OR ax. Multiplication and division are going to take longer than addition and subtraction; should we really count them the same? Answer: yes, because counting them accurately is too difficult and doesn't yield much in the way of useful information.

Nowadays, we measure algorithms in terms of "asymptotic running time" (Big-Oh notation). The running time of Algorithm A is f(n), n being the size of the data input to A, if

    lim[n->inf] (runtime of A on data of size n) / f(n) = k,  where 0 < k < inf.
(Implied assumption: algorithms take longer when you give them more data.)

BubbleSort runs in n^2 time. This means that if the data is of size n, then as n gets larger, BubbleSort's runtime divided by n^2 comes close to some specific constant. (The constant will probably be in the vicinity of 0.5.)

Aside: 69! is just a shade under 10^100. It is the largest factorial that can be displayed on a hand calculator with two digits of exponential notation. As a result, hand calculators were often speed-tested by entering 69! to see how long it took, so some calculator manufacturers just programmed in that value explicitly.

Asymptotic running time is NOT the only consideration in algorithm design, but it is the BASIC beginning of algorithm analysis.

When we compute asymptotic running time, there are two basic ways to do it: worst-case running time and average-case running time. We tend to do worst-case running time because the analysis is easier. BubbleSort, in the worst case, takes n^2 time. Average case is harder, because what does "average" even mean? We assume that all possible data of size n is equally likely (is that even a fair assumption?), and we analyze according to that, and the math is harder.

QuickSort: average time n lg n. Worst time: n^2.