SHELL SORT

We have an array of n items. We choose some subset of the numbers n-1 down to 2, and do a series of Insertion Sorts on those elements which are exactly k spaces apart, where k, consecutively, is each number in our subset. Finally we do an Insertion Sort on the whole thing.

While there is some debate as to what the appropriate subset ought to be, for our purposes, our subset will contain precisely those numbers that have no prime factor other than 2 or 3. So, if we are sorting 100 items, we will do Insertion Sorts on:

96, 81, 72, 64, 54, 48, 36, 32, 27, 24, 18, 16, 12, 9, 8, 6, 4, 3, 2

and then finally do an Insertion Sort on the whole thing.

The reason people don't like this method is that there are an awful lot of numbers with no prime factor larger than 3. There are about lg^2(n) such numbers less than n. But the reason this is cool is that if you do these intermediate sorts, AT NO TIME will an element have to be shifted more than one spot per Insertion Sort. (There is no guarantee of this in general.) This means we have a hard upper limit of n lg^2(n) for the running time of Shell Sort. Absolutely worst case. No weird bad examples that would throw us off.

0 1 2 3 4 5 6 7 8
A S D F G H J K L     (9 items)

Which elements, six apart from each other, are out of order?
A K D F G H J S L

Which elements, four apart from each other, are out of order?
A H D F G K J S L

Which elements, three apart from each other, are out of order?
A G D F H K J S L

Which elements, two apart from each other, are out of order?
A F D G H K J S L

Which elements are out of order?
A D F G H J K L S

Notice that at each level, nothing was out of order by more than one position.

This translates very well to parallel. At any level, you can check all the pairs simultaneously, because no pair ever impacts any other pair. With enough threads, each level can be run in constant time. Since there are lg^2(n) levels, the whole thing runs, in parallel, in lg^2(n) time.
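The gap scheme above can be sketched in Python. The function names here (pratt_gaps, shell_sort) are mine, not from the notes; the "no prime factor other than 2 or 3" sequence is known as Pratt's sequence.

```python
def pratt_gaps(n):
    """All numbers below n with no prime factor other than 2 or 3, descending."""
    gaps = []
    p2 = 1
    while p2 < n:          # powers of 2: 1, 2, 4, 8, ...
        p23 = p2
        while p23 < n:     # times powers of 3: p2, 3*p2, 9*p2, ...
            gaps.append(p23)
            p23 *= 3
        p2 *= 2
    return sorted(gaps, reverse=True)

def shell_sort(items):
    a = list(items)
    for gap in pratt_gaps(len(a)):   # ends with gap 1: the final full Insertion Sort
        # One gapped Insertion Sort pass; with this gap sequence no element
        # ever needs to move more than one slot per pass.
        for i in range(gap, len(a)):
            x = a[i]
            j = i
            while j >= gap and a[j - gap] > x:
                a[j] = a[j - gap]
                j -= gap
            a[j] = x
    return a
```

Running shell_sort on the nine-letter example from the notes reproduces the final line of the trace: shell_sort("ASDFGHJKL") gives A D F G H J K L S.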
Merge Sort also runs in lg^2(n) time in parallel, but the cool thing about Shell Sort is that it can be used as a sorting network.

DEADLOCK

DEADLOCK is when the program cannot terminate because at least one thread is waiting for a lock that ain't never going to be released. This is not the same thing as an infinite loop. An infinite loop involves the CPU: you have a loop that's running forever and consuming resources. while (b == 7) is going to be repeatedly checking whether b is 7, and this uses the CPU. But deadlocks really don't. They just sort of freeze there.

THE DINING PHILOSOPHERS

N philosophers, each with a plate of spaghetti. These philosophers are unconcerned with germs. There are N forks. The philosophers sit around a round table. Between every pair of adjacent philosophers there is a fork, and between every pair of adjacent forks there is a philosopher. The philosophers each need two forks to eat their spaghetti, but they have to share. Each philosopher is programmed to grab the right fork, grab the left fork, eat for a finite period of time, put the forks down, think for a finite period of time, and so forth.

This is a deadlock problem, because if all philosophers grab the right fork, then none of them can grab the left fork, because another philosopher already has it.

Now, the classic solution to the Dining Philosophers problem is to make one (you don't need more than one) philosopher left-handed. One philosopher reaches for the left fork first. All the others reach for the right fork first. And that's enough to prevent deadlock.

In general, this sort of deadlock is caused by threads holding more than one lock. The classic solution: give each lock a distinct priority number, and go by the convention that you never grab a higher-priority lock if you already hold a lower-priority one. If you really need that higher-priority lock, first release the lower lock, grab the higher lock, and then grab the lower lock.