P = NP? Sometimes called the most important unsolved problem in theoretical computer science. Still unsolved, but the work that has gone into attacking it has led to some interesting results, even though the basic question remains open. Most Computer Scientists believe that P != NP. A few believe that P = NP. And there are also a few who suspect that this might be a Godel problem (a statement that is true but for which there is no proof).

If P = NP, my opinion is that the classic NP-complete problems will be solved in polynomial time, but with an exponent so large that no practical advantage will be gained from it. Even if the exponent is as small as 10, it will be very difficult to get any practical use out of such a polynomial-time algorithm, and the exponent might be 100 or 1000 or a googol.

The notion of NP-completeness was developed to provide information about P = NP. Using NP-completeness alone, one might prove P = NP, but you won't prove P != NP. The fact that proving P = NP using NP-completeness ought to be fairly easy, yet has never been done, is "evidence" that perhaps P != NP. The idea is that if ANY NP-complete problem can be solved in polynomial time, then P = NP, and with hundreds of different NP-complete problems out there, encompassing pretty much all of computer science, so far none has been solved in polynomial time.

NP-completeness applies only to decision problems: the answer is YES or NO. An NP-complete problem satisfies the following criteria.

1. It's in NP.
2. Every problem in NP is polynomial-time reducible to the given problem.

A can be reduced to B IF every valid input to A can be translated to a valid input for B in polynomial time such that A's answer on A's input matches B's answer on the translated input. So, if I were to find a polynomial-time solution for ANY NP-Complete problem, I could then take ANY problem in NP, reduce it to my newly-solved problem (which takes polynomial time), and then solve it (again in polynomial time), giving a total polynomial-time solution for whatever problem in NP I'm looking at. Thus P = NP, in the event that I solve even one of the NP-complete problems in polynomial time.

How do you prove that ALL problems in NP reduce to the problem you're looking at? Well, you only have to do that from scratch once. Once you have your first NP-Complete problem, all you have to do is reduce that one to a new problem to prove that the new problem is NP-Complete. Everything reduces to A. I prove that A reduces to B. Therefore, I can reduce everything to B, using A as an intermediate step!!

If I know that A is NP-Complete, I can prove that B is as well, by reducing A to B. Reducing B to A gives me nothing: I already know I can reduce B to A, since I already know that A is NP-Complete. Sadly, reducing B to A is frequently much easier than reducing A to B. It's like being asked to prove that A is equivalent to B: well, first prove that whenever A is true, B is true; then prove that whenever B is false, A is false. (Ha ha, this doesn't work; both steps prove the same direction.)

The "first" NP-Complete problem, Satisfiability, was proven NP-Complete by Cook, and the result is called Cook's Theorem. Satisfiability: given a boolean expression built from ANDs, ORs, and NOTs, is there a truth assignment for the variables that makes the whole expression true? So something like (P & Q & R) or (P & ~Q & S) or (Q & ~S & ~T), or whatever.
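Just to make Satisfiability concrete, here is a little brute-force checker for the example formula above. This is my own quick sketch in Python, not a real SAT solver: it simply tries every one of the 2^n truth assignments, which is exactly the exponential blowup a polynomial-time algorithm would have to avoid, even though checking any single assignment is cheap.

    from itertools import product

    # The example formula from above: (P & Q & R) or (P & ~Q & S) or (Q & ~S & ~T)
    def formula(P, Q, R, S, T):
        return (P and Q and R) or (P and not Q and S) or (Q and not S and not T)

    VARIABLES = ("P", "Q", "R", "S", "T")

    def brute_force_sat():
        # Try all 2^n truth assignments; this loop is the exponential part.
        for values in product((False, True), repeat=len(VARIABLES)):
            if formula(*values):                      # checking one assignment is cheap
                return dict(zip(VARIABLES, values))   # found a satisfying assignment
        return None                                   # unsatisfiable

    print(brute_force_sat())   # prints one satisfying assignment for the example

For 5 variables that is only 32 assignments, but the count doubles with every variable you add.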
And, in a nutshell, what Cook did to prove that EVERY problem in NP reduces to Satisfiability was to take the encoding of the Turing Machine for that problem (and we know there is an encoding; we used it to develop the universal Turing Machine) and translate that Turing Machine into a Satisfiability problem based on the ways you can get to a YES state, the states you can visit to get there. Once you have SAT, you can use it to prove other problems NP-Complete, and, from there, still more problems, and so on.

ORACLES

This is the most famous way of constructing machines more powerful than a Turing Machine. Of course, most oracle machines (OTMs) are not realizable in reality, or at least no one has figured out how to build one. You can think of an Oracle as a black box with an input tape and an output tape. When information is coded onto the input tape and the Oracle is run, the Oracle writes the output onto the output tape. Other than the reading and the writing of the tapes, the Oracle appears to take no time to solve the problem, and the solution is always correct.

One of the most famous oracles is one whose input tape holds a coded Turing Machine and an input for that Turing Machine, and whose output is YES or NO, depending on whether the Machine halts on that input. Since this cannot be done with a Turing Machine, a Turing Machine that has this oracle can now solve a whole class of problems that a regular TM cannot. So, some oracles allow you to solve a certain class of problems that otherwise could not be solved. However, the oracles we will be talking about don't actually add computing power to the TM; they just allow problems to "possibly" be solved faster. And "possibly" is in quotes because, if P = NP, these problems could be solved "quickly" anyway. So, this gives us the Polynomial Hierarchy, sometimes called the Polynomial-Time Hierarchy.

It should be noted that EVERY class in the polynomial hierarchy is a subclass of PSPACE. PSPACE is the class of problems that can be solved using a polynomial amount of space on a TM (the machine only writes to a polynomial number of squares on the tape, based on the input length). If a problem is in P, it must also be in PSPACE: if the program runs in polynomial time, it can only access a polynomial number of squares on the tape. However, a program can use only a polynomial amount of space without running in polynomial time (a slow program that reads and writes the same squares over and over), so the containment doesn't obviously go the other way. P = PSPACE? is another unsolved problem in Computer Science, but not considered nearly as important as P = NP?.
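Here is a tiny toy sketch of that "slow program that reads and writes the same squares over and over" idea, in Python (my own illustration, nothing formal): a binary counter that touches only n cells of storage but runs for about 2^n steps. Space stays small; time does not.

    # Toy example: a binary counter on n "tape squares".
    # It uses only n cells of storage, but runs for about 2^n steps,
    # reading and writing the same cells over and over.
    def binary_counter(n):
        tape = [0] * n              # n squares of tape, nothing more
        steps = 0
        while True:
            steps += 1
            i = n - 1               # add 1 to the binary number on the tape
            while i >= 0 and tape[i] == 1:
                tape[i] = 0
                i -= 1
            if i < 0:               # overflow: all 2^n values have been seen
                return steps
            tape[i] = 1

    print(binary_counter(4))        # 16 steps on just 4 squares; 2^n in general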