Before anyone gets too excited, I would caution that there is a long history of failed attempts at this problem, and an equally long history of premature announcements. On the surface, here is what is encouraging about this one:
- The author is a theory professional and not an amateur or a crank
- The author frames this as a possible proof where he's seeking feedback on the method, rather than declaring a solution up front
- The author builds on existing theory in a formerly mainstream approach that has been shown to have problems (circuit lower bounds)
- Looks like a serious and well-considered proof effort
Here is why HN should hold off on any excitement:
- This style of proof was found to be unworkable in the past. In particular, approaches to proving circuit lower bounds in this manner were shown by Razborov and Rudich in the 1990s to be "natural proofs", a category of proof strategy that cannot resolve P vs. NP. It's not explained why this strategy does not succumb to the same problem -- though the author surely has some reasoning around it. (See point 6 of Scott Aaronson's "8 signs a proposed P/NP proof is wrong": http://www.scottaaronson.com/blog/?p=458. You might also see this comment from /r/compsci: http://www.reddit.com/r/compsci/comments/14mqqt/anyone_have_...)
- There is a very good community framework for evaluating proposed solutions when the author's ready. Possibly the author has already submitted the work to a journal, but it's not clear that he's confident enough yet to invite more active discussion of his effort from the theory community. I can't find any discussion of it online. It's likely he's pursuing this on his own, and conceivably flaws have been found since the summer.
- The author does not declare this as a completed solution in the way that Deolalikar did two years ago. In that case, the announcement led to an analysis by the theory community online that uncovered core errors. We can wait here 'til the author gives a bit more information.
- This proof joins dozens of other announced solutions to P/NP in both directions (see, for example, http://www.win.tue.nl/~gwoegi/P-versus-NP.htm) which emerge quite regularly, and close attention paid to each will leave no time for original research :)
hi AK. maybe consider blogging about this? this is a well-written but superficial analysis. scott aaronson insists on his blog that a proof should explain why it succeeds against "known barriers", e.g. razborov/rudich Natural Proofs. but this is really an optional requirement of a proof. moreover, the actual barrier of "natural proofs" is very subtle: it basically insists that a proof, if it exists, should have a certain "intrinsic complexity" in its constructions, and that many such constructions in the literature for class separations do not have this "intrinsic complexity". but some newer researchers are suggesting that this only requires some new "intrinsic complexity" function that hasn't been seen before; such functions do exist, they just don't seem to be used in proofs that we know of. eminent authorities in the field such as Lipton have argued that the nearly-20-year-old Natural Proofs result may be overdramatized as a real barrier.
"Nevertheless, it is my personal opinion that the optimistic approach is the right one; that is, the Razborov–Rudich result should be regarded as a hint, and not a barrier, to separating complexity classes. The only real barrier is our lack of imagination."
I may have come across as insistent on clearly stating ways around the natural proofs barrier because this is Hacker News, where folks are liable to jump to conclusions, and not, say, http://cstheory.stackexchange.com. Given the large number of attempts against P vs. NP, frequently by folks who are not active in complexity theory, it's often better to try to understand these ideas on theory sites.
You make a reasonable point, vis-a-vis Chow; rather than being regarded as an absolute barrier for lines of reasoning involving circuit complexity, the requirement of being a sufficiently complex proof may provide a jumping-off point for uncovering a working method.
"it's not clear that he's confident enough yet to invite more active discussion on his effort from the theory community". that is highly debatable; arguably that was his intent in creating the public blog and putting all the papers fully accessible on it -- to engage the online community, which somewhat regrettably hasn't happened much in the ~5 months it's been online -- that site seems to have been "undiscovered" until recently. furthermore, the author insists in his latest blog entry on aug 26th and in recent comments (within the last few days) that he sees no flaws in the proof & welcomes further feedback.
here chow elaborates in depth on the "natural" condition in the Razborov–Rudich proof & finds some evidence for "nearby" functions that could potentially be used to defy the natural proofs barrier:
"By definition, a natural combinatorial property satisfies two conditions, constructivity and largeness. Our main result is that if the largeness condition is weakened slightly, then not only does the Razborov–Rudich proof break down, but such “almost-natural” (and useful) properties provably exist."
Context: Some problems are easy, and known to be easy. Some problems appear to be hard, and yet easy to verify alleged solutions. That seems odd - let's have a closer look.
Example:
Consider being given a network and asked if there's a path that visits every node exactly once and returns to its starting point. (This is the task of asking if a Hamilton Cycle exists). If the network is big enough, deciding if this can be done can be really hard. However, checking if someone has successfully done it is easy.
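To make the "checking is easy" half concrete, here is a minimal Python sketch of a certificate check (the helper name and the adjacency-dict representation are mine, chosen for illustration):

```python
def is_hamiltonian_cycle(adj, cycle):
    """Check a claimed Hamilton Cycle: it must visit every node exactly
    once, and consecutive nodes (wrapping around) must share an edge.

    adj: dict mapping each node to the set of its neighbours.
    cycle: proposed ordering of the nodes.
    """
    if sorted(cycle) != sorted(adj):  # every node appears exactly once
        return False
    n = len(cycle)
    # every consecutive pair (including last -> first) is an edge
    return all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n))
```

The check is one pass over the cycle; finding such a cycle in a big network is the part nobody knows how to do efficiently.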
Motivation for a definition of NP:
A problem for which it is efficient (for which read "polynomial") to check a purported solution is said to be in "NP". Knowing whether a Hamilton cycle exists in a network is one of many, many such problems.
Definition: A problem is in "NP" if there is a polynomial time algorithm to test an alleged solution.
Definition: A problem is in "P" if there is a polynomial time algorithm to solve it.
Lemma: All problems in "P" are also in "NP". Proof left to the interested reader.
(Note: technically we are dealing with Decision Problems - problems where the answer is always Yes or No. For example, I didn't ask you to find a Hamilton Cycle, I only asked if there was one. Sometimes the Decision version can be ramped up into actually finding the solution.)
Note that easy problems (for example, deciding if there's a cycle that visits every edge exactly once - an Euler Cycle) are also in "NP", but they're not hard NP problems - they're actually in "P". This point confuses a lot of people. NP does not automatically mean hard - that comes in a minute.
Another example:
Similarly, factoring a large integer can be really hard, but if someone claims to have a factor, checking it is as easy as doing the division.
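The verification asymmetry is starkest here: checking a claimed factor is one division (a hypothetical helper, just to illustrate):

```python
def is_nontrivial_factor(n, f):
    # Verifying a claimed factor costs a single modulus operation;
    # finding a factor of a large n is the (believed) hard part.
    return 1 < f < n and n % f == 0
```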
Comparing problems:
Cleverly, it can be shown that if you can solve the Hamilton Circuit problem efficiently, that solution can be converted into a way of solving the factoring problem efficiently. In some very real sense, the Hamilton Circuit problem is at least as hard as the factoring problem.
Finding:
There are many, many NP problems that have been shown all to be equivalent to each other: all are easy to check, and yet no polynomial-time algorithm for solving any of them is known. Such problems are called NP-Complete (NPC), because an efficient algorithm for any one of them would completely solve all the NP problems.
So:
* A problem is said to be in "P" if there is an algorithm that takes instances and produces answers in polynomial time.
* A problem is said to be in "NP" if there is an algorithm that takes only polynomial time to check a purported solution for correctness.
* It is unknown if there are any problems that are in "NP", but not in "P".
That is the P vs NP question, and the Clay Mathematics Institute has offered $10^6 for its resolution. You could do this by showing that there is a polynomial time algorithm for any of the NPC problems, or by showing that no such algorithm exists.
Summary:
* P problems are "easy"
* NPC problems are thought to be hard
* There are lots of NPC problems, and solving any one of them would solve all NP problems.
* Integer factoring (IF) is thought to be outside P, but not as hard as NPC.
* There are problems at least as hard as the NPC problems; these are called NP-Hard. (The NPC problems are exactly the NP-Hard problems that are themselves in NP.)
So we think that
P < IF < NPC={ Clique, 3SAT, G3C, Knapsack, Ham, ...} <= NP-Hard
But we don't know if these are, in fact, all different. The submitted link is to a paper that says yes, they are all different.
OK, a little extra, and speaking very informally ...
I've skimmed the first couple of pages, and in essence he seems to be setting up to show that a circuit that's sufficiently complex to solve the clique problem will have to have an exponential number of components. That would imply that any algorithm to solve "Clique" would be exponential, hence super-polynomial, hence not in "P".
Problem is, others have proven that there are technical barriers to this approach, and I see no hint of the author explicitly saying why his approach avoids these problems.
For more about this concern, see points 1, 4, and 6 in this very accessible post:
The PowerPoint talk "Has There Been Progress on the P vs. NP Question?" linked to at the end of that post (before the comments) is a very good, accessible overview and an excellent starting point for what you need to follow up on.
Wikipedia's section on why the proof of P=NP or P!=NP is likely to be hard is also useful reading:
After reading this response and then reading your profile, all I would like to say is thank you for contributing to HN. I certainly appreciate it, your karma count leads me to believe many others appreciate it, and I hope you don't let the vocal minority successfully dissuade you from it.
Why do they call these decision problems, if it's really about finding a solution that is much more than a boolean (like a path, ...)?
The whole idea of "verify" versus "find" makes no sense for problems with a yes/no answer, because if you can verify a boolean answer in polynomial time, you can always find the solution in polynomial time too, after all there are only two possible solutions so you need to verify only one to find the answer.
If you need extra data to prove this "yes" in polynomial time, then the answer isn't a boolean anymore, but a boolean plus the extra data, so how can you still call that a yes/no problem?
...if you can verify a boolean answer in polynomial time, you can always find the solution in polynomial time too, after all there are only two possible solutions so you need to verify only one to find the answer.
It doesn't work that way. The common characteristic of NP-complete problems is that there are an exponential number of possible solutions, but verifying that a purported solution is a solution takes only polynomial time.
For example consider SAT (see http://en.wikipedia.org/wiki/Boolean_satisfiability_problem for more). This is the problem of deciding whether there is a set of possible variable values that make a given Boolean equation evaluate to True. For an equation of length n there are in general O(n) variables, and therefore 2^O(n) possible sets of true/false values that the variables can have. So a brute force search takes exponential time.
But if you give me an equation, AND a set of values for the variables that you claim will result in the final expression evaluating to True, I can verify it in time O(n).
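That O(n) check can be sketched in a few lines of Python, using DIMACS-style literals (a positive integer k means "variable k is True", a negative one means "variable k is False"; the helper name is mine):

```python
def verify_sat(clauses, assignment):
    """Check a claimed satisfying assignment for a CNF formula.

    clauses: list of clauses, each a list of integer literals.
    assignment: dict mapping variable number -> bool.
    One pass over the formula: time linear in its length.
    """
    # the formula is satisfied iff every clause has at least one true literal
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)
```

For example, (x1 or x2) and (not x1 or x3) is `[[1, 2], [-1, 3]]`; checking any proposed assignment is immediate, while searching all 2^n assignments is not.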
"If you need extra data to prove this "yes" in polynomial time, then the answer isn't a boolean anymore, but a boolean plus the extra data, so how can you still call that a yes/no problem?"
You are combining two distinct concepts: the answer and the 'evidence'. The answer to an NP-complete problem is always Yes or No. But, we say that we can "verify" a decision problem in polynomial time if, given an answer AND evidence, we can check that the answer is true.
For concreteness: the k-clique problem (determining if there's a clique of at least size k in a graph) is hard. But, if somebody told us the answer was Yes, and gave us appropriate evidence (say, a set of k nodes in the graph which form a clique), verifying would be easy.
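The evidence check for k-clique is just as short; a sketch with hypothetical helper names (not from any paper under discussion):

```python
from itertools import combinations

def verify_clique(adj, nodes, k):
    """Evidence check: do the named nodes form a clique of size >= k?

    adj: dict mapping each node to the set of its neighbours.
    Costs O(k^2) edge lookups -- easy, even though finding the
    clique in the first place is believed hard.
    """
    nodes = set(nodes)
    return (len(nodes) >= k and
            all(v in adj[u] for u, v in combinations(nodes, 2)))
```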
Just to give a little more context, because I think this gripe is very common:
There exists a proof that NAND gates are all you ever need to compute an arbitrary boolean function from M bits to N bits. Actually, you also need a "copy" gate -- the ability to generate two bits with value X from one bit with value X -- and "wires" to communicate values from gate to gate, but that's about it.
The proof starts out like this: "a function from M bits to N bits is actually just N functions from M bits to 1 bit, as long as I can copy the input N times and feed it to the N functions which compute each output bit. So if I solve the problem for M-to-1, I automatically solve M-to-N as long as I have COPY and SWAP gates."
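The NAND-universality claim is easy to check mechanically for the basic connectives; a small sketch (the function names are mine):

```python
def nand(a, b):
    # the single primitive gate
    return not (a and b)

# the classic constructions: every usual connective from NAND alone
def not_(a):
    return nand(a, a)            # NAND(x, x) = NOT x

def and_(a, b):
    return not_(nand(a, b))      # negate the NAND to recover AND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a OR b = NAND(!a, !b)
```

An exhaustive truth-table check over both input bits confirms each construction agrees with the built-in operator.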
It's very important to see that the exact same approach is being used in the P-vs-NP question. P and NP are designed to be about functions which take M bits to 1 bit, hence they are "decision problems" with a boolean output but a variable input.
The question "P = NP?" can also be phrased more common-sensically, if not totally precisely, as "is brute force ever really necessary?". What is brute force? It is if you have to try a substantial fraction of the 2^M inputs to find a certain 1-bit output. What does this look like here? Well, that gets more complicated because we use scaling of a problem and a system of solution to discuss what's going on.
So, speaking loosely, a problem is in NP if (1) checking a purported solution can be phrased as some sort of M-bit-input to 1-bit-output circuit, and (2) there exists some constant K such that if I double the input size from M to 2M, the checking circuit needs no more than K times the number of gates. This gives the checking circuit a "polynomial" size.
The hard part is then the "inverse" problem: "find me an input which makes this function 1." A problem is in P if we can solve it outright -- not just check it -- in a polynomial number of steps. In fact, the inverse problem for circuits is exactly what's called a "satisfiability problem": find an input which satisfies this polynomial-size, M-variable boolean formula.
A problem is NP-complete if the recursion holds: if you can implement COPY and NAND and wires in that system in a polynomial number of gates, so that any problem in NP can be written in terms of your problem. This is part of the value of having these "constant K" constraints rather than specifying a given K; we get to define these NP-complete cases as the "hardest cases in NP," because if you can reverse them, then you can reverse any other problem in NP.
The decision problem is formulated something like "Is there a Hamiltonian path of length at most n?" for different n. You can get the actual answer by polling the algorithm that answers the decision problem using a binary search, plugging in different values for n (since there is a maximum possible path length through a graph with a certain number of nodes).
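That decision-to-optimization trick can be sketched concretely. Below, the yes/no oracle is a brute-force stand-in (the names and the use of edge weights are mine, for concreteness): given only "is there a Hamiltonian path of total weight at most b?", binary search recovers the minimum weight.

```python
from itertools import permutations

def ham_path_within(weights, budget):
    # Decision oracle (brute-force stand-in): is there a Hamiltonian
    # path whose total edge weight is <= budget?
    n = len(weights)
    return any(sum(weights[p[i]][p[i + 1]] for i in range(n - 1)) <= budget
               for p in permutations(range(n)))

def min_ham_path_weight(weights, hi):
    # Binary search on the budget, asking only yes/no questions.
    # Assumes some Hamiltonian path has weight <= hi.
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        if ham_path_within(weights, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Only O(log hi) calls to the decision oracle are needed, so an efficient decision procedure would give an efficient optimizer.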
Just to add: NP actually stands for "non-deterministic polynomial," meaning that if we had non-deterministic computers (like the NFAs you may remember from Automata Theory that can explore several different possibilities simultaneously) those types of computers could solve problems in NP in polynomial time.
hi all. meant to post a comment but didn't understand this hackernews interface so far, am brand new to this site. Fukuyama states on his web page that he's worked on P vs NP for over 10 yrs, both inside and outside of his professional jobs, which include research and teaching. the web page is a [claimed/attempted] proof that P != NP, posted on Jul 1. unfortunately it's gotten little to no online attention since then. he doesn't seem to have announced it anywhere in cyberspace, only created the blog.
Although I find it interesting that he works for Toyota:
> My name is Jun Fukuyama. I’m currently a researcher at Toyota InfoTechnology Center. I’m visiting WINLAB and Civil Engineering department at Rutgers University, working on algorithms and mathematical analysis related to vehicular communications and mobility modeling.
Unless he's been doing this research in his spare time, it seems like an odd fit for Toyota to dedicate research to. Obviously large companies employ theory-oriented researchers, but when I think Toyota I don't really think about very abstract computer science. I'm curious if there are any particular applications for the company one way or the other.
I go to UChicago; that place is sort of like an eerie mystery box that nobody ever goes to or talks about. I didn't even find out about its existence until my 3rd year here. My one friend who's been in there described the experience as really bizarre; it left him half convinced that it's actually a front for some sort of Israeli money laundering operation.
They have some very prominent machine learning folks there and I went to the Machine Learning Summer School they hosted a few years back. Seemed legitimate enough to me!
I wish I had the mathematical background to comprehend this. It's very involved. My understanding is that he has shown that CLIQUE (a known NP-complete problem) takes exponential time and thus is not solvable in polytime; this would mean there is a problem outside of P but inside of NP, and thus P != NP.
If a problem is NP-complete then there is an algorithm to solve it in nondeterministic polynomial time. Any nondeterministic algorithm can be transformed into an exponential-time deterministic algorithm (just run through each of the nondeterministic choices one by one), so showing CLIQUE has an exponential algorithm doesn't help at all. We want a lower bound.
No, he is claiming that l-CLIQUE requires exponential circuit size (this is the reason that most of the proof talks about circuit complexity and not time bounded Turing Machines). This is not necessarily the same thing as exponential time.
Can we get a layman's explanation of what this is about? I understand what P vs NP is, but don't really understand what this person is claiming. Is this some kind of claim about the overall concept of P vs NP? Or proving a specific case?
If I understand it right, he showed that Clique cannot be solved in polynomial time; since any NP-complete problem is at least as hard as Clique, none of those problems can be solved in polynomial time. So he used Clique to draw the general conclusion that P != NP.
This is a specific case that (maybe) proves the general claim that P != NP.
Proving an inequality is "easy", because it only requires a single counterexample: if there is one problem in NP that is not in P (i.e. an NP problem not solvable in polynomial time), then NP can't possibly equal P.
Fukuyama is proposing that CLIQUE is such a counter-example.
(Proving the inequality doesn't actually require the special property of NP-completeness which others are talking about; that is only usable in a proof that P = NP.)
> Proving an inequality is "easy", because only requires a single counterexample:
Proving an equality isn't exactly "harder" (no NP-hard pun intended) - all you have to do is show that a single NP-complete problem can be reduced to a problem within P in polynomial time.
I just finished a section on NPC problems in my algorithms class (working on a masters). For years, I've been confused about the true meaning of P, NP, and NPC. Feels weird saying it, but I am so glad I've finally got a good grasp on at least the definitions. One of the most gratifying moments in the class was doing homework problems that required proving various problems were in NPC.
Note: For the moment, I will assume that the NP != NC and above results are correct enough to consider the validity of the latter portion of the paper.
As far as I can tell (from my initial reading), the author (and many many commenters here) seem(s) to be confusing polynomial time (P) with polynomial circuit size (P/poly). These are completely different complexity classes in style and probably in power, too. Last time I checked, there was not a good characterization of P in terms of circuit complexity (the attempt at doing so is P/poly). So, logically, comparing P and NP is not even possible without additional knowledge about P. The exception to this being showing that X != NP where P <= X <= NP.
For reference: P <= BPP (and whether BPP <= NP actually holds is open)
We know that BPP (a randomized version of P) is (non-strictly) contained in P/poly. We also know that proving P = BPP (which most conjecture to be true) requires superpolynomial lower bounds on Boolean/arithmetic circuit size. As far as I know, proving an exponential lower bound for an NP-complete problem doesn't immediately imply that P != NP, since membership in P doesn't by itself bound circuit size (the circuit size only relates to the size of the advice function). P might not contain problems that require exponential circuit size, but this fact is not stated, proven, nor referenced by the author here.
Simply put: The author assumes that P != NP follows immediately from Theorem 6.1. This is not as obvious as they might think it is; many additional details are needed.
Admittedly, I have not rigorously studied or concluded anything from the flattening process the author proposes. I honestly don't believe I have to. Skimming over it, I don't see any way that P is characterized in terms of polynomial circuit sizes (which probably is an assumption as powerful as P!=NP) here. I also do not see any mention of P/poly, which I believe the author is attempting to use.
For a paper approaching the P vs NP issue using circuit complexity, I would expect this much. Because of the lack of even a mention of BPP, P/poly, and others, I would be highly surprised if many of these results in the paper hold at all. Though, hopefully, I might be wrong.
EDIT: Accidentally used circuit depth when I was really thinking of size.
"as far as I know, proving an exponential lower bound for an NP-complete problem doesn't immediately imply that P != NP". that statement is FALSE. P != NP is exactly a consequence of proving exponential lower time bounds. actually P != NP would be a consequence of even proving somewhat weaker superpolynomial time bounds on any NP-complete problem...
P/Poly is NOT polynomial-depth circuits. it is you that is mistaken about this complexity class characterization. it is polynomial SIZE circuits. other statements of your post are incorrect. P != NP does in fact follow as an immediate, direct consequence from P/Poly (i.e. poly-size nonuniform circuits) != NP
The depth vs. size characterization was a legitimate mistake of mine (I got distracted by the author's use of depth). I am aware of the distinction; in writing my post, I did actually have size in mind. I'll edit to reflect this.
As for (P/poly != NP) => (P != NP), I have never heard of this. I have, however, seen that if NP is not a subset of P/poly then P != NP (which was not claimed here). Perhaps I am missing a stronger result? If so, please link me to a source for such a result.
that's what I meant. I was writing P/poly != NP to mean "NP is not a subset of P/poly". or do we have to worry about the case where P/poly is a superset of NP? I have never even considered that, have no idea what it would imply, think it might be impossible....?
We do have to worry about it. If it is the case that NP <= P/poly, then PH collapses to its second level, Sigma_2^P (the Karp–Lipton theorem). Unless there have been recent results that show otherwise, P/poly might even contain NEXPTIME.
Regardless, the connection to P in this paper is not immediately obvious to me. It certainly is not explicitly stated in a form that I even partially recognize.
Because of these reasons, I am deeply skeptical of the leap directly to P != NP. A lot of incorrect papers do something similar: they show a lot of very complicated constructions and then, BAM, Corollary: P != NP. That hand-waves the most important step.
there are indeed dozens of incorrect papers which make neophyte errors. however, this one was written by a TCS PhD [from what I can tell] with a respectable number of published papers, some in TCS. I have studied this particular area of attack for many years, i.e. monotone circuit theory, & think it is one of the most plausible lines of attack on P vs NP, hence my playing devil's advocate for this proof so far. even if there are holes, which frankly is the most probable case, it may either prove something new about monotone circuit theory, and/or raise awareness/interest in this particular direction of attack.
would like to see the opinion of experts who know monotone circuit theory, which is a deep area with very many substantial results, & I think there are few who are really qualified to judge these results. even PhDs in complexity theory may not be very familiar with monotone circuit proofs.... they are some of the most complex, arcane/abstruse, but even award-winning proofs in TCS.... (hence some of their plausibility as an attack).... it seems possible to me the author has avoided all the basic traps and that only other experts in the area can find the holes after careful study....
and note in the paper that there is a big difference between assuming statements that cannot be proven, and outright false statements. the author may actually have some pieces of the proof that cannot be proven as he thinks, but are not known to be false, and these can be turned into new substantial conjectures! that is the process of mathematical development....
Not sure if I'm labouring under a basic misconception here (or if you're trolling). First, you note that BPP <= P/poly. Secondly, you note that it is hard to prove P = BPP, but this is not necessary; all we need is the trivial P <= BPP. Then P <= P/poly < NP (if the author is correct).
2. I never said that P = BPP was hard to prove. I only brought up BPP because its derandomization, despite being limited to subexponential nondeterministic time, would still imply a lower bound of super polynomial circuit size (which could be the exponential lower bound presented by the author). The author is using this exponential lower bound (circuits) to immediately conclude that P != NP:
"The proof of Theorem 6.1 is now complete. We have:
Corollary 6.5  P ≠ NP"
have not dug into the paper, but basically if it can be shown that NP cannot be computed with poly-size circuits, then P != NP; i.e. P != NP is a weaker consequence of the stronger statement that NP is not contained in P/Poly. that is one of the basic conjectures of circuit theory, a stronger statement than P != NP. the author may be talking about poly-size circuits without mentioning the P/Poly class by name -- that would show some unfamiliarity with standard complexity theory, but is not a huge crime, and it is conceivably not necessary to actually refer to P/Poly by name in a circuit proof concerning that class, although of course it would be better... P/Poly was originally defined w.r.t. advice strings, and arguably the equivalent characterization of the same class as "poly-size nonuniform circuits" is actually much simpler, more intuitive & natural....
I'm interested in knowing how many of you on here are going to read this in one sitting. Not that it would be sufficient to analyze it for bugs, but because you're that interested.
If I had the mathematical background necessary to understand it, I certainly would.
... and I really wish I did have the mathematical background. Anyone have any recommendations on how to go from a standard CS undergrad math background to being able to understand proofs like these?
Sipser is great. Highly recommended. Working that book really accelerated my understanding. You can go from practically nothing to something with it, though it helps to have some discrete math.
As a compiler guy, I found that reading a lot of type theory papers helped me really get into the CS theoretical mindset, which made reading mathematics papers a lot easier. Writing a lot of Haskell code didn’t hurt. Oleg Kiselyov has some fairly approachable papers, as do Simon Peyton-Jones and Daan Leijen. Try searching for things on Lambda the Ultimate[1] if you’re interested.
Read it and google as you go. Math is one of the best-covered topics online. For a potential millennium prize proof, it seems very approachable. That said, that's not saying much.
I find math in some cases to be really hard to Google. Particularly with expressions that I don't know the names of; I never know what to use as search terms.
I'm certainly missing some background, but if I had the time I'd work my way through it just for the hell of it. The problem he attempts to solve here is extremely important, so I welcome any attempts made at it.
There is a great udacity course with all the background to the P/NP problem: http://www.udacity.com/overview/Course/cs313/
I found it very informative. It also has a section on "should you try solving the P vs. NP problem", and the answer is "not unless you have a lot of time and money and have thought of something no one tried before, or are a math / CS professor".
now the funny thing is that there really is a typo on that page, so all of these hacker news people downvote just because I pointed out some small but true part of reality.
Well, sure. It's more that you posted a comment implying not just that there's a typo, but that this typo makes it so unbearable to read that you were simply forced to stop. But anyway...
this will be oil on the fire, and you guys have already downvoted me into oblivion, but the truth is the same: the paper has several typos, which makes it unquestionable that it never saw a spell-checker.
Spell checking math, and in particular LaTeX math, is a complete nightmare. I, for one, find that there are just as many substantive typographical errors after spell checking my papers as before, and all I've done is wasted at a minimum an hour trawling through false positives.
Besides, small typos in a paper that's substantially readable and well crafted, written by someone in a language other than their first, are scarcely a reason to stop reading.