P (complexity)

From Wikipedia, the free encyclopedia

In computational complexity theory, P, also known as PTIME or DTIME(n^{O(1)}), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.

Definition

A language L is in P if and only if there exists a deterministic Turing machine M, such that

  • M runs for polynomial time on all inputs
  • For all x in L, M outputs 1
  • For all x not in L, M outputs 0
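
As a minimal illustration (the language and the code below are examples chosen here, not part of the formal definition), consider the set of binary palindromes: the following sketch decides membership in linear time and outputs 1 or 0 as in the definition above, so the language is in P.

    # A minimal sketch of a polynomial-time decider (illustrative example):
    # the language of binary palindromes can be decided in linear time.
    def decide_palindrome(x: str) -> int:
        """Return 1 if x is in the language (x reads the same backwards), else 0."""
        return 1 if x == x[::-1] else 0

    print(decide_palindrome("0110"))  # -> 1
    print(decide_palindrome("011"))   # -> 0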

P can also be viewed as a uniform family of Boolean circuits. A language L is in P if and only if there exists a polynomial-time uniform family of Boolean circuits {C_n : n ∈ ℕ}, such that

  • For all n ∈ ℕ, C_n takes n bits as input and outputs 1 bit
  • For all x in L, C_{|x|}(x) = 1
  • For all x not in L, C_{|x|}(x) = 0

The circuit definition can be weakened to use only a logspace uniform family without changing the complexity class.
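
To make the circuit characterization concrete, the sketch below (an illustrative construction; all names are chosen here) generates, for each n, a circuit C_n built from XOR gates that computes the parity of n input bits. The generator itself runs in time polynomial in n, so the family is polynomial-time uniform and witnesses that the language PARITY is in P under this definition.

    # Sketch of a polynomial-time uniform circuit family for PARITY
    # (illustrative; the gate encoding is an arbitrary choice).
    def generate_parity_circuit(n):
        """Return C_n as (gates, output_wire); wires 0..n-1 are the inputs.

        Each gate is a tuple (output_wire, "XOR", input_wire_a, input_wire_b).
        """
        gates = []
        acc = 0                  # wire holding the parity computed so far
        next_wire = n
        for i in range(1, n):
            gates.append((next_wire, "XOR", acc, i))
            acc = next_wire
            next_wire += 1
        return gates, acc

    def evaluate(gates, output_wire, x):
        """Evaluate the circuit on the bit string x."""
        values = {i: int(b) for i, b in enumerate(x)}
        for out, _, a, b in gates:
            values[out] = values[a] ^ values[b]
        return values[output_wire]

    gates, out = generate_parity_circuit(4)
    print(evaluate(gates, out, "1101"))  # -> 1 (three 1s, so odd parity)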

Notable problems in P

P is known to contain many natural problems, including the decision versions of linear programming, and finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P.[1] The related class of function problems is FP.
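
For instance, the fact that maximum matching is in P is witnessed by classical polynomial-time algorithms; the sketch below (an illustration of Kuhn's augmenting-path method for the bipartite case, with input conventions chosen here) finds a maximum matching in O(V·E) time.

    # Kuhn's augmenting-path algorithm for maximum bipartite matching
    # (illustrative sketch; adj[u] lists right-side neighbours of left vertex u).
    def max_bipartite_matching(adj, n_left, n_right):
        match_right = [-1] * n_right      # match_right[v] = left vertex matched to v

        def try_augment(u, visited):
            # Search for an augmenting path starting at left vertex u.
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    if match_right[v] == -1 or try_augment(match_right[v], visited):
                        match_right[v] = u
                        return True
            return False

        matching_size = 0
        for u in range(n_left):
            if try_augment(u, [False] * n_right):
                matching_size += 1
        return matching_size

    # Two left vertices, two right vertices, all four edges present:
    print(max_bipartite_matching({0: [0, 1], 1: [0, 1]}, 2, 2))  # -> 2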

Several natural problems are complete for P, including st-connectivity (or reachability) on alternating graphs.[2] The article on P-complete problems lists further relevant problems in P.

Relationships to other classes

Figure: Inclusions of complexity classes including P, NP, co-NP, BPP, P/poly, PH, and PSPACE.

A generalization of P is NP, which is the class of decision problems decidable by a non-deterministic Turing machine that runs in polynomial time. Equivalently, it is the class of decision problems where each "yes" instance has a polynomial size certificate, and certificates can be checked by a polynomial time deterministic Turing machine. The class of problems for which this is true for the "no" instances is called co-NP. P is trivially a subset of NP and of co-NP; most experts believe it is a proper subset,[3] although this belief (the P ≠ NP hypothesis) remains unproven. Another open problem is whether NP = co-NP; since P = co-P,[4] a negative answer would imply P ≠ NP.
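
The certificate view can be made concrete with SUBSET-SUM, a problem in NP (the code and names below are an illustration, not drawn from the article): a "yes" instance has a short certificate, namely the indices of a subset with the required sum, and the verifier checks it deterministically in polynomial time.

    # Polynomial-time verifier for SUBSET-SUM certificates (illustrative sketch).
    def verify_subset_sum(nums, target, certificate):
        """Accept iff the certificate is a set of distinct valid indices whose
        numbers sum to the target."""
        if len(set(certificate)) != len(certificate):
            return False                      # repeated indices are not allowed
        if any(i < 0 or i >= len(nums) for i in certificate):
            return False                      # index out of range
        return sum(nums[i] for i in certificate) == target

    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # 4 + 5 = 9 -> True
    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))  # 3 + 34 != 9 -> False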

P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using O(log n) space cannot use more than 2^{O(log n)} = n^{O(1)} time, because this is the total number of possible configurations; thus, L is a subset of P. Another important problem is whether L = P. We do know that P = AL, the set of problems solvable in logarithmic memory by alternating Turing machines. P is also known to be no larger than PSPACE, the class of problems decidable in polynomial space. Again, whether P = PSPACE is an open problem.
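
The counting bound in the preceding paragraph can be made explicit. For a machine with state set Q, work alphabet Γ, a read-only input of length n and O(log n) cells of work space (a standard argument sketched here, not a quotation from the article), a configuration is determined by the state, the two head positions and the work-tape contents, so the number of distinct configurations is at most

    \[ |Q| \cdot (n+1) \cdot O(\log n) \cdot |\Gamma|^{O(\log n)} \;=\; n^{O(1)}, \]

and a halting logspace decider therefore runs for at most polynomially many steps.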

To summarize:

L ⊆ AL = P ⊆ NP ⊆ PSPACE ⊆ EXPTIME

Here, EXPTIME is the class of problems solvable in exponential time. Of all the classes shown above, only two strict containments are known:

  • P is strictly contained in EXPTIME. Consequently, all EXPTIME-hard problems lie outside P, and at least one of the containments to the right of P above is strict (in fact, it is widely believed that all three are strict).
  • L is strictly contained in PSPACE.

The most difficult problems in P are P-complete problems.

Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical problems, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. On the other hand, it also contains some impractical problems, including some undecidable problems such as the unary version of any undecidable problem.
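
The advice mechanism can be sketched as follows (an illustration with hypothetical names; the advice table shown is invented for the example): the machine receives, besides the input x, an advice string chosen solely as a function of |x|. For a unary language {1^n : n ∈ S} one advice bit per input length suffices, which is why even an undecidable set S yields a language in P/poly.

    # Illustrative sketch of a P/poly-style decider for a unary language.
    # The advice may depend only on the input length, never on the input itself.
    def decide_with_advice(x: str, advice_bit: str) -> int:
        assert set(x) <= {"1"}        # unary input: x = "1" * n
        return int(advice_bit)        # advice for length n is "1" iff n is in S

    # Hypothetical advice table: for an undecidable S no algorithm can compute
    # it, but the table still exists, which is all that P/poly requires.
    advice_for_length = {0: "0", 1: "1", 2: "0", 3: "1"}
    x = "111"
    print(decide_with_advice(x, advice_for_length[len(x)]))  # -> 1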

In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Mitsunori Ogihara, showed that if there exists a sparse language that is P-complete, then L = P.[5]

Figure: P in relation to probabilistic complexity classes (ZPP, RP, co-RP, BPP, BQP, PP), all within PSPACE. It is unknown if any of these containments are strict.

P is contained in BQP; it is unknown whether this containment is strict.

Properties

Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function that is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P is low for itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such as random access, that can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine.
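
Quantitatively, closure under composition follows from a simple bound (a standard argument stated here for concreteness, not a quotation from the cited sources): if the outer algorithm runs for at most p(n) steps on inputs of length n, then every argument it passes to a subroutine has length at most p(n), so replacing each constant-time call by a subroutine running in time q gives a total running time of at most

    \[ T(n) \;\le\; p(n)\,\bigl(1 + q(p(n))\bigr), \]

which is again bounded by a polynomial whenever p and q are polynomials.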

Languages in P are also closed under reversal, intersection, union, concatenation, Kleene closure, inverse homomorphism, and complementation.[6]

Pure existence proofs of polynomial-time algorithms

Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, the Robertson–Seymour theorem guarantees that there is a finite list of forbidden minors that characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an O(n3) algorithm for determining whether a graph has a given graph as a minor. This yields a nonconstructive proof that there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem.

Alternative characterizations

In descriptive complexity, P can be described as the problems expressible in FO(LFP), the first-order logic with a least fixed point operator added to it, on ordered structures. In Immerman's 1999 textbook on descriptive complexity,[7] Immerman ascribes this result to Vardi[8] and to Immerman.[9]
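
As a standard illustration (not drawn verbatim from the cited sources), reachability over graphs with edge relation E is expressible in FO(LFP) as the least fixed point of the one-step closure operator:

    \[ \mathrm{reach}(u,v) \;\equiv\; \bigl[\mathrm{LFP}_{R,x,y}\,\bigl(x = y \,\lor\, \exists z\,(E(x,z) \land R(z,y))\bigr)\bigr](u,v). \]

The least fixed point of the bracketed operator is the reflexive–transitive closure of E, and the fact that such fixed points can be evaluated in polynomial time is what places every FO(LFP)-definable query (on ordered structures) in P.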

In 2001, PTIME was shown to correspond to (positive) range concatenation grammars.[10]

P can also be defined as an algorithmic complexity class for problems that are not decision problems[11] (even though, for example, finding the solution to a 2-satisfiability instance in polynomial time automatically gives a polynomial algorithm for the corresponding decision problem). In that case P is not a subset of NP, but P∩DEC is, where DEC is the class of decision problems.
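
The 2-satisfiability example can be made concrete (the code below is the standard implication-graph construction, with conventions chosen here rather than taken from the article): the decision version of 2-SAT is solvable in polynomial time by checking that no variable lies in the same strongly connected component as its negation, and a satisfying assignment for the corresponding search problem can then be read off from the same component structure.

    # Polynomial-time decision procedure for 2-SAT via the implication graph
    # (illustrative sketch).  A clause (a OR b) contributes the implications
    # (NOT a -> b) and (NOT b -> a); the formula is satisfiable iff no variable
    # shares a strongly connected component with its negation.
    def two_sat_satisfiable(num_vars, clauses):
        """clauses: pairs of nonzero ints, k meaning x_k and -k meaning NOT x_k."""
        n = 2 * num_vars                  # node 2i: x_(i+1) true, node 2i+1: false

        def node(lit):
            return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

        def negate(v):
            return v ^ 1

        graph = [[] for _ in range(n)]
        rgraph = [[] for _ in range(n)]
        for a, b in clauses:
            for u, v in ((negate(node(a)), node(b)), (negate(node(b)), node(a))):
                graph[u].append(v)
                rgraph[v].append(u)

        # Kosaraju's algorithm: first pass records finish order iteratively.
        order, seen = [], [False] * n
        for s in range(n):
            if seen[s]:
                continue
            seen[s] = True
            stack = [(s, 0)]
            while stack:
                v, i = stack.pop()
                if i < len(graph[v]):
                    stack.append((v, i + 1))
                    w = graph[v][i]
                    if not seen[w]:
                        seen[w] = True
                        stack.append((w, 0))
                else:
                    order.append(v)

        # Second pass labels components on the reversed graph.
        comp, label = [-1] * n, 0
        for s in reversed(order):
            if comp[s] != -1:
                continue
            comp[s] = label
            stack = [s]
            while stack:
                v = stack.pop()
                for w in rgraph[v]:
                    if comp[w] == -1:
                        comp[w] = label
                        stack.append(w)
            label += 1

        return all(comp[2 * i] != comp[2 * i + 1] for i in range(num_vars))

    print(two_sat_satisfiable(2, [(1, 2), (-1, 2)]))   # satisfiable -> True
    print(two_sat_satisfiable(1, [(1, 1), (-1, -1)]))  # x1 AND NOT x1 -> False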

History

Kozen[12] states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time," though Rabin also invented the notion independently and around the same time (Rabin's paper[13] appeared in a 1967 proceedings of a 1966 conference, while Cobham's[14] appeared in a 1965 proceedings of a 1964 conference and Edmonds's[15] was published in a journal in 1965; Rabin makes no mention of either and was apparently unaware of them). Cobham invented the class as a robust way of characterizing efficient algorithms, leading to Cobham's thesis. However, H. C. Pocklington, in a 1910 paper,[16][17] analyzed two algorithms for solving quadratic congruences, and observed that one took time "proportional to a power of the logarithm of the modulus" while the other took time proportional "to the modulus itself or its square root", thus explicitly drawing a distinction between an algorithm that ran in polynomial time and one that ran in (moderately) exponential time.

Notes

  1. ^ Agrawal, Manindra; Kayal, Neeraj; Saxena, Nitin (2004). "PRIMES is in P". Annals of Mathematics. 160 (2): 781–793.
  2. ^ Immerman, Neil (1999). Descriptive Complexity. New York: Springer-Verlag. ISBN 978-0-387-98600-5.
  3. ^ Johnsonbaugh, Richard F.; Schaefer, Marcus (2004). Algorithms. Pearson Education. p. 458. ISBN 0-02-360692-4.
  4. ^ "complexity theory - Why is co-P = P". Stack Overflow. Archived from the original on 14 October 2020. Retrieved 2020-10-14.
  5. ^ Cai, Jin-Yi; Sivakumar, D. (April 1999). "Sparse Hard Sets for P: Resolution of a Conjecture of Hartmanis". Journal of Computer and System Sciences. 58 (2): 280–296. doi:10.1006/jcss.1998.1615.
  6. ^ Hopcroft, John E.; Rajeev Motwani; Jeffrey D. Ullman (2001). Introduction to automata theory, languages, and computation (2. ed.). Boston: Addison-Wesley. pp. 425–426. ISBN 978-0201441246.
  7. ^ Immerman, Neil (1999). Descriptive Complexity. New York: Springer-Verlag. p. 66. ISBN 978-0-387-98600-5.
  8. ^ Vardi, Moshe Y. (1982). "The Complexity of Relational Query Languages". STOC '82: Proceedings of the fourteenth annual ACM symposium on Theory of computing. pp. 137–146. doi:10.1145/800070.802186.
  9. ^ Immerman, Neil (1982). "Relational Queries Computable in Polynomial Time". STOC '82: Proceedings of the fourteenth annual ACM symposium on Theory of computing. pp. 147–152. doi:10.1145/800070.802187. Revised version in Information and Control, 68 (1986), 86–104.
  10. ^ Laura Kallmeyer (2010). Parsing Beyond Context-Free Grammars. Springer Science & Business Media. pp. 5 and 37. ISBN 978-3-642-14846-0. citing http://mjn.host.cs.st-andrews.ac.uk/publications/2001d.pdf for the proof
  11. ^ Wegener, Ingo (2005). Complexity Theory. Springer-Verlag. p. 35. doi:10.1007/3-540-27477-4. ISBN 978-3-540-21045-0.
  12. ^ Kozen, Dexter C. (2006). Theory of Computation. Springer. p. 4. ISBN 978-1-84628-297-3.
  13. ^ Rabin, Michael O. (1967). "Mathematical theory of automata". Mathematical Aspects of Computer Science. Proc. Sympos. Appl. Math., Vol. XIX. Amer. Math. Soc. pp. 153–175.
  14. ^ Cobham, Alan (1965). "The intrinsic computational difficulty of functions". Logic, Methodology and Philos. Sci. (Proc. 1964 Internat. Congr.). pp. 24–30.
  15. ^ Edmonds, J. (1965). "Paths, trees, and flowers". Canadian J. Math. 17: 449–467. doi:10.4153/CJM-1965-045-4.
  16. ^ Pocklington, H. C. (1910–1912). "The determination of the exponent to which a number belongs, the practical solution of certain congruences, and the law of quadratic reciprocity". Proc. Camb. Phil. Soc. 16: 1–5.
  17. ^ Gautschi, Walter (1994). Mathematics of computation, 1943–1993: a half-century of computational mathematics: Mathematics of Computation 50th Anniversary Symposium, August 9–13, 1993, Vancouver, British Columbia. Providence, RI: American Mathematical Society. pp. 503–504. ISBN 978-0-8218-0291-5.
