Computability theory

Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.

Basic questions addressed by computability theory include:

  • What does it mean for a function on the natural numbers to be computable?
  • How can noncomputable functions be classified into a hierarchy based on their level of noncomputability?

Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages. The study of which mathematical constructions can be effectively performed is sometimes called recursive mathematics.[a]

Introduction

    n      2   3   4    5      6                 7                        ...
    Σ(n)   4   6   13   4098   > 3.5×10^18267    > 10^10^10^10^18705353   ?

The Busy Beaver function Σ(n) grows faster than any computable function. Hence, it is not computable;[2] only a few values are known.

Computability theory originated in the 1930s, with the work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post.[3][b]

The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation. In 1952, these results led Kleene to coin the two names "Church's thesis"[4]: 300  and "Turing's thesis".[4]: 376  Nowadays these are often considered as a single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis:[5]: 84 

"Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute notion to an interesting epistemological notion, i.e., one not depending on the formalism chosen."[5]: 84 [6]

With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be effectively decided. In 1936, Church[7][8] and Turing[9], inspired by techniques Gödel had used in 1931 to prove his incompleteness theorems, independently demonstrated that the Entscheidungsproblem is not effectively decidable. This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false.

After these initial examples were established, many other problems in mathematics were shown to be undecidable.[c] In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in a finitely presented group, will decide whether the element represented by the word is the identity element of the group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; this problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in the integers.

Turing computability

The main form of computability studied in the field was introduced by Turing in 1936.[9] A set of natural numbers is said to be a computable set (also called a decidable, recursive, or Turing computable set) if there is a Turing machine that, given a number n, halts with output 1 if n is in the set and halts with output 0 if n is not in the set. A function f from natural numbers to natural numbers is a (Turing) computable, or recursive function if there is a Turing machine that, on input n, halts and returns output f(n). The use of Turing machines here is not necessary; there are many other models of computation that have the same computing power as Turing machines; for example the μ-recursive functions obtained from primitive recursion and the μ operator.
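To make the definition concrete, the sketch below uses ordinary Python functions as stand-ins for Turing machines; the function names are illustrative only. A decider for a computable set always halts and reports membership with 1 or 0, and a total computable function always halts with its output.

```python
# A minimal sketch, with Python functions standing in for Turing machines.

def decide_even(n: int) -> int:
    """Decider for the (computable) set of even numbers:
    halts on every n, outputs 1 for members and 0 for non-members."""
    return 1 if n % 2 == 0 else 0

def double(n: int) -> int:
    """A total computable function: halts on every input n with output f(n)."""
    return 2 * n

assert [decide_even(n) for n in range(5)] == [1, 0, 1, 0, 1]
assert double(21) == 42
```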

The terminology for computable functions and sets is not completely standardized. The definition in terms of μ-recursive functions as well as a different definition of rekursiv functions by Gödel led to the traditional name recursive for sets and functions computable by a Turing machine. The word decidable stems from the German word Entscheidungsproblem which was used in the original papers of Turing and others. In contemporary use, the term "computable function" has various definitions: according to Nigel J. Cutland,[10] it is a partial recursive function (which can be undefined for some inputs), while according to Robert I. Soare[11] it is a total recursive (equivalently, general recursive) function. This article follows the second of these conventions. In 1996, Soare[12] gave additional comments about the terminology.

Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing machines that halt on input 0, is a well-known example of a noncomputable set. The existence of many noncomputable sets follows from the facts that there are only countably many Turing machines, and thus only countably many computable sets, but, by Cantor's theorem, there are uncountably many sets of natural numbers.
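The standard proof that no such decider exists is a diagonal argument. The sketch below is only illustrative: the names halts and diagonal are hypothetical, since the whole point is that no total computable halts can exist.

```python
# A hedged sketch of the diagonal argument; `halts` is hypothetical.

def halts(program, argument) -> bool:
    """Supposed total decider for the halting problem (cannot exist)."""
    raise NotImplementedError("assumed only for the sake of contradiction")

def diagonal(program):
    """Halts exactly when `program` does not halt on its own description."""
    if halts(program, program):
        while True:      # run forever
            pass
    return 0             # halt immediately

# Asking whether diagonal halts on itself is contradictory either way:
# halts(diagonal, diagonal) can be neither True nor False,
# so no total computable `halts` exists.
```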

Although the halting problem is not computable, it is possible to simulate program execution and produce an infinite list of the programs that do halt. Thus the halting problem is an example of a computably enumerable (c.e.) set, which is a set that can be enumerated by a Turing machine (other terms for computably enumerable include recursively enumerable and semidecidable). Equivalently, a set is c.e. if and only if it is the range of some computable function. The c.e. sets, although not decidable in general, have been studied in detail in computability theory.
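The enumeration can be produced by dovetailing: simulate more and more programs for more and more steps, reporting each one seen to halt. The sketch below is a toy version in which "programs" are Python generators and one next() call stands in for one machine step; all names and the example programs are illustrative.

```python
# A toy dovetailing enumerator: at stage `bound`, run the first `bound`
# programs for `bound` steps each, and report every program seen to halt.
# This enumerates a c.e. set without ever certifying non-membership.

from itertools import count

def enumerate_halting(programs):
    """Yield indices of the toy programs (generators) that halt."""
    reported = set()
    for bound in count(1):                     # stage `bound`
        for i, prog in enumerate(programs[:bound]):
            if i in reported:
                continue
            gen = prog()                       # fresh run of program i
            try:
                for _ in range(bound):         # simulate `bound` steps
                    next(gen)
            except StopIteration:              # program i halted
                reported.add(i)
                yield i

# Toy "programs": halts after 3 steps, never halts, halts immediately.
programs = [lambda: iter(range(3)), lambda: iter(count()), lambda: iter(range(0))]
halting = enumerate_halting(programs)
print(next(halting), next(halting))            # -> 2 0
```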

Areas of research

Beginning with the theory of computable sets and functions described above, the field of computability theory has grown to include the study of many closely related topics. These are not independent areas of research: each of these areas draws ideas and results from the others, and most computability theorists are familiar with the majority of them.

Relative computability and the Turing degrees

Computability theory in mathematical logic has traditionally focused on relative computability, a generalization of Turing computability defined using oracle Turing machines, introduced by Turing in 1939.[13] An oracle Turing machine is a hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions of an oracle, which is a particular set of natural numbers. The oracle machine may only ask questions of the form "Is n in the oracle set?". Each question will be immediately answered correctly, even if the oracle set is not computable. Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an oracle cannot.

Informally, a set of natural numbers A is Turing reducible to a set B if there is an oracle machine that correctly tells whether numbers are in A when run with B as the oracle set (in this case, the set A is also said to be (relatively) computable from B and recursive in B). If a set A is Turing reducible to a set B and B is Turing reducible to A then the sets are said to have the same Turing degree (also called degree of unsolvability). The Turing degree of a set gives a precise measure of how uncomputable the set is.
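A hedged sketch of the query mechanism follows, with the oracle modelled as a membership function passed to the reduction procedure; the particular sets A and B used here are illustrative stand-ins, not taken from the literature.

```python
# A minimal sketch: an oracle is modelled as a membership test for B,
# and a Turing reduction is a procedure that decides A while asking
# B-queries. The sets used here are only illustrative.

from typing import Callable

Oracle = Callable[[int], bool]

def decide_A(n: int, in_B: Oracle) -> int:
    """Decides the illustrative set A = {n : n or n+1 lies in B}
    using two oracle queries; each query is answered immediately,
    even if B itself is not computable."""
    return 1 if (in_B(n) or in_B(n + 1)) else 0

# Example run with a computable stand-in oracle (multiples of 3):
print([decide_A(n, lambda k: k % 3 == 0) for n in range(6)])   # -> [1, 0, 1, 1, 0, 1]
```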

The natural examples of sets that are not computable, including many different sets that encode variants of the halting problem, have two properties in common:

  1. They are computably enumerable, and
  2. Each can be translated into any other via a many-one reduction. That is, given such sets A and B, there is a total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or m-equivalent).

Many-one reductions are "stronger" than Turing reductions: if a set A is many-one reducible to a set B, then A is Turing reducible to B, but the converse does not always hold. Although the natural examples of noncomputable sets are all many-one equivalent, it is possible to construct computably enumerable sets A and B such that A is Turing reducible to B but not many-one reducible to B. It can be shown that every computably enumerable set is many-one reducible to the halting problem, and thus the halting problem is the most complicated computably enumerable set with respect to many-one reducibility and with respect to Turing reducibility. In 1944, Post[14] asked whether every computably enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is no computably enumerable set with a Turing degree intermediate between those two.
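As a concrete (and purely illustrative) instance of the definition A = {x : f(x) ∈ B}, the sketch below many-one reduces the multiples of 6 to the multiples of 3 via a total computable f; the sets and the function are assumptions chosen for the example, not drawn from the article.

```python
# A minimal sketch of a many-one reduction with illustrative sets:
# A = multiples of 6, B = multiples of 3, and A = {x : f(x) in B}.

def f(x: int) -> int:
    return x if x % 2 == 0 else 1      # 1 is never a multiple of 3

def in_B(y: int) -> bool:
    return y % 3 == 0                  # stand-in membership test for B

def in_A_via_reduction(x: int) -> bool:
    return in_B(f(x))                  # one query, answer passed through

assert all(in_A_via_reduction(x) == (x % 6 == 0) for x in range(100))
```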

As intermediate results, Post defined natural types of computably enumerable sets like the simple, hypersimple and hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of computably enumerable sets of intermediate Turing degree; this problem became known as Post's problem. After ten years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable sets and the halting problem, but they failed to show that any of these degrees contains a computably enumerable set. Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets which turned out to possess a very complicated and non-trivial structure.

There are uncountably many sets that are not computably enumerable, and the investigation of the Turing degrees of all sets is as central in computability theory as the investigation of the computably enumerable Turing degrees. Many degrees with special properties were constructed: hyperimmune-free degrees where every function computable relative to that degree is majorized by an (unrelativized) computable function; high degrees relative to which one can compute a function f which dominates every computable function g in the sense that there is a constant c depending on g such that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic sets; and the degrees below the halting problem of limit-computable sets.

The study of arbitrary (not necessarily computably enumerable) Turing degrees involves the study of the Turing jump. Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle Turing machines running with oracle A. The Turing jump of any set is always of higher Turing degree than the original set, and a theorem of Friedberg shows that any set that computes the halting problem can be obtained as the Turing jump of another set. Post's theorem establishes a close relationship between the Turing jump operation and the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability in arithmetic.
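In one standard notation (not introduced elsewhere in this article), writing Φ_e^A for the e-th oracle Turing machine run with oracle A, the jump described above can be written as:

```latex
A' \;=\; \{\, e \in \mathbb{N} \;:\; \Phi_e^{A}(e)\ \text{halts} \,\}
```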

Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set of Turing degrees containing computably enumerable sets. A deep theorem of Shore and Slaman[15] states that the function mapping a degree x to the degree of its Turing jump is definable in the partial order of the Turing degrees. A survey by Ambos-Spies and Fejer[16] gives an overview of this research and its historical progression.

Other reducibilities

An ongoing area of research in computability theory studies reducibility relations other than Turing reducibility. Post[14] introduced several strong reducibilities, so named because they imply truth-table reducibility. A Turing machine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with. Weak reducibilities are those where a reduction process may not terminate for all oracles; Turing reducibility is one example.

The strong reducibilities include:

One-one reducibility: A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f such that each n is in A if and only if f(n) is in B.
Many-one reducibility: This is essentially one-one reducibility without the constraint that f be injective. A is many-one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only if f(n) is in B.
Truth-table reducibility: A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine that computes a total function regardless of the oracle it is given. Because of the compactness of Cantor space, this is equivalent to saying that the reduction simultaneously presents a single list of questions (depending only on the input) to the oracle and then, having seen their answers, produces an output without asking additional questions, regardless of the oracle's answers to the initial queries. Many variants of truth-table reducibility have also been studied.

Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed in the article Reduction (computability theory).

The major research on strong reducibilities has been to compare their theories, both for the class of all computably enumerable sets and for the class of all subsets of the natural numbers. Furthermore, the relations between the reducibilities have been studied. For example, it is known that every Turing degree is either a truth-table degree or is the union of infinitely many truth-table degrees.

Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have also been studied. The most well known are arithmetical reducibility and hyperarithmetical reducibility. These reducibilities are closely connected to definability over the standard model of arithmetic.

Rice's theorem and the arithmetical hierarchy

Rice showed that for every nontrivial class C (which contains some but not all c.e. sets) the index set E = {e: the eth c.e. set We is in C} has the property that either the halting problem or its complement is many-one reducible to E, that is, can be mapped using a many-one reduction to E (see Rice's theorem for more detail). But many of these index sets are even more complicated than the halting problem. These types of sets can be classified using the arithmetical hierarchy. For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3, and the index set COMP of the class of all Turing-complete sets is on the level Σ4. These hierarchy levels are defined inductively: Σn+1 contains exactly the sets which are computably enumerable relative to some set in Σn, and Σ1 contains the computably enumerable sets. The index sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the given index sets.
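To illustrate how such a classification is read, FIN can be written with an existential-universal quantifier prefix over a computable matrix; here W_{e,s} denotes the finite part of W_e enumerated within s steps, a standard notation assumed for this sketch:

```latex
\mathrm{FIN}
  \;=\; \{\, e : W_e \text{ is finite} \,\}
  \;=\; \{\, e : \exists m\, \forall x\, \forall s\; ( x \in W_{e,s} \rightarrow x < m ) \,\}
```

The matrix is computable and the prefix collapses to ∃∀, which is why FIN sits at level Σ2.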

Reverse mathematics

The program of reverse mathematics asks which set-existence axioms are necessary to prove particular theorems of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was studied in detail by Stephen Simpson and others; in 1999, Simpson[17] gave a detailed discussion of the program. The set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is recursive comprehension, which states that the powerset of the naturals is closed under Turing reducibility.
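In one common formulation (a sketch, not a quotation of any particular axiomatization), recursive comprehension is the Δ01 comprehension scheme: whenever a Σ01 formula φ and a Π01 formula ψ define the same condition, the corresponding set exists:

```latex
\forall n\,\bigl(\varphi(n) \leftrightarrow \psi(n)\bigr)
\;\rightarrow\;
\exists X\, \forall n\,\bigl(n \in X \leftrightarrow \varphi(n)\bigr)
```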

Numberings

A numbering is an enumeration of functions; it has two parameters, e and x, and outputs the value of the e-th function in the numbering on the input x. Numberings can be partial computable, although some of their members are total computable functions. Admissible numberings are those into which all others can be translated. A Friedberg numbering (named after its discoverer) is a one-one numbering of all partial computable functions; it is necessarily not an admissible numbering. Later research dealt also with numberings of other classes, like classes of computably enumerable sets. Goncharov discovered, for example, a class of computably enumerable sets for which the numberings fall into exactly two classes with respect to computable isomorphisms.

The priority method

Post's problem was solved with a method called the priority method; a proof using this method is called a priority argument. This method is primarily used to construct computably enumerable sets with particular properties. To use this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties. Each requirement is assigned a natural number representing its priority; 0 is assigned to the most important requirement, 1 to the second most important, and so on. The set is then constructed in stages, each stage attempting to satisfy one or more of the requirements by either adding numbers to the set or banning numbers from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause another to become unsatisfied; the priority order is used to decide what to do in such an event.
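The toy sketch below shows only the stage-by-stage, requirement-driven skeleton of such constructions, using Post's construction of a simple set with the c.e. sets W_e modelled as Python generators; genuine priority arguments add the bookkeeping for requirements injuring one another, which this sketch omits.

```python
# A toy requirement-driven stage construction (Post's simple set),
# with the c.e. sets W_e modelled as Python generators. Requirement
# P_e: if W_e is infinite, put one of its elements into A; the guard
# "only take x > 2*e" keeps the complement of A infinite.

from itertools import count

def build_simple_set(W, stages=200):
    """Run `stages` stages over the toy c.e. sets W[0], W[1], ... and
    return the finite part of A constructed so far."""
    A, satisfied = set(), set()
    enums = [iter(w()) for w in W]             # one enumeration per W_e
    for _ in range(stages):                    # each stage does a little work
        for e, enum in enumerate(enums):
            if e in satisfied:
                continue
            x = next(enum, None)               # enumerate one more element of W_e
            if x is not None and x > 2 * e:    # requirement P_e acts
                A.add(x)
                satisfied.add(e)
    return A

# Toy c.e. sets: W_0 = evens, W_1 = squares, W_2 = {0, 1, 2} (finite).
W = [lambda: count(0, 2), lambda: (n * n for n in count()), lambda: iter(range(3))]
print(build_simple_set(W))    # -> {2, 4}; the finite W_2 never contributes
```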

Priority arguments have been employed to solve many problems in computability theory, and have been classified into a hierarchy based on their complexity.[11] Because complex priority arguments can be technical and difficult to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results proved with priority arguments can also be proved without them. For example, Kummer published a proof of the existence of Friedberg numberings that does not use the priority method.

The lattice of computably enumerable sets

When Post defined the notion of a simple set as a c.e. set with an infinite complement not containing any infinite c.e. set, he started to study the structure of the computably enumerable sets under inclusion. This lattice became a well-studied structure. Computable sets can be defined in this structure by the basic result that a set is computable if and only if the set and its complement are both computably enumerable. Infinite c.e. sets always have infinite computable subsets; on the other hand, simple sets exist but have no coinfinite computable superset. Post[14] had already introduced hypersimple and hyperhypersimple sets; later, maximal sets were constructed: c.e. sets such that every c.e. superset is either a finite variant of the given maximal set or is co-finite. Post's original motivation in the study of this lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of the computable sets nor in the Turing degree of the halting problem. Post did not find such a property, and the solution to his problem applied priority methods instead; Harrington and Soare[18] eventually found such a property in 1991.
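The complementation result just mentioned has a short algorithmic reading, sketched below with toy Python enumerators standing in for Turing machines: to decide n, run enumerations of the set and of its complement in parallel until n appears in one of them.

```python
# A minimal sketch of the complementation argument: if a set and its
# complement can both be enumerated, membership is decidable.

from itertools import count

def decide(n, enum_A, enum_coA):
    """Decide n ∈ A given enumerators of A and of its complement.
    Terminates because n appears in exactly one of the two lists."""
    gen_in, gen_out = enum_A(), enum_coA()
    while True:
        if next(gen_in) == n:
            return 1
        if next(gen_out) == n:
            return 0

# Toy example: A = even numbers, complement = odd numbers.
evens = lambda: (2 * k for k in count())
odds  = lambda: (2 * k + 1 for k in count())
print([decide(n, evens, odds) for n in range(6)])   # -> [1, 0, 1, 0, 1, 0]
```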

Automorphism problems

Another important question is the existence of automorphisms in computability-theoretic structures. One such structure is the lattice of computably enumerable sets under inclusion modulo finite difference; in this structure, A is below B if and only if the set difference A − B is finite. Maximal sets (as defined in the previous paragraph) have the property that they cannot be automorphic to non-maximal sets, that is, if there is an automorphism of the computably enumerable sets under the structure just mentioned, then every maximal set is mapped to another maximal set. In 1974, Soare[19] showed that the converse also holds: any two maximal sets are automorphic. So the maximal sets form an orbit: every automorphism preserves maximality, and any two maximal sets are transformed into each other by some automorphism. Harrington gave a further example of an automorphic property: that of the creative sets, the sets which are many-one equivalent to the halting problem.

Besides the lattice of computably enumerable sets, automorphisms are also studied for the structure of the Turing degrees of all sets as well as for the structure of the Turing degrees of c.e. sets. In both cases, Cooper claims to have constructed nontrivial automorphisms which map some degrees to other degrees; this construction has, however, not been verified, and some colleagues believe that it contains errors. The question of whether there is a nontrivial automorphism of the Turing degrees thus remains one of the main unsolved questions in this area.[20][16]

Kolmogorov complexity

The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time). The main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine when an infinite sequence (equivalently, characteristic function of a subset of the natural numbers) is random or not by invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area.[d]
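In symbols, the measure just described is usually written as follows, relative to the fixed universal machine U:

```latex
C_U(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}
```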

Frequency computation

This branch of computability theory analyzes the following question: for fixed m and n with 0 < m < n, for which sets A is it possible to compute, for any n pairwise different inputs x1, x2, ..., xn, a tuple of n numbers y1, y2, ..., yn such that at least m of the equations A(xk) = yk are true? Such sets are known as (m, n)-recursive sets. The first major result in this branch of computability theory is Trakhtenbrot's result that a set is computable if it is (m, n)-recursive for some m, n with 2m > n. On the other hand, Jockusch's semirecursive sets (which were already known informally before Jockusch introduced them in 1968) are examples of sets which are (m, n)-recursive if and only if 2m < n + 1. There are uncountably many of these sets, and also some computably enumerable but noncomputable sets of this type. Later, Degtev established a hierarchy of computably enumerable sets that are (1, n + 1)-recursive but not (1, n)-recursive. After a long phase of research by Russian scientists, this subject became repopularized in the West by Beigel's thesis on bounded queries, which linked frequency computation to the above-mentioned bounded reducibilities and other related notions. One of the major results was Kummer's cardinality theorem, which states that a set A is computable if and only if there is an n such that some algorithm enumerates, for each tuple of n different numbers, up to n many possible choices for the cardinality of this set of n numbers intersected with A; these choices must contain the true cardinality but leave out at least one false one.
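One way to state the (m, n)-recursiveness condition above more formally, writing A(x) for the characteristic function of A (a paraphrase for this article, not a quotation from the literature): A is (m, n)-recursive if there is a total computable g such that, for all pairwise distinct x1, ..., xn,

```latex
g(x_1, \dots, x_n) = (y_1, \dots, y_n)
\quad\text{with}\quad
\#\{\, k \le n : y_k = A(x_k) \,\} \;\ge\; m .
```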

Inductive inference

This is the computability-theoretic branch of learning theory. It is based on E. Mark Gold's 1967 model of learning in the limit and has since developed more and more models of learning. The general scenario is the following: given a class S of computable functions, is there a learner (that is, a computable functional) which, for any input of the form (f(0), f(1), ..., f(n)), outputs a hypothesis? A learner M learns a function f if almost all of its hypotheses are the same index e of f with respect to a previously agreed-on acceptable numbering of all computable functions; M learns S if M learns every f in S. Basic results are that all computably enumerable classes of functions are learnable while the class REC of all computable functions is not learnable. Many related models have been considered; the learning of classes of computably enumerable sets from positive data has also been studied ever since Gold's pioneering paper of 1967.
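A toy sketch of "learning by enumeration" follows; the candidate class and target function are hypothetical stand-ins chosen so that the example runs, and the learner simply outputs the least candidate index consistent with the data seen so far.

```python
# A toy learner: on each data segment (f(0), ..., f(n)) it outputs the
# least index of a candidate consistent with the segment. For a finite
# class of total functions the sequence of hypotheses converges.

def learner(candidates, segment):
    """Return the least index e such that candidates[e] agrees with the
    observed values f(0), ..., f(n); None if no candidate fits."""
    for e, h in enumerate(candidates):
        if all(h(x) == y for x, y in enumerate(segment)):
            return e
    return None

# Candidate class S: the constant functions 0 and 1, and the doubling function.
S = [lambda x: 0, lambda x: 1, lambda x: 2 * x]
f = lambda x: 2 * x                              # target function to be learned

hypotheses = [learner(S, [f(x) for x in range(n + 1)]) for n in range(5)]
print(hypotheses)   # -> [0, 2, 2, 2, 2]: the guesses converge to index 2
```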

Generalizations of Turing computability

Computability theory includes the study of generalized notions of computability such as arithmetical reducibility, hyperarithmetical reducibility and α-recursion theory, as described by Sacks in 1990.[21] These generalized notions include reducibilities that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These studies include approaches to investigate the analytical hierarchy, which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas are linked to the theories of well-orderings and trees; for example, the set of all indices of computable (nonbinary) trees without infinite branches is complete for the level Π11 of the analytical hierarchy. Both Turing reducibility and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion of degrees of constructibility is studied in set theory.

Continuous computability theory

Computability theory for digital computation is well developed. It is less well developed for the analog computation that occurs in analog computers, analog signal processing, analog electronics, artificial neural networks and continuous-time control theory, modelled by differential equations and continuous dynamical systems.[22][23] For example, models of computation such as the Blum–Shub–Smale machine model have formalized computation on the reals.

Relationships between definability, proof and computability

There are close relationships between the Turing degree of a set of natural numbers and the difficulty (in terms of the arithmetical hierarchy) of defining that set using a first-order formula. One such relationship is made precise by Post's theorem. A weaker relationship was demonstrated by Kurt Gödel in the proofs of his completeness theorem and incompleteness theorems. Gödel's proofs show that the set of logical consequences of an effective first-order theory is a computably enumerable set, and that if the theory is strong enough this set will be uncomputable. Similarly, Tarski's indefinability theorem can be interpreted both in terms of definability and in terms of computability.

Computability theory is also linked to second-order arithmetic, a formal theory of natural numbers and sets of natural numbers. The fact that certain sets are computable or relatively computable often implies that these sets can be defined in weak subsystems of second-order arithmetic. The program of reverse mathematics uses these subsystems to measure the non-computability inherent in well known mathematical theorems. In 1999, Simpson[17] discussed many aspects of second-order arithmetic and reverse mathematics.

The field of proof theory includes the study of second-order arithmetic and Peano arithmetic, as well as formal theories of the natural numbers weaker than Peano arithmetic. One method of classifying the strength of these weak systems is by characterizing which computable functions the system can prove to be total.[24] For example, in primitive recursive arithmetic any computable function that is provably total is actually primitive recursive, while Peano arithmetic proves that functions like the Ackermann function, which are not primitive recursive, are total. Not every total computable function is provably total in Peano arithmetic, however; an example of such a function is provided by Goodstein's theorem.
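For reference, the Ackermann function mentioned above has a very short recursive definition; the Python transcription below is total on the natural numbers but grows far too fast to be primitive recursive (and quickly exhausts the call stack in practice).

```python
# The Ackermann function (Péter's two-argument form): total and
# computable, but not primitive recursive.

def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))   # -> 9
print(ackermann(3, 3))   # -> 61
```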

Name

The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days. Robert I. Soare, a prominent researcher in the field, has proposed[12] that the field should be called "computability theory" instead. He argues that Turing's terminology using the word "computable" is more natural and more widely understood than the terminology using the word "recursive" introduced by Kleene. Many contemporary researchers have begun to use this alternate terminology.[e] These researchers also use terminology such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow[25] and Simpson.[26] Some commentators argue that both the names recursion theory and computability theory fail to convey the fact that most of the objects studied in computability theory are not computable.[27]

In 1967, Rogers[28] suggested that a key property of computability theory is that its results and structures should be invariant under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest in computability theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the natural numbers.

Professional organizations

The main professional organization for computability theory is the Association for Symbolic Logic, which holds several research conferences each year. The interdisciplinary research Association Computability in Europe (CiE) also organizes a series of annual conferences.

Notes

  1. ^ The Handbook of Recursive Mathematics[1] covers many of the known results in this field.
  2. ^ Many of these foundational papers are collected in The Undecidable (1965) edited by Martin Davis
  3. ^ The list of undecidable problems gives additional examples.
  4. ^ A list of open problems is maintained by Joseph Miller and André Nies and is published on André Nies's homepage.
  5. ^ MathSciNet searches for titles containing "computably enumerable" and "c.e." show that many papers have been published with this terminology as well as with the other one.

References

  1. ^ Ershov, Yury Leonidovich; Goncharov, Sergei Savostyanovich; Nerode, Anil; Remmel, Jeffrey B. (1998). Handbook of Recursive Mathematics. North-Holland. ISBN 0-7204-2285-X.
  2. ^ Radó, Tibor (May 1962). "On non-computable functions". Bell System Technical Journal. 41 (3): 877–884. doi:10.1002/j.1538-7305.1962.tb00480.x.
  3. ^ Soare, Robert Irving (2011-12-22). "Computability Theory and Applications: The Art of Classical Computability" (PDF). Department of Mathematics. University of Chicago. Archived (PDF) from the original on 2022-06-30. Retrieved 2017-08-23.
  4. ^ a b Kleene, Stephen Cole (1952). Introduction to Metamathematics. North-Holland. pp. 300, 376. ISBN 0-7204-2103-9.
  5. ^ a b Davis, Martin, ed. (2004) [1965]. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions. Dover Publications, Inc. p. 84. ISBN 978-0-486-43228-1. p. 84: Kurt Gödel (1946): Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute notion to an interesting epistemological notion, i.e., one not depending on the formalism chosen.
  6. ^ Gödel, Kurt (1990). "[Gödel (1946)]". In Feferman, Solomon; et al. (eds.). Kurt Gödel Publications 1938–1974 Volume II. Vol. II. New York, USA: Oxford University Press. pp. 144ff. ISBN 978-0-19-514721-6. p. 150: To be more precise: a function of integers is computable in any formal system containing arithmetic if and only if it is computable in arithmetic, where a function f is called computable in S if there is in S a computable term representing f. (NB. This volume also includes the 1946 paper by Kurt Gödel (with commentary by Charles Parsons at pp. 144ff.). This 1990 edition has the cited footnote added by Gödel on p. 150 (which had also been added to Gödel's reprint in Davis' 1965 compilation).)
  7. ^ Church, Alonzo (1936a). "An unsolvable problem of elementary number theory". American Journal of Mathematics. 58 (2): 345–363. doi:10.2307/2371045. JSTOR 2371045. Reprinted in Davis 1965.
  8. ^ Church, Alonzo (1936b). "A note on the Entscheidungsproblem". Journal of Symbolic Logic. 1 (1): 40–41. doi:10.2307/2269326. JSTOR 2269326. S2CID 42323521. Reprinted in Davis 1965.
  9. ^ a b Turing, Alan Mathison (1937) [1936]. "On computable numbers, with an application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. 2. 42 (1): 230–265. doi:10.1112/plms/s2-42.1.230. S2CID 73712. Turing, Alan Mathison (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction" (PDF). Proceedings of the London Mathematical Society. 2. 43 (1): 544–546. doi:10.1112/plms/s2-43.6.544. Archived (PDF) from the original on 2022-07-18. Retrieved 2022-08-08. Reprinted in Davis 1965.
  10. ^ Cutland, Nigel J. (1980). Computability, An introduction to recursive function theory. Cambridge University Press. ISBN 0-521-29465-7.
  11. ^ a b Soare, Robert Irving (1987). Recursively Enumerable Sets and Degrees. Perspectives in Mathematical Logic. Springer-Verlag. ISBN 0-387-15299-7.
  12. ^ a b Soare, Robert Irving (1996). "Computability and recursion" (PDF). Bulletin of Symbolic Logic. 2 (3): 284–321. doi:10.2307/420992. JSTOR 420992. S2CID 5894394.
  13. ^ Turing, Alan Mathison (1939). "Systems of logic based on ordinals". Proceedings of the London Mathematical Society. 2. 45 (1): 161–228. doi:10.1112/plms/s2-45.1.161. hdl:21.11116/0000-0001-91CE-3. Reprinted in Davis 1965.
  14. ^ a b c Post, Emil Leon (1944). "Recursively enumerable sets of positive integers and their decision problems". Bulletin of the American Mathematical Society. 50 (5): 284–316. doi:10.1090/S0002-9904-1944-08111-1. MR 0010514. Reprinted in Davis 1965.
  15. ^ Shore, Richard Arnold; Slaman, Theodore Allen (1999). "Defining the Turing Jump". Mathematical Research Letters. 6 (6): 711–722. doi:10.4310/mrl.1999.v6.n6.a10. ISSN 1073-2780. MR 1739227.
  16. ^ a b Ambos-Spies, Klaus; Fejer, Peter A. (2014). "Degrees of unsolvability" (PDF). In Siekmann, Jörg H. (ed.). Computational Logic. Handbook of the History of Logic. Vol. 9. Amsterdam: Elsevier/North-Holland. pp. 443–494. doi:10.1016/B978-0-444-51624-4.50010-1. ISBN 978-0-444-51624-4. MR 3362163. Archived from the original (PDF) on 2013-04-20.
  17. ^ a b Simpson, Steven George (1999). Subsystems of Second Order Arithmetic. Springer-Verlag. ISBN 3-540-64882-8.
  18. ^ Harrington, Leo Anthony; Soare, Robert Irving (1991). "Post's Program and incomplete recursively enumerable sets". Proceedings of the National Academy of Sciences USA. 88 (22): 10242–10246. Bibcode:1991PNAS...8810242H. doi:10.1073/pnas.88.22.10242. PMC 52904. PMID 11607241.
  19. ^ Soare, Robert Irving (1974). "Automorphisms of the lattice of recursively enumerable sets, Part I: Maximal sets". Annals of Mathematics. 100 (1): 80–120. doi:10.2307/1970842. JSTOR 1970842.
  20. ^ Slaman, Theodore Allen; Woodin, William Hugh (1986). "Definability in the Turing degrees". Illinois Journal of Mathematics. 30 (2): 320–334. doi:10.1215/ijm/1256044641. MR 0840131.
  21. ^ Sacks, Gerald Enoch (1990). Higher Recursion Theory. Springer-Verlag. ISBN 3-540-19305-7.
  22. ^ Orponen, Pekka (1997). "A Survey of Continuous-Time Computation Theory". Advances in Algorithms, Languages, and Complexity. pp. 209–224. CiteSeerX 10.1.1.53.1991. doi:10.1007/978-1-4613-3394-4_11. ISBN 978-1-4613-3396-8.
  23. ^ Moore, Cris (1996). "Recursion theory on the reals and continuous-time computation". Theoretical Computer Science. 162 (1): 23–44. CiteSeerX 10.1.1.6.5519. doi:10.1016/0304-3975(95)00248-0.
  24. ^ Fairtlough, Matt; Wainer, Stanley S. (1998). "Hierarchies of Provably Recursive Functions". In Buss, Samuel R. (ed.). Handbook of Proof Theory. Elsevier. pp. 149–208. ISBN 978-0-08-053318-6.
  25. ^ Fortnow, Lance Jeremy (2004-02-15). "Is it Recursive, Computable or Decidable?". Archived from the original on 2022-08-07. Retrieved 2018-03-22.
  26. ^ Simpson, Stephen George (1998-08-24). "What is computability theory?". FOM email list. Archived from the original on 2021-12-18. Retrieved 2006-01-09.
  27. ^ Friedman, Harvey (1998-08-28). "Renaming recursion theory". FOM email list. Archived from the original on 2022-03-01. Retrieved 2006-01-09.
  28. ^ Rogers, Hartley Jr. (1987). The Theory of Recursive Functions and Effective Computability (2nd ed.). MIT Press. ISBN 0-262-68052-1.
