Talk:Second-order logic
This article is rated B-class on Wikipedia's content assessment scale.
Kinds of second-order logic
The article should record the widespread existence, since Leon Henkin, of logics called second-order which are not like the intractable theory described in the article:
- Henkin-style second order logic admits a complete axiomatisation, and is usually axiomatised in a language with support for lambda-abstractions; this theory can be expressed in first-order logic;
- The intractable theory is the "set theory in sheep's clothing" that cannot be expressed in FOL.
I've no time to tackle this now, but hope to get around to it before too long. --- Charles Stewart 20:18, 22 July 2005 (UTC)
second order logic and russell's paradox
the article claims that restricting to first order logic was one of the ways they eliminated Russell's paradox. I am wondering if second order logics are more likely to contain Russell's paradox than a first order logic (which can also contain Russell's paradox). -Lethe | Talk 20:56, August 7, 2005 (UTC)
- Russell's paradox applies to neither first-order nor second-order logic, indeed, both these logics are consistent. As stated in the article, it applies to Frege's system, which is a kind of "mixed-order" logic. Don't be confused by the fact that it also applies to variants of naive set theory, which can be formulated on top of, say, first-order logic; this is not a problem of the logic, but of the theory. -- EJ 12:04, 18 August 2005 (UTC)
I've noticed this too. It seems that the claim
'After the discovery of Russell's paradox it was realized that something was wrong with his system. Eventually logicians found that restricting Frege's logic in various ways—to what is now called first-order logic—eliminated this problem: sets and properties cannot be quantified over in first-order-logic alone. The now-standard hierarchy of orders of logics dates from this time.'
is not correct (and is in any case unsourced). Would it make sense to modify this? In fact, I'm not sure that Russell's paradox really has anything to do with second order logic at all (apart from the fact that Frege's system happened to be in higher order logic, but this is not a direct connection). Should these claims be removed? 81.23.56.53 (talk) 10:39, 21 November 2008 (UTC) If something is incorrect, and no citation is given for it, we could call for a citation for it or edit/delete it giving reasons. --Philogo 13:11, 21 November 2008 (UTC)
Standard second order logic "blocks" the paradox by preventing predicates from occurring in subject position. In a second order logic that allows nominalization of predicates (as achieved, for instance, by Frege's comprehension principle) it's possible to construct a formula expressing the property R that any property X has when it is not a property of itself. Then it can be shown that R holds of R iff it does not. Basically, standard second order logic can be seen as introducing a syntactic restriction from second order logic with nominalized predicates precisely to avoid the expressibility of the Russell property. The construction of the Russell property in second order logic with nominalized predicates parallels the construction of the Russell set in naive set theory. It is correct to state that the Russell paradox motivated the study of more restrictive logics. One perspective on the rise of axiomatic set theory would be to say that the restriction to first order logic was motivated by a desire to minimize the threat of paradox emerging from the underlying logic. However, in order to have any theory at all you still need set abstraction principles, which is basically what the axioms of set theory give you. That is, set abstraction plays basically the role that Frege's multiple correlation thesis (expressed by his comprehension principle) played, with the paradox-blocking restrictions explicitly stated in the axioms, as opposed to the syntactic restrictions placed on higher-order logics like standard second order logic. 173.30.30.149 (talk) 12:58, 16 April 2009 (UTC)
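A minimal sketch of the construction described in the comment above, assuming a Frege-style system in which predicates may be nominalized (i.e. may occur in argument position) and a comprehension principle is available; R is an illustrative name for the Russell property:
<math>\exists R\,\forall X\,\big(R(X) \leftrightarrow \neg X(X)\big)</math> (a comprehension instance defining R)
<math>R(R) \leftrightarrow \neg R(R)</math> (instantiating X := R, which is possible only because R is allowed in argument position)
Standard second-order logic blocks the second step syntactically, since a predicate variable cannot itself fill an argument place of a predicate.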
second order logic and class theory
Another similar question: in the first order language of axiomatic set theory, one can view classes as abbreviations for first order statements about sets, in which case one may not quantify over
class variables. If one extends the language to allow quantification over classes, is that the same as moving to a second order language about sets? -Lethe | Talk 22:34, August 7, 2005 (UTC)
- Yes (more or less), if you understand "language" in purely syntactic way, as the set of well-formed formulas. No, if you intend to give the second-order quantifiers also their second-order semantics. That means, Gödel-Bernays or Kelley-Morse set theory is a first-order theory, it does not obey the rules of second-order logic. -- EJ 12:19, 18 August 2005 (UTC)
- Nonetheless, the term "second-order" is often used to apply to these sorts of theories; also, second-order arithmetic (Z_2) is standard terminology, and the terms second-order and higher-order are regularly used in type theories to refer to first-order-describable logics, e.g. Church's simple theory of types is often called higher-order logic, and is a little stronger than Z_2. The article really needs to take account of all this. --- Charles Stewart 19:57, 18 August 2005 (UTC)
- Yes, this is an unfortunate bit of terminology which further adds to the confusion. Second-order arithmetic is a two-sorted first-order theory. It will be tricky to incorporate this in the article without confusing everybody. Technically, it does not even belong here, as the article is about second-order logic, which is not involved in these "second-order" theories. But I'm not sure whether having a separate article on, say, second-order theory or second-order language would be helpful. -- EJ 10:14, 19 August 2005 (UTC)
Reverse mathematics
I fail to understand how reverse mathematics could be relevant to the claim that second order logic is needed for topology etc., whatever that means exactly. After all, reverse mathematics is mostly concerned with fragments of second order arithmetic, which - unfortunately, perhaps - is a multisorted first order theory. Is there something I'm missing in the work of Simpson, Friedman or others that would justify the claim in the article? Aatu 21:45, 13 December 2005 (UTC)
- The claims in that paragraph are very misleading: Simpson and the others use second-order arithmetic, which is a two-sorted first-order theory with quantifiers over numbers and sets of numbers, and one could, with not much difficulty but at the price of some syntactic nuisance, recast it as a single-sorted first-order theory, using a relation singsub(X,Y) which is true iff X is a singleton subset of Y. It is not second-order logic in the sense that is debated here. I'll change the paragraph. --- Charles Stewart 16:33, 15 December 2005 (UTC)
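For readers following this thread, the sense in which second-order arithmetic is a two-sorted first-order theory can be illustrated by its comprehension scheme, which is just a first-order axiom scheme over the two sorts (number variables n and set variables X), with one instance for each formula φ in which X does not occur free:
<math>\exists X\,\forall n\,\big(n \in X \leftrightarrow \varphi(n)\big)</math>
Nothing in this presentation invokes full second-order semantics; the set variables are simply a second sort of first-order variable.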
terrible sentence
Hello, can anyone understand the meaning of the first paragraph? I think two sentences are missing.
JPLR
Semantics
I think it would be nice to add a section describing the various semantics. Right now there is a hint about Henkin semantics, but it is very vague. I will write the section soon unless people object here. - CMummert 11:37, 26 April 2006 (UTC)
Ambiguous sentence
The article currently states the following: "But monadic second-order logic, restricted to one-place predicates, is not only complete and consistent but decidable--that is, a proof of every true theorem is not only possible but determinately accessible by a mechanical method."
As far as I understand, it is only the quantification which is limited to one-place predicates in MSO. Otherwise, n-place predicates also occur in MSO (with n > 1). Can someone please confirm before thinking of correcting this?
Thanks.
changes needed to beginning part
The first part of the article should be broken into a short intro paragraph and a new section on the definition. I think that the special semantics of second order logic need to be emphasized more. The following sentence is just not true:
- That is, the following expression: ∃P P(x) is not a sentence of first-order logic.
What is true is (something like): the meaning of the sentence assigned by the semantics of second order logic is not expressible in first order logic. CMummert 15:25, 13 June 2006 (UTC)
A nice article ... but Holy Smokes! Not a single reference?
To all who have toiled in this soil:
- This is a very nicely-written, smoothly flowing article, but ohmygod where are the references?!! Us lesser mortals need more (lower-level, "first-order" <== writ ironic) information.
wvbaileyWvbailey 21:45, 17 September 2006 (UTC)
- I'm glad you think this article is clear, but I am planning significant changes. Some of these changes are indicated in higher comments of mine on this page. One part of the changes is to add references; I have been looking into which things are appropriate. Any suggestions would be welcome. CMummert 23:38, 17 September 2006 (UTC)
Finiteness and countability
I think that you should state that the codomain of the functions considered is the same as the domain. Leocat 17 Sept 2006
Dumbed down intro?
I love to see detailed technical articles on wikipedia, but I can't make any sense of the first paragraph, even though I have a functional understanding of first-order logic. It would be nice if there were one or two more generalized and dumbed-down introduction sentences before launching into the highly technical explanation. --24.200.34.209 21:52, 27 October 2006 (UTC)
- Thanks for the feedback. The entire article needs revision, but it has to wait until somebody actually does it. The collection of editors who know about the topic and have a lot of spare time is not so large. I have plans to do it sometime, but I am working on other articles right now. CMummert 22:51, 27 October 2006 (UTC)
- The Stanford Encyclopedia of Philosophy has a much more comprehensible article on the subject. --Brian Josephson (talk) 10:58, 10 October 2014 (UTC)
Claims about monadic second order logic
I am moving the following from the article to here.
- As George Boolos has pointed out, though, this incompleteness enters only with polyadic second-order logic: logic quantifying over n-place predicates, for n > 1. But monadic second-order logic, restricted to one-place predicates, is not only complete and consistent but decidable--that is, a proof of every true theorem is not only possible but determinately accessible by a mechanical method. In this respect, monadic second-order logic fares better than polyadic first-order logic: monadic second-order logic is complete, consistent and decidable, but polyadic first-order logic, though consistent and complete, is no longer decidable (See halting problem).
The monadic-second-order theory of arithmetic is the same as the full second-order theory, and is not decidable. Boolos probably made a correct claim which has been vaguely reprinted in the paragraph above. Until a reference is found, the claim should be kept off the main page. CMummert 23:55, 21 November 2006 (UTC)
On the "extensions" of second-order logic
"In mathematical logic, second-order logic is an extension of first-order logic, ... Second-order logic is in turn extended by type theory." Hmm... so second-order logic is an extension of both first-order logic and type theory? First-order logic + type theory = second-order logic? User:M K Lai 20 December 2006
- Second order logic extends first order logic. Type theory extends second order logic (and thus second order logic is extended by type theory). You may be misreading the article. CMummert 02:11, 21 December 2006 (UTC)
- The meaning of the term "extends" could be better explained. Second-order logics are actually less expressive than FOL and there is nothing in a SOL that cannot be expressed in FOL. It is the constraints the SOL provides (both formal and through its representation) that provide the additional expressiveness. See the first chapter (I think, I don't have the book to hand) of John Sowa's book on Knowledge Representation. This allows one to think of FOL as defining a design space and the creation of SOLs as an exploration of that design space. The utility of this is the application of various search algorithms to the exploration of the FOL space to devise new SOLs for various purposes. A SOL is a restriction on FOL with a (more or less) expressive symbol system that makes it easier to think through specific problems - I am beginning to repeat myself. Steven Forth 12.151.151.3 18:59, 21 February 2007 (UTC)
- Can you explain in more detail what you mean by "nothing in a SOL that can not be expressed in FOL"? The article explains how various statements, such as the "the universe is countable" or "the continuum hypothesis holds", can be expressed in second-order logic with second-order semantics while they cannot be expressed with the same meaning under first order semantics. CMummert · talk 22:50, 21 February 2007 (UTC)
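A standard illustration of the kind of extra expressive power being referred to here (an example not taken from the article's wording): with full second-order semantics, Dedekind infinity of the domain is expressible by a single sentence, something no single first-order sentence can do, by the compactness theorem:
<math>\exists f\,\Big(\forall x\,\forall y\,\big(f(x)=f(y)\rightarrow x=y\big)\;\wedge\;\exists z\,\forall x\,\big(f(x)\neq z\big)\Big)</math>
Under full semantics the function variable f ranges over all functions on the domain, so (assuming the axiom of choice in the metatheory) the sentence is true in exactly the infinite structures; under Henkin semantics this is no longer guaranteed.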
- Languages should be defined in terms of their model-theoretic properties, such as compactness (in many of its various forms), Lowenheim-Skolem properties, interpolation, etc. Given such definitions of languages, it is impossible to e.g. categorically express the natural numbers in a first-order language. While it is true that certain statements such as "the CH holds" are expressible in the language of first-order set theory under one sense of 'express', they are not under another sense; viz., the same sense as regards the categoricity of second-order arithmetic.
- Likewise, if you take a first-order language and extend it with a model-theoretic quantifier "there are infinitely many", then you no longer have a first-order language, just because your variables range only over individuals. If you allow simultaneous quantification over infinitely many variables in a single formula, then again, you no longer have a first-order language, since the model-theory of this language exactly coincides with that of second-order languages. If you partially order first-order quantifiers, you no longer have a first-order language. The moral is that it doesn't matter what your variables range over, there is a precise definition of the order of the language specified in terms of (classes of) models and the properties holding between them. In this sense, it is quite clear that SOL extends FOL. Nortexoid 23:17, 21 February 2007 (UTC)
- Was the last response intended for me or for S. Forth? In the ordinary contemporary usage of the term "language", it is a syntactic rather than semantic concept. Thus there is no difference between the language of second-order arithmetic using first-order semantics and the language of second-order arithmetic using second-order semantics, because the set of syntactically valid formulas is identical for each of these systems. Nevertheless the second order semantics cause the language to be more expressive. CMummert · talk 00:10, 22 February 2007 (UTC)
- It was intended for S. Forth, but regarding your comment, I'm having trouble making sense of it. Do you mean by "FO semantics" "Henkin semantics", and by "SO semantics" "full or standard semantics"? If you check the literature, the vast majority of logicians do not consider languages that do not satisfy a class of model-theoretic properties first-order, even if they have all and only the same grammatical categories as first-order languages. In fact, it seems a rather crude way of categorizing languages according to e.g. whatever the variables range over (e.g. individuals rather than sets or classes). Also, what does "syntactically valid" formulas mean? Do you mean "deducible" ("provable") formulas, or do you mean "deducible formulas that are valid with respect to a certain semantics"? It's a mystery at this point.
- There are various notions of "expressiveness". E.g., there is a sentence in the language of first-order set theory that "expresses" there are denumerably many P in the standard interpretation of set-theory, but it does not "express" there are denumerably many P in the sense that each of the members of its class of models is denumerable, because there are nonstandard models of first-order set theory. Hence, people say that no first-order language can express 'there are infinitely many P', a result of its having the (upward) Lowenheim-Skolem property. Nortexoid 01:05, 22 February 2007 (UTC)
- When I said "syntactically valid formula" I meant the same things as a "well formed formula" - it is a finite sequence of symbols, built up out of atomic formulas using logical connectives and quantifiers. So is syntactically valid and is not. Yes, when I say "second order semantics" I mean full semantics and by first-order semantics I mean Henkin semantics; I thought that would be clear from context. The fact that the language for second-order logic can be used with two different semantics is a perennial cause of confusion. CMummert · talk 01:41, 22 February 2007 (UTC)
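Since the thread above turns on it, here is a compressed statement of the distinction, following the usual textbook presentation (e.g. Shapiro): under full semantics a structure is just an ordinary first-order structure with domain M, and a k-ary relation variable ranges over all of <math>\mathcal{P}(M^k)</math>; under Henkin semantics a structure additionally specifies collections <math>D_k \subseteq \mathcal{P}(M^k)</math>, usually required to satisfy the comprehension axioms, over which the relation variables range. The set of well-formed formulas is the same either way, which is why one can say that only the semantics, and not the language, changes.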
re: Expressive power of second order logic
Paragraph 3 states 'For example, if the domain is the set of all real numbers, one can assert in first-order logic the existence of an additive inverse of each real number by writing ∀x ∃y (x + y = 0) but one needs second-order logic to assert the least-upper-bound property for sets of real numbers, which states that every bounded, nonempty set of real numbers has a supremum.'
The necessity of SOL asserted in paragraph 3 might be clarified. This specific point has been addressed by Herbert Enderton here: http://sci.tech-archive.net/Archive/sci.logic/2004-10/0224.html —Preceding unsigned comment added by 66.81.75.254 (talk) 21:28, 27 April 2008 (UTC)
Expressive Power
This section includes a (huge) sentence with the excerpt: "because the real numbers are up to isomorphism the unique ordered field that satisfies this property". Even if it is correct grammar or the thing that was meant, shouldn't it just not have "isomorphism", the noun, there? Like, unless I'm interpreting the concept wrongly, shouldn't it say "are isomorphic to"? I just feel like the noun there doesn't work. Could someone who does know rephrase or break up the sentence (or confirm that it's correct as is)? 110.175.9.69 (talk) 13:53, 24 June 2015 (UTC)
Power of the Existential Fragment
'Monadic second-order logic' was being linked to the article on 'Monadic predicate calculus', which doesn't really discuss *second order* so much, but it does give one an intuition of what 'monadic' would mean in the context of SOL; I narrowed the scope of what was linked to remove 'second-order' so that people going to the target article wouldn't expect to find SOL discussed there. Zero sharp 04:24, 14 October 2007 (UTC)
Explain formula idea
Next to a complicated formula like this one:
it would be good if there were an "Explain Formula" link that would show and hide a box like this:
This essentially says: "For all sets A, [ ... ]".
What do you think of this idea? —Egriffin (talk) 16:22, 31 January 2008 (UTC)
- The expectation is that the reader will do something like this to break apart the formula. If you think the formula is too long, it would be possible to split it up into pieces. But we generally avoid "popup" things in articles. If something is worth saying, it's worth saying all the time. — Carl (CBM · talk) 18:01, 31 January 2008 (UTC)
- I'm against popups too, but agree that the formula is quite daunting. I suggest a standard approach used in textbooks: break it down into pieces. So define blah to be an upper bound of A, and so then if an upper bound exists, then blah. (Do it in formulas, not words, although the if-then implication can be done in words.) (Professional logicians can read long sentences like this easily enough, and seem to prefer them; however, since this is an intro article, and not a proof, the one-big-long-formula style is inappropriate.) linas (talk) 18:39, 31 January 2008 (UTC)
- There is already a nice prose statement in the same paragraph; the point of displaying the formula is precisely to demonstrate that the words can be represented in the language of second-order logic. The formula itself could be broken into pieces, such as "Bounded(X)" to mean X has an upper bound, etc. — Carl (CBM · talk) 22:32, 31 January 2008 (UTC)
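For concreteness, a sketch of the "break it into pieces" suggestion, using ub(x, A) as illustrative shorthand (not notation from the article) for "x is an upper bound of A":
<math>\mathrm{ub}(x,A) := \forall y\,(y \in A \rightarrow y \le x)</math>
<math>\forall A\,\Big(\big(\exists w\,(w \in A) \wedge \exists x\,\mathrm{ub}(x,A)\big) \rightarrow \exists x\,\big(\mathrm{ub}(x,A) \wedge \forall z\,(\mathrm{ub}(z,A) \rightarrow x \le z)\big)\Big)</math>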
- I was suggesting using something like Template:ShowHide or Template:Seemore (not really a popup in that there's no overlapping). Anyway I gather the use of these for article text is "not well liked". —Egriffin (talk) 13:00, 1 February 2008 (UTC)
- I don't like the idea; it would only complicate matters further. As Carl pointed out, the meaning of the formula is already explained in the prose around it. If its complexity is a concern nevertheless, I think a better solution would be to use an example which allows for a simpler formula. For instance, switch from reals to natural numbers, and discuss the induction principle <math>\forall P\,\big((P(0) \wedge \forall n\,(P(n) \rightarrow P(n+1))) \rightarrow \forall n\,P(n)\big)</math>
- or something like that. Surely the lub property is not the only real-world example of a second-order formula there is. -- EJ (talk) 13:18, 1 February 2008 (UTC)
"whereas second-order logic uses variables that range over sets of individuals"
Should this not be "whereas second-order logic allows predicate variables that range over predicates"? --Philogo 21:07, 17 July 2008 (UTC)
- What exactly is meant by "individuals"? Predicates? Objects? There should be a definition or a link. 65.183.135.231 (talk) 02:07, 22 April 2009 (UTC)
- I added a link. — Carl (CBM · talk) 02:24, 22 April 2009 (UTC)
The lub formula is not correct
The formula does not say that x is an upper bound. It only says there is a number, x, that is smaller than or equal to every upper bound. That implies that the set of upper bounds is bounded from below, but it still does not say that any particular one of the upper bounds is a least upper bound. The structure of the formula should be
For all A,
if A is not empty, and A has an upper bound, then there is an x such that x is an upper bound, and for all y, if y is an upper bound of A, then x <= y
or, as a formula,
<math>\forall A\,\Big(\big(\exists x\,(x \in A) \wedge \exists x\,\forall y\,(y \in A \rightarrow y \le x)\big) \rightarrow \exists x\,\big(\forall y\,(y \in A \rightarrow y \le x) \wedge \forall z\,(\forall y\,(y \in A \rightarrow y \le z) \rightarrow x \le z)\big)\Big)</math>
(Here I stick to the convention used earlier, that the scope of a quantifier is limited to the inside of the parenthesis that follows it. I actually prefer the notation over . That is, I prefer a notation where the scope is to the end of the enclosing parenthesis.)
I concur with the discussion above that the example is more complex than it needs to be in the article.
As to the suggestion that such formulas should be explained using a two-column display, I don't think it helps much in understanding the topic of the article. The effect of being able to quantify over more than individuals can and should be grasped independently of the technical issue of learning to read expressions with lots of parentheses.
When such things are expressed in prose, the effect of the parentheses is usually achieved with the help of terminology. Inventing the concept of "upper bound" and using it in a larger formula, effectively wraps all parts of the expression defining "upper bound" in a pair of parentheses.
The purpose of formulas is usually to make something more apparent than it would be with prose. A formula like <math>A = l \cdot w</math> is more readable than "area equals the product of length and width". However, formulas in formal logic have a different purpose. They are restricted to a limited set of symbols precisely to make it easier to prove properties of the language itself. For this reason, the formulas in formal logic are far less readable than the prose versions (with higher-level concepts like "upper bound"). To train readers in comprehending large formulas in this restricted language is quite pointless. Cacadril (talk) 11:31, 11 December 2008 (UTC)
- The formula in the article is correct. Read it carefully, especially notice that the last connective is an equivalence, not an implication. — Emil J. 13:25, 11 December 2008 (UTC)
Defining equality
Equality can be defined by
<math>x = y \;:\leftrightarrow\; \forall z\,(z[x] \rightarrow z[y]).</math>
JRSpriggs (talk) 03:53, 18 June 2009 (UTC)
- With classical, full semantics, sure. But with full semantics there are better things to worry about than equality? — Carl (CBM · talk) 04:51, 18 June 2009 (UTC)
- It has the necessary properties even in intuitionistic logic. What do you mean by "better things to worry about"? JRSpriggs (talk) 05:31, 18 June 2009 (UTC)
- If one is already willing to quantify over every possible predicate, then the existence of particularly simple ones such as equality isn't much to worry about. In the non-classical case I am worried about what = is supposed to mean. Are you suggesting this ought to be in the article? — Carl (CBM · talk) 05:44, 18 June 2009 (UTC)
- Actually, I don't see immediately how to prove x = y from the assumption <math>\forall z\,(z[x] \rightarrow z[y])</math> in intuitionistic logic. — Carl (CBM · talk) 05:56, 18 June 2009 (UTC)
- In the calculus of constructions, equality is defined in terms of higher-order quantification over propositions, just as JRSpriggs said, from which Leibniz's rule can be inferred.
- As an aside, propositional equality in Martin-Loef's type theory allows both Leibniz's rule and the fact that equality is an equivalence relation to be derived from proof-theoretically dual rules governing the introduction and elimination of equality in an intuitionistic predicative type theory (i.e., with a formalisation of quantification over predicates & relations that is weaker than Pi-1-1). You need separate rules defining equality in this system, but there has been quite a bit of work done into means for creating inference rules automatically from definition of type constructors. — Charles Stewart (talk) 08:15, 18 June 2009 (UTC)
- Take z[u] to be the property u = x. We have x = x, i.e., z[x], hence z[y], i.e., y = x. — Emil J. 10:19, 18 June 2009 (UTC)
- I was thinking of settings in which the zs are forced to be effective but the equality relation is not effective. Of course there are many things that are called "intuitionistic logic". And of course if one is in a setting where <math>u = x</math> is a predicate (in u) for each x, then the implication goes through. — Carl (CBM · talk) 14:22, 18 June 2009 (UTC)
To CBM: One gets <math>x = x</math> because <math>\forall z\,(z[x] \rightarrow z[x])</math> is a theorem. The substitution property of equality is embodied by the definiens. These are the defining attributes of equality. It is also easy to show that it is an equivalence relation. Transitivity follows from the transitivity of implication. Symmetry follows from EmilJ's argument which I have taken the liberty of correcting. Yes, I wanted to include it in the article provided you-all agree. I do not understand why anyone would have reservations about including any specific predicate in the range of a quantifier which supposedly ranges over all predicates. JRSpriggs (talk) 21:04, 18 June 2009 (UTC)
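A compact version of the derivations sketched in this thread, under the definition <math>x = y :\leftrightarrow \forall z\,(z[x] \rightarrow z[y])</math> (purely illustrative):
- Reflexivity: <math>\forall z\,(z[x] \rightarrow z[x])</math> is a tautology, so <math>x = x</math>.
- Symmetry: assume <math>x = y</math>; instantiate <math>z[u] := (u = x)</math>; then <math>z[x]</math> holds (it is just <math>x = x</math>), hence <math>z[y]</math>, i.e. <math>y = x</math>. (As noted above, this step needs the predicate <math>u = x</math> to lie in the range of z.)
- Transitivity: from <math>\forall z\,(z[x] \rightarrow z[y])</math> and <math>\forall z\,(z[y] \rightarrow z[w])</math>, composing the implications gives <math>\forall z\,(z[x] \rightarrow z[w])</math>.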
- I don't have any objection to including the fact for classical second-order logic (which is what the article is about in the first place). Although I don't see it as being of great interest; can we give a motivation for why it might be important to someone?
- In intuitionistic higher-order arithmetic (e.g. HAω) things are more complex. First there is the distinction between intensional and extensional equality, and second there is the issue that there are well-known interpretations in which all functionals are continuous, or all functionals are effective. Handling all that would go well outside the scope of this article, which is only about classical logic at the moment. So I would prefer not to make any statement about intuitionistic systems. — Carl (CBM · talk) 21:35, 18 June 2009 (UTC)
- I do not see an article on HAω. Do we have one? JRSpriggs (talk) 08:22, 19 June 2009 (UTC)
- I assume Carl means the extension of HA with the quantifiers over the higher-order functions taken from PRAω, as defined in Gödel's Dialectica interpretation. — Charles Stewart (talk) 11:15, 19 June 2009 (UTC)
Classical type theory and higher-order logic
Farmed out from my reply beginning "I assume Carl means the extension of HA..." — Charles Stewart (talk) 07:41, 20 June 2009 (UTC)
- Yes, that's exactly what I mean. I do not think we have an article on it, since our coverage of proof theory and our coverage of constructive mathematics are both limited. I just created Primitive recursive functional last month, but the article on primitive recursive functions dates to 2001. I don't even know what title would be used; Higher-order intuitionistic arithmetic?
- I think that the issue here is that I have never really run into "higher-order logic" in the intuitionistic setting, probably because it's too classical of a notion. But I am more familiar with intuitionistic higher-type arithmetic. So I was using that as an example of an intuitionistic higher-order system, to give myself a reference point. Perhaps it can be argued this is not really "intuitionistic higher-order logic." In any case, since this article is about classical logic, too much digression to the constructive side isn't going to pay off. — Carl (CBM · talk) 12:58, 19 June 2009 (UTC)
- Isn't higher-order (beyond 3) logic pretty much a synonym for type theory? The gulf between classical and intuitionistic approaches doesn't seem so wide to me. We know that the intuitionistic system F is the right theory of functions to extend the Dialectica interpretation to second-order arithmetic, a fact which is missing from that article. Or are we talking about the set-theory-in-sheep's-clothing semantics, where I guess there is a dramatic chasm? — Charles Stewart (talk) 15:47, 19 June 2009 (UTC)
- In classical, full semantics, the definition given by JRSpriggs above goes beyond defining an "equality-like" relation, and actually defines the real equality relation. But full semantics (the ones that encompass a large amount of set theory) are required for that. This is exactly the situation with normal models discussed at first-order logic: if there are not enough predicates available then one may be unable to distinguish elements that are not truly equal (see the sketch after this comment).
- The main difference I think of between type theory and higher-order logic is that type theory (at least if one is only using pure types) is based on functionals instead of predicates. The important difference between classical and intuitionistic systems in this setting is that classical systems are essentially always studied with extensional equality, but intuitionistic systems are often studied with intensional equality.
- Here is my concern. Suppose I use intensional equality in intuitionistic type theory, and add axioms that make all functionals respect extensionality: e.g. for all F,f,g, if f and g are extensionally equal then F(f) and F(g) are extensionally equal. This system is satisfied by the ω-model whose higher-type elements are names for primitive recursive functionals. I do not see how one can define intensional equality in such a system. But this may not be the kind of system that was originally intended. — Carl (CBM · talk) 16:33, 19 June 2009 (UTC)
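To make the contrast concrete, here is a small brute-force sketch in Python (purely illustrative; the three-element domain and the restricted predicate family are invented for the example, and the restricted family is not claimed to be a genuine Henkin structure): with all subsets available, the Leibniz-style definition recovers real equality, while an impoverished family of predicates can fail to distinguish distinct elements.
<syntaxhighlight lang="python">
from itertools import combinations

def all_subsets(domain):
    """Full semantics over a finite domain: every subset is available."""
    return [set(c) for r in range(len(domain) + 1)
                   for c in combinations(domain, r)]

def leibniz_equal(x, y, predicates):
    """x = y defined as: for every available predicate P, P(x) -> P(y)."""
    return all((x not in P) or (y in P) for P in predicates)

domain = [0, 1, 2]

# With every subset available, the defined relation is real equality.
full = all_subsets(domain)
print([(x, y) for x in domain for y in domain if leibniz_equal(x, y, full)])
# -> [(0, 0), (1, 1), (2, 2)]

# With too few predicates available, 1 and 2 become indistinguishable.
restricted = [set(), {0}, {1, 2}, {0, 1, 2}]
print([(x, y) for x in domain for y in domain if leibniz_equal(x, y, restricted)])
# -> [(0, 0), (1, 1), (1, 2), (2, 1), (2, 2)]
</syntaxhighlight>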
- Ah, this discussion pleases me: my doctoral dissertation gives a type-theoretic account of first-order classical arithmetic based on intensional equality. An observation made variously by Tait and Coquand is that constructive inhabitation (and by extension, the possibility of non-degenerate intensional equality) can exist with either the axiom of choice, or PEM, but not both.
- JRSpriggs' definition is in the second-order language, so it doesn't care about the semantics. You are quite right that the Henkin semantics doesn't hit the equality nail on the head, but equally it doesn't generate false theorems, and can always be extended to cover whatever particular gaps you might need to be covered. From the point of view of a post-Gödelian, anti-Platonist philosophy, that is all one wants.
- Reduced distinguishability: why does that matter in an intuitionistic logic? Where there is no proof of equality and the semantics fails to have enough predicates to disprove equality, then there is a truth-value gap. — Charles Stewart (talk) 17:50, 19 June 2009 (UTC)
- I view second-order logic as a fundamentally semantic endeavor, because syntactically it is not distinguishable from multisorted first-order logic. The same syntactic definition of an equality-like operation would go through just as well without the words "second-order". So the key benefit I see to using second-order logic would be to define the true equality relation. This is how I initially read JRSpriggs' first comment above. Later I realized he might mean an equality-like relation that would work with Henkin models.
- In intuitionistic logic, distinguishability still matters, in its own way (the apartness relation on real numbers, for example, or Markov's principle). But intensional identity is taken for granted; if I have two objects, I know whether I explicitly defined them in the same way. So, from that point of view, it seems less interesting to try to define identity externally.
- The Tait/Coquand comment isn't something I have seen before; is there somewhere I can read about that? My knowledge of constructive mathematics is not at all complete. — Carl (CBM · talk) 21:58, 19 June 2009 (UTC)
- Tait's 1994 essay is collected in The provenance of pure reason (2005), and isn't summarised well in any online source I know of. Section 3.2 of Coquand's Computational Content of Classical Proofs (ps) shows that if you add AC and PEM to HAω, you can prove the existence of non-recursive functions. AC is mandated by the usual interpretation of the intuitionistic connectives. PEM is more problematic, since it is incompatible with the disjunction and existence properties, but it is perfectly compatible with the BHK interpretation of the other connectives. So separately, AC and PEM are not poison for constructive interpretation, whilst their combination is. Realizability interpretations can still be used to give computational interpretations of systems such as HAω+AC+PEM.
- Distinguishability: the point I tried to make is that imperfect distinguishability is not at all undesirable in intuitionistic semantics. Intensional identity is not about mathematical entities being presented identically: "2+2" and "the square root of 16" are distinct, intensionally-equal number presentations. There is an issue with the definition of equality JRSpriggs gave in intuitionistic logic, which is that in a system like the calculus of constructions it gives you the beta rules governing proof equivalence for proofs of equality but not the eta rules, but otherwise the defined equality works just the same in practice as the purpose-built rules that Martin-Lof gave.
- As I think you know, I have a problem with what you call full semantics: in essence this is saying that there is a maximal consistent set of formulas to interpret a given, consistent, second-order language in. But we have great freedom in how we complete the theorems to get the maximal sets. Talking in terms of models is more honest, to my way of thinking, than what seems to me a three-way confusion of syntax, semantics and metaphysics.
- As an aside, I note that we have no Mass problem article; mass problems were an important influence on how I understand maximal consistent sets. — Charles Stewart (talk) 08:30, 20 June 2009 (UTC)
- Thanks a lot for the reference. I think the widespread belief among mathematicians that there is a single well-defined universe of set theory translates naturally to a belief that there is a well-defined full semantics for second-order logic. One can be (rightly) suspicious of that belief, but it's hard to accept it for set theory only. On the other hand, it is also possible to view full second-order semantics as disquotational, rather than set-theoretic. I think that approach has more appeal from the viewpoint of constructive mathematics. But, like I explained above, I view second-order logic with Henkin semantics as "second-order" in name only. — Carl (CBM · talk) 12:02, 20 June 2009 (UTC)
- I guess the argument is running out of steam, but there are still a couple of points I want to make. I should emphasise that I don't have the same objection to Platonism applied to the study of particular families of mathematical structures, e.g., large cardinals, real number line, &c; my problem is its application to a pure logic: logic is supposed to be about exposing hidden assumptions and teasing concepts apart, not brushing them all together and sweeping them under the carpet. And the 'disquotational' theory has the advantage in number of important applications, starting with the (non-disquotational, nonconstructive, foundationally indispensable) case of second-order arithmetic. What does the full semantics have? Classical definition of the real number line is important, certainly, but does it benefit from being presented formally without a first-order interpretation, say in ZFC?
- Article relevance: wouldn't it be better to have the article be a more even-handed, application-focussed article dealing with both kinds of semantics side-by-side? — Charles Stewart (talk) 06:55, 21 June 2009 (UTC)
- The article does deal with both semantics side by side; see the "semantics" section. When full semantics are discussed in other places, this is usually explicitly mentioned. However, we need to keep in mind that when most authors write "second-order logic" they mean full semantics; these semantics are the reason people are interested in second-order logic in the first place. The section on "history and disputed value" does need to be rewritten to be more clear and thorough; I never got to it, which is why the maintenance tag is still there. — Carl (CBM · talk) 14:54, 21 June 2009 (UTC)
- Charles, the SEP article on higher-order (classical) logic shows that order 3 and higher is equivalent to order 2.[1]. That seems worth describing in this article. The Dialectica interpretation (Gödel's system T if I have the terminology correct) is different and I think there is a theorem cited in the Avigad/Feferman reference of Carl's article, that the functions in that system are the same as the provably total functions of second-order arithmetic. 66.127.53.204 (talk) 10:47, 14 October 2009 (UTC)
- Higher-order logics are interpretable in second-order logic, but they are not equivalent to second-order logic, since the domain changes. For example, there are sets of natural numbers that are third-order definable in the standard model of arithmetic, but not second-order definable. — Emil J. 10:43, 15 October 2009 (UTC)
Popularity contest
I searched for ["second-order logic"] on Google Scholar, and made a not-very-rigorous analysis of the results, trying to see what semantics were being used. Of the 20 results, I judged that:
- 10 were talking about monadic second-order logic, with some (the papers by Courcelle) mentioning the full semantics, but most talking about model checking, which is generally restricted to decidable, and mostly finitary, problems.
- 4 appeared to be talking about characterising complexity classes in terms of second-order semantics, which generally means model checking.
- 3 appeared to be talking about the full semantics for the full theory of second-order logic; one or two of these might have been using the semantics informally;
- 1 talked of both full and Henkin semantics side by side;
- 1 was talking about the Henkin semantics only;
- 1 was talking about second-order language without reference to any semantics.
Since for finitary structures Henkin and full semantics agree, I was a little surprised by the results. They are, as expected, skewed towards computer science applications, but I was surprised by the extent to which these were tilted towards decision problems. —Preceding unsigned comment added by Chalst (talk • contribs) 11:35, 22 June 2009
- A google books search turns up some additional resources. Many of them will involve Shapiro or Boolos, who each have advocated the study of second-order logic. But even others (I found a book by Landman on Google books) will carefully explain that it is the full semantics that make second-order logic differ from first-order logic, and in particular those semantics are what make the completeness, compactness, and Löwenheim–Skolem theorems fail. — Carl (CBM · talk) 12:51, 22 June 2009 (UTC)
- P.S. I also noticed on Google books that JRSpriggs' observation about defining identity is on p. 171 of Logic, Language, and Meaning by L. T. F. Gamut. — Carl (CBM · talk) 12:51, 22 June 2009 (UTC)
Please don't use Unicode symbols
On my computer I cannot read "∀P ∀x (x ∈ P ∨ x ∉ P)" at the beginning of the article. That's not very helpful: five squares instead of symbols. And I've got a quite new computer. The English Wikipedia is also used in Africa, India or wherever, not only in the UK and the USA. Please use <math> ... </math>. Lipedia (talk) 15:38, 28 July 2009 (UTC)
- If you are unable to read the formula which you quoted, it is equivalent to this:
- <math>\forall P\, \forall x\, (x \in P \lor x \notin P)</math>. JRSpriggs (talk) 23:49, 29 July 2009 (UTC)
- Thanks. I'll replace it there. Lipedia (talk) 18:34, 30 July 2009 (UTC)
Could someone cite original paper
Could someone cite the original paper with the proof that EXPTIME is the set of languages expressible by second-order logic with an added least fixed point operator (as in the case of NP)? Letac (talk) 10:15, 15 March 2010 (UTC)
What does the "second" in second-order logic refer to?
I'm confused about what defines a "second" order logic. It seems like first-order logic is so called because there is one "type" of quantifier, over "individuals" in the domain of discourse, and second-order logic allows a "second" type of quantifier. The article suggests that this second quantifier is over sets of individuals over the domain of discourse, but the article then goes on to say that second order logic also allows a sort of variable that ranges over all k-ary relations on the individuals and a sort of variable that ranges over functions that take k elements of the domain and return a single element of the domain. Doesn't that imply four different types of quantifiers in total? Or does second order logic only allow two to be used for a given discussion? I'm not a logician but I have an extensive mathematical background and I want to learn formal logic. I am probably representative of the readers who will be interested in this article. As such, I don't think this article adequately explains what distinguishes second-order logic from first-order logic. All I gathered was that there is at least one additional "type" of quantifier from first-order logic. Jason Quinn (talk) 16:11, 19 March 2010 (UTC)
- As the lede says, one difference is that second-order logic has variables to quantify over subsets of the domain of discourse. There are also additional types of quantifiers, as described in the Syntax section. Every type of variable has two quantifiers, universal and existential.
- The real difference is not in the syntax, though (this can be faked in multi-sorted first-order logic). The difference is in the semantics, as discussed in the Semantics section. — Carl (CBM · talk) 01:54, 20 March 2010 (UTC)
- Thanks for the reply but I still don't understand the reason for the name. I understood from the article that second-order logic has variables to quantify over subsets of the domain of discourse. At first, I thought, "Ah ha!" that's the additional thing that causes the difference between first and second order logic; but, those additional quantifiers you and the article mention are the source of my confusion because they seem to imply that the word "second" in second order is not just referring to an additional type of variable (because there are now at least 4). Plus, while allowing other semantics may be a big difference between first and second order logic, I fail to see how it accounts for the name either. Jason Quinn (talk) 17:30, 20 March 2010 (UTC)
- I'm not sure what you mean. The name is what it is; it's very traditional. If you understand what's going on, then you just need to attach "second order logic" to that concept. I'm certain the name was chosen because second-order logic permits quantification over subsets, while first-order permits quantification over only elements. Third-order logic, in turn, permits quantification over sets of subsets, and over other objects of similar type. For each n, n-th order logic can be viewed as a fragment of type theory.
- The thing is that all the variables that are allowed in second-order logic are of a few particular forms:
- Variables for individual elements; these are first-order variables.
- Variables for sets of elements, and more generally, for each k there are variables for k-ary relations. Sets are unary relations. All these are second-order variables.
- For each k, there are variables for functions that take k elements as input and return a single element as output. All these are also second-order variables.
- All of these are very similar in terms of type theory. Also, if one works in arithmetic, it is only necessary to add variables for subsets, since all the others can be coded by subsets. Or one can just add variables for unary functions. But for arbitrary theories, the sort of coding that can be done in arithmetic is not possible, and so the most general second-order logic includes all these types of variables.
- I do not know a reference for whoever was the first person to use "second-order logic" as a term, so I can't be more specific about why that term was chosen. — Carl (CBM · talk) 02:22, 21 March 2010 (UTC)
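A schematic summary of the variable sorts just listed, with D standing for the domain of discourse (an illustration, not a quotation from any source):
- <math>\exists x\,\varphi(x)</math>: a first-order variable, ranging over elements of <math>D</math>;
- <math>\exists X\,\varphi(X)</math>: a second-order relation variable; a k-ary <math>X</math> ranges over subsets of <math>D^k</math> (sets are the case k = 1);
- <math>\exists f\,\varphi(f)</math>: a second-order function variable, ranging over functions from <math>D^k</math> to <math>D</math>.
Third-order logic would add variables one type higher again, e.g. ranging over subsets of <math>\mathcal{P}(D)</math>, and so on.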
- An expansion in terms of sets (zeroth order), plus sets of sets (first order), plus sets of sets of sets (second order) and so forth makes sense to me regarding the origin of the naming scheme. What still confuses me are those k-ary variables and k-element function variables. Why are they admitted into second-order logic? Are they a necessary component of second-order logic, or is second-order logic incomplete without them? Are there other "non-set" variables in second order logic or are they the only ones? What new ones appear in third order logic? I'm starting to think that perhaps they are needed such that one can, for example, allow individual elements of a subset variable to vary; but it's not clear to me that they must be used for a consistent logic. Thank you for your time and patience, Carl. I am not trying to be dense on purpose. Jason Quinn (talk) 15:36, 22 March 2010 (UTC)
- This page Second-order and Higher-order Logic defines second order logic as, "an extension of first-order logic where, in addition to quantifiers such as 'for every object (in the universe of discourse),' one has quantifiers such as 'for every property of objects (in the universe of discourse).'". It then goes on to say that in still higher order logic, one can "add 'super-predicate' symbols, which take as arguments both individual symbols (either variables or constants) and predicate symbols. And then we can allow quantification over super-predicate symbols. And then we can keep going further.". It's still all rather confusing to the uninitiated. It goes on to warn that "in the literature [there are] two different ways of counting the order. According to one scheme, third-order logic allows super-predicate symbols to occur free, and fourth-order logic allows them to be quantified. According to the other scheme, third-order logic already allows quantification of super-predicate symbols." Jason Quinn (talk) 15:43, 22 March 2010 (UTC)
- This Second Order Logic Notes PDF gives an example that seems to help and perhaps a similar item would be good for the article. It shows an example of a statement that cannot be given in first-order logic. For example, if C stands for "is a cube", then <math>\exists x\, C(x)</math> is a perfectly valid statement in first order logic; BUT if P is some unspecified property, you cannot write <math>\exists P\, P(x)</math> in first order logic, but this is valid in second-order logic. Jason Quinn (talk) 15:53, 22 March 2010 (UTC)
Distinct page for descriptive complexity
A more detailed explanation of "Applications to complexity" is given at SO (complexity); do you think we should keep this section? I guess just putting a link to the page for SO over finite models, SO (complexity), would be enough. No? —Preceding unsigned comment added by Arthur MILCHIOR (talk • contribs) 23:58, 11 July 2010 (UTC)
- The main difficulty with the text here is that it is very imprecise. I think that some mention of the applications of second-order logic to complexity is relevant here, for completeness' sake. So, I have always left the current, dodgy text in place with the hope that someone will eventually fix it. Please take the following as a set of suggestions; any improvements at all would be welcome.
- (1) The section of this article titled "Power of the existential fragment" comes across (to my ear as a mathematical logician) as almost gobbledygook. For example, the sentence
- Over (possibly infinite) words w ∈ Σ*, every MSO formula can be converted into a deterministic finite state machine.
- leaves out the main point: a deterministic machine with what property?
- (2) The article SO (complexity) is also going to be difficult for novices, although it's in a much earlier stage of development. For example, when that article says
- In descriptive complexity we can see that it is equal to polynomial hierarchy, extended with some operators it can be exactly equals to some well-known complexity class.
- it does not actually mean "equal". But I am not familiar enough with the area to state off the top of my head exactly what it does mean.
- (3) Also, that article does not appear to mention the crucial fact that the semantics have to be limited to finite structures in order to put things into the realm of complexity theory. But how exactly are the semantics limited? That is a key point that is not discussed in either article.
- — Carl (CBM · talk) 00:29, 12 July 2010 (UTC)
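To illustrate point (3) concretely: over a finite structure, full second-order semantics is unproblematic, because the second-order quantifiers range over only finitely many sets, so a sentence can be evaluated by brute force. The sketch below (in Python, with an invented example graph) evaluates the existential second-order sentence "there exist sets R, G, B covering the vertices such that no edge joins two vertices in the same set", i.e. 3-colourability, the textbook NP property behind Fagin's theorem.
<syntaxhighlight lang="python">
from itertools import product

def three_colourable(vertices, edges):
    """Brute-force evaluation of the existential second-order sentence
    'there exist sets R, G, B partitioning the vertices with no
    monochromatic edge' on a finite graph (only finitely many candidate
    sets exist, so full semantics is harmless here)."""
    for colouring in product(range(3), repeat=len(vertices)):
        colour = dict(zip(vertices, colouring))
        if all(colour[u] != colour[v] for (u, v) in edges):
            return True
    return False

# A 4-cycle is 3-colourable; the complete graph K4 is not.
print(three_colourable([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))   # True
print(three_colourable([0, 1, 2, 3],
                       [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # False
</syntaxhighlight>
The exponential search over all candidate sets is in keeping with Fagin's theorem: the properties of finite structures definable by existential second-order sentences are exactly the NP properties.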
- (1) Working on infinite words makes no sense to us. I don't understand what you don't understand.
- (3) When I wrote it, I wrote that second order is an extension of first order, and that every definition one may need is on the FO (complexity) page. I don't think there is any point in copy-pasting the very same definitions onto both pages. I wrote it in the "Definition" section; maybe I should write it down in the introduction. What do you think?
- (2) I may want to rewrite it as: "the languages recognised by those classes of second-order sentences are well-known classes of computational complexity."
- Arthur MILCHIOR (talk) 12:30, 12 July 2010 (UTC)
It seems EmilJ modified the subsection and put a link. I guess that's OK and enough; I doubt there is more need to really change this subsection anymore. As you can see on SO (complexity), the descriptive complexity version of SO takes a lot of space to explain, so I guess that it would be useless to try to put all of this information on this "second order" page. —Preceding unsigned comment added by Arthur MILCHIOR (talk • contribs) 13:42, 12 July 2010 (UTC)
Completeness
Is there an example of a universally valid (in standard semantics), but unprovable, second-order formula?
Eugepros (talk) 08:59, 29 June 2011 (UTC)
- You have to fix the proof system first. As the article says, there is a finite second-order theory T whose only model is the real numbers if the continuum hypothesis holds and which has no model if the continuum hypothesis does not hold. So either the corresponding sentence or its negation is a validity that is unprovable in our usual deductive system. In general there is no complete and effective deductive system for full second-order semantics, because if there was then we could add it to the axioms for second-order Peano arithmetic to get an effective, complete axiomatization of arithmetic, which is impossible by the incompleteness theorem. — Carl (CBM · talk) 11:36, 29 June 2011 (UTC)
OK, let's fix the proof system. It might be "the weakest deductive system", described in the article. Concerning theory T: Did I understand it right that it is the system with additional (non-logical) axioms? I asked about "universal validity" and meant that we don't have non-logical axioms at our disposal. If you meant the sentence "The theory T has a model", then why can't we immediately consider the sentence "The continuum hypothesis holds" instead? It's surely unprovable, but I have doubts that it's universally valid...
Eugepros (talk) 07:06, 30 June 2011 (UTC)
- The following can be expressed as a sentence of second-order logic:
- For all X, a, b, f, g, R such that X is an uncountable set, a and b are elements of X, f and g are functions from X to X, R is a linear order relation on X, and (X,a,b,f,g,R) is a complete ordered field: for every uncountable set Y there is a surjection from Y to X.
- This sentence is logically valid in full second-order semantics if the continuum hypothesis holds and is not universally valid if the continuum hypothesis does not hold. If you change the conclusion to say "there is an uncountable set Y such that there is no surjection from Y to X" then that alternate sentence is a logical validity if CH fails. But neither of these will be provable in the "weakest deduction system" described in the article, or even in the stronger augmented deduction system, because those systems are all weaker than ZFC and neither of the sentences I have mentioned here is provable in ZFC. — Carl (CBM · talk) 11:42, 30 June 2011 (UTC)
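A schematic rendering of the sentence just described (heavily abbreviated: Unc, COF and Surj stand for second-order formulas saying "is uncountable", "is a complete ordered field with the displayed constants, operations and order" and "is a surjection", all of which can be written out in the second-order language):
<math>\forall X\,\forall a\,\forall b\,\forall f\,\forall g\,\forall R\;\Big(\big(\mathrm{Unc}(X)\wedge\mathrm{COF}(X,a,b,f,g,R)\big)\rightarrow\forall Y\,\big(\mathrm{Unc}(Y)\rightarrow\exists h\,\mathrm{Surj}(h,Y,X)\big)\Big)</math>
Under full semantics the variables X, Y, f, g, R and h range over all subsets, functions and relations on the domain, which is why the sentence's status as a validity depends on whether the continuum hypothesis actually holds rather than on a choice of model.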
I understand that your sentence (let's call it S) is the logical equivalent of CH. But I don't see where the example of a universally valid unprovable sentence is. As far as I can see, neither S nor CH is universally valid. The sentence is (obviously) universally valid, but I guess it's provable.
Eugepros (talk) 07:59, 1 July 2011 (UTC)
- In any structure for full second-order semantics, if the domain is uncountable, then there are indeed many (X,a,b,f,g,R) that are complete ordered fields, and they are all isomorphic to the real line, so if CH holds then the cardinality of X is <math>\aleph_1</math>, so every uncountable set admits a surjection onto X. Thus that sentence is a logical validity in full second-order semantics if CH holds. The sentence holds in every uncountable model for the reason I just said, and in every countable model because the 'if' is vacuously true. — Carl (CBM · talk) 10:57, 1 July 2011 (UTC)
I don't understand a "logical validity" constrained by a precondition like "CH holds". In my opinion an assertion can be logically valid only if it's true in ANY conceivable context. If we find the interpretation in which S is false ("CH doesn't hold"), then S is NOT logically valid. What is wrong with my opinion?
Eugepros (talk) 12:37, 1 July 2011 (UTC)
- "Logical validity" means "true in every interpretation", and an interpretation for a signature in full second-order semantics consists of a domain of discourse and interpretations of the symbols in the signature. If CH is true then the sentence I mentioned holds in every interpretation, and so it is a logical validity. Look at the contrapositive: the only way that sentence can fail is for there to be an interpretation that has a copy of the real line which does not have cardinality <math>\aleph_1</math>. That would mean the real line does not have cardinality <math>\aleph_1</math>, which would mean that CH was not true. We cannot change the reality of whether CH holds or not, and if CH holds we cannot find an interpretation in full second-order semantics in which CH does not hold. CH is independent of ZFC but it is not independent of full second-order semantics. — Carl (CBM · talk) 12:45, 1 July 2011 (UTC)
Peano proved that full second-order induction is categorical, i.e., all of its models are isomorphic to the standard natural numbers. So full second-order induction proves all propositions that hold for the standard natural numbers. Indeed, this is the very definition of what it means to hold for the standard natural numbers, because there is no other independent characterization. 71.198.220.76 (talk) 17:39, 2 July 2011 (UTC)
- Peano's axioms with full second-order induction are categorical in second-order arithmetic, but they don't give a complete proof system. The incompleteness theorems show there can be no effective deductive system that allows us to prove all the second-order consequences of the Peano axioms. The key point is that although full second-order semantics is stronger than first-order semantics, proof in second-order logic is no different than proof in first-order logic. — Carl (CBM · talk) 23:10, 2 July 2011 (UTC)
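- To fill in the step: by categoricity, an arithmetical sentence θ is true in the standard natural numbers exactly when "every model of the second-order Peano axioms satisfies θ" is a validity of full second-order semantics, so a complete effective deductive system would make arithmetical truth computably enumerable, which the incompleteness theorems rule out.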
- I think that the point was that full second-order induction provides all the information that we can possibly have. It may be the case that this information is incomplete in the sense that there is a proposition that can be neither proved nor disproved. 71.198.220.76 (talk) 11:41, 4 July 2011 (UTC)
- I agree. I responded to your claim "So full second-order induction proves all propositions that hold for the standard natural numbers.", which is not correct, only because that claim might mislead someone who reads this page at a later time. — Carl (CBM · talk) 13:27, 4 July 2011 (UTC)
Carl, I see that you had to use a conditional sentence: "If CH is true then the sentence I mentioned holds in every interpretation". And what is the difference, for example, from a conditional sentence like this: "If the addition operator is commutative then the sentence holds in every interpretation"?
Concerning "We cannot change the reality of whether CH holds or not": I don't understand where you could find any "reality" here. As far as I can see, there is an abstract logical concept of "set" - with or without CH. It's only a question of our interpretation: whether to believe in the existence of an uncountable cardinality less than the continuum or not.
Concerning "CH is independent of ZFC but it is not independent of full second-order semantics": I don't understand what this "full second-order semantics" you talk about is. Where in this "semantics" can we find the assertion "CH holds" (or its negation)? As far as I can see, it's nothing more than an axiom, just like the axiom of commutativity of addition.
Eugepros (talk) 07:53, 4 July 2011 (UTC)
- I don't think you have quite got the meaning of full second-order semantics yet. Rather than trying to teach you what's going on, I'm going to recommend that you consult a reference like Shapiro's Foundations without foundationalism, which is a book-length treatment of second-order logic. It discusses the example of the continuum hypothesis in some detail. One thing to be aware of is that Shapiro uses the phrase "logical truth" instead of "logical validity", with the same meaning.
- The difference between "if CH is true" and "if the addition operator is commutative" is that, in full second-order semantics, we have to interpret "all subsets" as all subsets, but we do not have to interpret a fixed symbol like "+" as the standard addition operation in general. — Carl (CBM · talk) 13:17, 4 July 2011 (UTC)
Yes, I haven't grasped the meaning of this "semantics" yet. But I hoped that I could get an unambiguous answer to the simple question: "Is CH true in that semantics?" As long as the answer is unknown, I'm unable to treat a logical equivalent of CH as a "logical validity".
And I still don't see an essential difference between the optional axioms "CH is true" and "addition is commutative". What of it, if we say "all subsets"? It's just words. Is an "uncountable cardinal less than the continuum" one of those "all subsets"? These words say nothing about it.
Now you have almost convinced me that Quine was right in "thinking of second-order logic as not logic, properly speaking". ;-) For the moment I come to the following conclusions:
We don't have an unambiguous example of a logically valid but unprovable sentence. But there are some reasons to say that such a sentence "exists". This is not a problem: classical logic knows a lot of things which "exist", but nobody knows an example of them.
Eugepros (talk) 07:06, 5 July 2011 (UTC)
- We have two sentences, one of which is a validity.
- You can also make concrete examples using the incompleteness theorems, but these are more dependent on exactly which deductive system you have assumed. These sentences will be of the form "Every model of the second order Peano axioms satisfies φ", where φ is a carefully chosen arithmetical sentence. For example, if we make φ the sentence "Con(ZFC)", which is constructed by applying Goedel's incompleteness theorem to ZFC, then the quoted sentence is a second-order logical validity, but it is not provable in ZFC (which includes the usual deductive system for first-order logic). — Carl (CBM · talk) 12:00, 5 July 2011 (UTC)
"... one of which is a validity" - this can be said about absolutely each and all sentence or its negation.
It isn't obvious that the sentence: "Every model of the second order Peano axioms satisfies Con(ZFC)" - is a logical validity. As far as I can see, it's the conclusion from the categoricity of the second-order PA? But if it's a conclusion, then it should be deduced (in some second-order deductive system). Shouldn't it?
Eugepros (talk) 06:46, 6 July 2011 (UTC)
- It isn't true that every sentence, or its negation, is a validity. If a sentence is true in some models and false in some models then neither it nor its negation is a validity.
- Being a logical validity, under a certain semantics, simply means that a sentence is true in every model for that semantics. The statement "Every model of the second-order Peano axioms satisfies Con(ZFC)" can be written as a single sentence Φ of second-order logic. It says "For every set X and every function S from X to X, if (X,S) satisfies all the Peano axioms then (X,S) satisfies Con(ZFC)". The only relation symbol in Φ is equality, and there are no nonlogical symbols in Φ at all.
- Now for every second order structure M (which may be a model of any second-order theory, in any language including equality), we can ask whether Φ is true or not. In other words, is it true that for every (X,S) in M, if (X,S) is a model of the Peano axioms then (X,S) satisfies Con(ZFC)? It is true, because the second-order Peano axioms are categorical in full semantics, so every such (X,S) is isomorphic to the standard model, and the standard model does satisfy Con(ZFC). Thus Φ is true in every model for full second-order semantics, so it is a logical validity. — Carl (CBM · talk) 11:42, 6 July 2011 (UTC)
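- One schematic way to write Φ out, with PA2(X,S) abbreviating the conjunction of the second-order Peano axioms relativized to (X,S) and ConZFC(X,S) abbreviating the arithmetical sentence Con(ZFC) rewritten over (X,S) (addition and multiplication are second-order definable from S), is
- ∀X ∀S [PA2(X,S) → ConZFC(X,S)]
- Both abbreviations can be spelled out using only quantifiers and equality, which is why Φ contains no nonlogical symbols.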
Is CH a sentence of the second-order language? If so, is either CH or its negation "true in ALL models"? What really prevents us from constructing BOTH models: one where CH is true and one where CH is false?
You've said that PA2 has only one model (up to isomorphism). Is CH true in that model?
Concerning "Every model of PA2 satisfies Con(ZFC)": I am not ready to discuss its validity, because it rests on an implicit axiom of ZFC's soundness. And I am not ready to accept its non-deducibility, because a moment ago you deduced it from the categoricity of PA2. ;)
Eugepros (talk) 10:21, 8 July 2011 (UTC)
- As I said above, CH is expressible in second-order logic via the sentence
- For all X, a, b, f, g, R such that X is an uncountable set, a and b are elements of X, f and g are functions from X to X, R is a linear order relation on X, and (X,a,b,f,g,R) is a complete ordered field: for every uncountable set Y there is a surjection from Y to X.
- This is true in every countable model, and it is true in every uncountable model if CH is true and false in every uncountable model if CH is false. So it is a validity if CH is true. If you don't see that, you should work it out as an exercise. I'm afraid this discussion has gone on for too long, since it's not related to editing the Wikipedia article. I recommend finding an expert closer to you to ask in person, or reading Shapiro's book, which explicitly discusses this example. — Carl (CBM · talk) 11:17, 8 July 2011 (UTC)
Few more words about consistency of Second-order logic
[edit]I become more and more convinced that second-order logic is an absurd thing, which resurrects the paradoxes of naive set theory. Let us consider the formula: . Is it a correct second-order formula with a single free variable X? Obviously yes. Does such an X exist? Obviously not, because its existence would imply Russell's paradox. But we can use this X in formulas! What does it mean: ? Is it true or false? Either of these answers is paradoxical. We cannot assign a truth value to a correct sentence of the language?
Eugepros (talk) 06:27, 15 August 2012 (UTC)
- The formula is not a sentence: it has a free variable X. Thus it has no truth value until you fix an interpretation of X. If you do so, the formula is either true (if the interpretation of X is a nonempty set) or false (otherwise). It's not clear how you intend to connect the variable X in this formula with the X in the formula you wrote earlier; you have to somehow combine the pieces in one formula, otherwise you are making no sense. If your intention was to consider the formula , this is again a formula with one free variable X, and it happens to be false for every interpretation of X (because of the first conjunct). If, on the other hand, your intention was to consider the formula , then it is true for every interpretation of X, for the same reason.
- Your argument is not specific to second-order logic. Here's a first-order version, if it helps you see its faultiness. Let us consider the formula: . Is it a correct first-order formula with a single free variable u? Obviously yes. Does such a u exist? Obviously not, because its existence would contradict the law of identity. But we can use this u in formulas! What does it mean: ? Is it true or false? Either of these answers is paradoxical. We cannot assign a truth value to a correct sentence of the language?
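- For concreteness, one first-order instance of this pattern (the particular formulas are chosen here only as an illustration): take the defining formula to be u ≠ u. No u satisfies it, since that would contradict the law of identity, yet a formula such as ∃v (v = u) still has the free variable u, has no truth value until u is interpreted, and is true under every interpretation of u. The conjunction (u ≠ u) ∧ ∃v (v = u) is false under every interpretation of u, and the implication (u ≠ u) → ∃v (v = u) is true under every interpretation of u, exactly parallel to the second-order case.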
Further to the previous question: Is the sentence NOT a tautology?
Eugepros (talk) 10:10, 15 August 2012 (UTC)
- Yes, it is a tautology.—Emil J. 12:34, 15 August 2012 (UTC)
Thank you, Emil, I understand. The sentence , where is a formula with one free variable X, is a tautology, because if such an X doesn't exist, then the premise of the implication is always false. Thus, combining it with , which is also a tautology, we have that is a tautology.
Eugepros (talk) 13:58, 15 August 2012 (UTC)
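Schematically, the pattern used here is: if ¬∃X φ(X) is a tautology, then ∀X (φ(X) → ψ(X)) is a tautology as well, because the premise φ(X) can never hold, and the conjunction ¬∃X φ(X) ∧ ∀X (φ(X) → ψ(X)), being a conjunction of two tautologies, is again a tautology.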
separate article on Henkin semantics?
[edit]It would be helpful to have a separate article Henkin semantics or perhaps "Henkin models". Tkuvho (talk) 18:57, 20 March 2013 (UTC)
- I don't immediately see that it would be very helpful. The only place the term comes up is in the context of second-order logic and similar type theories, and I don't see how splitting it into a separate article would make things easier to explain here or there. Is there some topic in particular that should be covered but is not covered here? One particular concern I have is that there is no general definition of "Henkin semantics" apart from the specific definitions in second-order logic and type theories - so there will be few to no sources on "Henkin semantics" specifically, just sources on second-order logic or type theory. We'd have to extrapolate from those, but then the article would be at risk of being classified "original research" even if the content is completely routine. — Carl (CBM · talk) 19:26, 20 March 2013 (UTC)
- At Skolem's paradox it is asserted that Henkin model is a way of properly understanding the paradox, the latter page being redirected to a subsection of second order logic, but I don't find the link particularly useful in explaining the assertion. Perhaps there is some other way of clarifying this, perhaps at the Skolem page itself. Tkuvho (talk) 12:16, 21 March 2013 (UTC)
- That paragraph is talking about Henkin's proof of the completeness theorem for first-order logic, which is related-in-some-way but not formally the same as Henkin semantics for second-order logic. I copyedited the paragraph. — Carl (CBM · talk) 13:40, 21 March 2013 (UTC)
Syntax and fragments
[edit]Shouldn't the last sentence of the enumeration be talking about a second-order term:
For each natural number k there is a sort of variables that ranges over all functions taking k elements of the domain and returning a single element of the domain. If f is such a k-ary function variable and t1,...,tk are first-order terms then the expression f(t1,...,tk) is a first-order term. — Preceding unsigned comment added by Fdehne (talk • contribs) 10:44, 13 August 2015 (UTC)
- A first-order term is the kind of thing that a function can be applied to. Because we can write "g(f(x))", the expression "f(x)" must be a first-order term. Similarly, an expression such as "f(x)" is evaluated, in a given interpretation, to a first-order element (once "x" has an interpretation as a first-order element). So "f(x)" is a first-order term. Only in third-order logic would we have a function (third-order) that could be applied to a second-order term. — Carl (CBM · talk) 12:09, 13 August 2015 (UTC)
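- A small illustration: with f and g unary function variables, x an individual variable and P an arbitrary unary predicate symbol, f(x) denotes an individual, so g(f(x)) is again a first-order term and sentences such as ∃f ∀x P(f(x)) or ∃f ∃g ∀x (g(f(x)) = x) are well formed. An expression that applies something to the function variable itself, say F(f), would need F to range over properties of functions, which is third-order.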
least-upper-bound
[edit]- (∀ A) ([(∃ w) (w ∈ A) ∧ (∃ z)(∀ u)(u ∈ A → u ≤ z)]
- → (∃ x)(∀ y)[(∀ w)(w ∈ A → w ≤ y) ↔ (x ≤ y)])
Nothing in this formula is telling us that x is also an upper bound. --Igor Yalovecky (talk) 07:16, 31 October 2015 (UTC)
- In the final clause, there is an if and only if. So consider what happens when $y$ takes the same value as $x$. Then $x \leq y$ holds, so $(\forall w)[w \in A \to w \leq x]$ must also hold, so $x$ must be an upper bound for $A$. — Carl (CBM · talk) 12:16, 31 October 2015 (UTC)
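- Worked out a little further: letting y take the value x, the final biconditional becomes
- (∀ w)(w ∈ A → w ≤ x) ↔ (x ≤ x)
- and since x ≤ x holds, the left side must hold as well, i.e. x is an upper bound of A. Conversely, for any upper bound y the biconditional gives x ≤ y, so x is below every upper bound and is therefore the least upper bound.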
Axiomisation of second order logic
[edit]There are pages that actually give an axiomisation for deductive systems based on first order logic, like Hilbert system, Natural deduction or Sequent calculus. These pages actually list out axioms completely and properly explain how the deductive system works. Is there anywhere which does this for second order logic?
All this page says is that the single sort of first order logic (i.e. individuals) is extended with a number of different sorts (k-ary relations and k-ary functions, for every natural k). And so I'm assuming that the axioms that govern quantification are applied to these sorts as well. But how would you deduce something like:
where , , are unary relations?
It seems to me that there need to be some extra axioms of the form:
- If is some first order term where its free variables are individuals and functions , then
, where is a function variable of arity n
- If is some second order formula where its free variables are individuals , functions , and relations , then
, where is a relation variable of arity n
Should things like this be explained more specifically on this page? --AndreRD (talk) 10:52, 16 June 2016 (UTC)
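For what it's worth, the deductive systems usually presented for second-order logic (for instance in Shapiro's book mentioned earlier on this page) take the usual first-order axioms and rules, apply the quantifier axioms and rules uniformly to the relation and function sorts, and add a comprehension scheme of roughly the form
∃P ∀x1 ... ∀xn (P(x1, ..., xn) ↔ φ(x1, ..., xn))
for every formula φ in which P does not occur free, together with an analogous scheme (or a choice principle) for the function sorts.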
Excessive definition
[edit]"A sort of variables that range over sets of individuals."
"For each natural number k there is a sort of variables that ranges over all k-ary relations on the individuals"
Isn't the first merely a particular case of the second with k=1?
109.65.3.224 (talk) 11:51, 22 October 2019 (UTC)
Beautifully written examples section, wish more were like it - but maybe a problem?
[edit]Seriously, my thanks to the author of that.
But it has
In first-order logic a block is said to be one of the following: a cube, a tetrahedron, or a dodecahedron:[3]: 258
¬∃x (Cube(x) ∧ Tet(x) ∧ Dodec(x))
but that doesn't satisfy "one of". You could have an x such that
Cube(x) ∧ Dodec(x)
i.e. it is two of them. It's not Tet(x). — Preceding unsigned comment added by 92.7.32.29 (talk) 11:15, 6 March 2022 (UTC)
- Thanks for posting! I agree this is confusing, and I fixed it by tweaking the example. Before, it was talking about the property that *no object has all shapes simultaneously*, but I rewrote it to be the more natural property that no object has two different shapes.
- However, something else is bothering me. The examples section includes the notation "Shape(P)" which appears to be some kind of predicate over set variables. Although I'm primarily familiar with first-order logic, I wasn't aware that such meta-predicates are allowed in second-order logic. The syntax section doesn't seem to discuss these. Is this intended to be some kind of shorthand? Caleb Stanford (talk) 18:39, 6 March 2022 (UTC)
- Assuming the meta-predicate Shape is not actually allowed, then funnily enough, first-order logic is actually a better fit for the example. First-order logic allows having a sort of Shapes, and a sort of Objects, allows the Shape predicate, an Object predicate and the HasShape(x, y) predicate between objects and shapes to denote that an object has a certain shape. Caleb Stanford (talk) 18:41, 6 March 2022 (UTC)
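- As a sketch of that two-sorted reformulation (using the predicates suggested above): with x ranging over objects and p, q over shapes, "no object has two different shapes" becomes ∀x ∀p ∀q ((HasShape(x, p) ∧ HasShape(x, q)) → p = q), and "every object has a shape" becomes ∀x ∃p HasShape(x, p); the Object and Shape predicates are only needed if everything is kept in a single sort.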
- @Caleb: I haven't the ability to evaluate what you say above though so I'll have to retire (it's past my basic predicate calculus level). Keep it up, and thanks! — Preceding unsigned comment added by 92.19.6.159 (talk) 09:12, 8 March 2022 (UTC)
Examples section presents third-order logic, not second-order
[edit]OK, so I took a look at the corresponding SEP article and confirmed that indeed, meta-variables for "properties of properties" (such as the presented Shape predicate in the example) are not allowed in second-order logic. See this excerpt from SEP:
It is noteworthy that although we have property variables we do not have variables for properties of properties. Such variables would be part of the formalism of third order logic, see §12.
As a result, I placed the "Disputed section" template in the section for now. It needs to be rewritten, but the shape example is much less compelling without a meta-property for shapes, so I fear it needs a substantial rewrite. Thoughts welcome Caleb Stanford (talk) 18:48, 6 March 2022 (UTC)