Wikipedia:Reference desk/Archives/Mathematics/2012 April 13
Mathematics desk
< April 12 | April 14 >
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 13
hypothetical
Suppose there is a prime that is the last one within human ken, and afterward there is such a stretch of non-primes that the next prime could not even be represented (as bits) by every atom in our universe, nor be computed algebraically or approximated or in any other way be reached.
This seemingly contradicts the prime-density graph we sometimes see, which shows the density tapering off gradually, like http://en.wikipedia.org/wiki/File:PrimeNumberTheorem.svg - but could my thought experiment somehow be true anyway, or is there a mathematical reason it couldn't? --80.99.254.208 (talk) 09:04, 13 April 2012 (UTC)
- The Bertrand-Chebyshev theorem tells us that if we have a prime p then the next prime will be less than 2p. Gandalf61 (talk) 09:51, 13 April 2012 (UTC)
- So there is no large gap, but there's still some largest prime small enough to be represented as bits by atoms in the universe (assuming a finite universe, or we could say in the observable universe), and a smallest one that is too large. Rckrone (talk) 14:03, 13 April 2012 (UTC)
- Not sure that concept makes much sense. Why atoms? Why not electrons, protons and neutrons? Why not quarks? Why not include photons? And no matter which particles you use in your definition, every time a particle is created or annihilated your number of bits changes by one, which means your range doubles or halves and your "largest prime" changes. Plus the size of the observable universe is changing. So this "largest prime" is just not a well-defined concept. Gandalf61 (talk) 15:27, 13 April 2012 (UTC)
- I'm not sure why it matters that the definition is somewhat arbitrary. Also not sure why it matters that the number changes. No one said it had to be constant for all time. I don't see what about the number is not well-defined. Rckrone (talk) 01:42, 14 April 2012 (UTC)
- For a finite volume containing a finite amount of mass / energy, the Bekenstein bound from quantum mechanics produces an upper bound on the possible number of configuration states that the system can attain (including things like particle creation and annihilation). Hence, this places an upper limit on the amount of information required to represent the system. For the current volume and mass of the observable universe, this limit implies there is "only" a finite, though unimaginably large, number of possible states that the universe can exhibit. Dragons flight (talk) 19:12, 13 April 2012 (UTC)
- So anyone with 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 bits of storage outside our universe can just model the whole of our universe. Neat-o. 188.157.185.167 (talk) 20:20, 13 April 2012 (UTC)
- I know it is a common practice, but I have to object to conflating the observable universe with the universe. The universe as a whole may be infinite, and even if it is finite there is no known bound to its size. --Trovatore (talk) 02:34, 14 April 2012 (UTC)
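(A side note on the state counting a couple of replies up: a system with 2^N distinguishable states can be pinned down by N bits, which is the bookkeeping behind the "bits of storage" reply. The Python sketch below uses a small stand-in value of N, not the actual Bekenstein-bound figure, which is far too large to instantiate.)

```python
# Toy version of the states-to-bits bookkeeping: a system with 2**N distinguishable
# states can be specified by N bits. N is a small illustrative stand-in here,
# not the Bekenstein-bound figure for the observable universe.
N = 10**5
states = 2 ** N                                  # number of distinguishable configurations
bits_to_specify_one = states.bit_length() - 1    # equals N exactly
print(bits_to_specify_one == N)                  # True
```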
This is not a very rigorous argument, but I find it very hard to believe that for any prime p, there is bound to be another prime less than 2p. I mean take something like 5. It almost fails: 5, 6, 7, 8, 9, 10. And this is where the primes are densest (near the origin). --80.99.254.208 (talk) 16:18, 13 April 2012 (UTC)
- Roughly put, the prime number theorem says the typical distance between primes around p is log(p), where log is the natural logarithm. For example, the typical distance between n-digit primes around 10^n is log(10^n) = log(10)×n ≈ 2.3×n. This soon becomes very small compared to 10^n, so if we avoid tiny near-misses like 5 then the Bertrand-Chebyshev theorem sounds very plausible. It has also been proved (by Chebyshev). PrimeHunter (talk) 16:51, 13 April 2012 (UTC)
- To me the theorem "sounds" right; I don't know why, but it seemed logical. What surprised me more was "In 2011, Andy Loo proved that there exists a prime between 3n and 4n". Somehow, I expected it to be between 3n and 5n... Ssscienccce (talk) 19:03, 13 April 2012 (UTC)
- 80, where you're going wrong is that you're ignoring the number of chances to hit a prime between p and 2p. Yes, the density is higher when p is smaller, but it falls off slowly as p increases, whereas the number of "candidate primes" in the interval increases linearly with p. --Trovatore (talk) 02:04, 14 April 2012 (UTC)
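(For anyone who wants to poke at these claims numerically: the sketch below uses SymPy to check, for primes below an arbitrary bound of 10^5, that the next prime is always below 2p, and to compare actual gaps with the log(p) heuristic. It is an illustration, not a proof.)

```python
# Empirical check of the Bertrand-Chebyshev claim (the next prime after p is below 2p)
# and of the prime-number-theorem heuristic that gaps near p are typically about log(p).
# Illustration only, for primes up to an arbitrary small bound; not a proof.
import math
from sympy import nextprime, primerange

ratios = []
for p in primerange(2, 100_000):
    q = nextprime(p)
    assert q < 2 * p                 # Bertrand-Chebyshev: never fails
    ratios.append((q - p) / math.log(p))

print("average gap / log(p):", round(sum(ratios) / len(ratios), 2))
print("largest gap / log(p):", round(max(ratios), 2))
```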
- If you are talking about human beings, then yes, of course there is a largest number for us and we'll never be able to understand a representation of a larger number. It will have to be pretty huge, but in essence we are limited, just not quite so much as some bird which doesn't notice the difference between six or seven eggs in its nest. That's a function of the finite bounding size of our brains. Dmcq (talk) 10:30, 14 April 2012 (UTC)
- No, I wasn't addressing that point at all. I was addressing 80's difficulty in believing that there's always a prime between p and 2p. --Trovatore (talk) 21:15, 17 April 2012 (UTC)
- I don't know, we can represent numbers like Graham's number as the end result of a particular constructive operation, and say useful things about it even though an explicit binary representation of the number couldn't fit in our known universe. In fact it is pretty easy: a short power-tower expression already names a number too large to write down in binary in our universe, yet it takes almost no time to write the expression down. It may be hard to find very large numbers that are in any sense useful, but I'm not sure there is any particular limit on how large a number we can find a way of representing. Dragons flight (talk) 19:24, 14 April 2012 (UTC)
- That's the beauty of symbols. We can never write down the decimal expansion of e or pi, but we can refer to them extremely simply. I could define "Jack's Very Large Number" to be Graham's number to the power of Graham's number, to the power of Graham's number, to the power of Graham's number ...... a Graham's number number of times. Impossibly large, yes? But we can still conceive of the possibility of such a number and refer to it by whatever name we choose to give it. -- ♬ Jack of Oz ♬ [your turn] 08:30, 15 April 2012 (UTC)
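(To make the "too large to write down, easy to name" point concrete: the sketch below counts the binary digits of a googolplex, 10^(10^100), using only logarithms, and compares that with a rough figure of 10^80 atoms in the observable universe. The choice of googolplex and the atom count are illustrative assumptions, not anything stated in the thread.)

```python
# How many binary digits would a googolplex, 10**(10**100), have? We never construct
# the number; log2(10**(10**100)) = 10**100 * log2(10) is all we need. The ~10**80
# atom count for the observable universe is a rough, commonly quoted figure.
import math

bits_needed = 10**100 * math.log2(10)     # about 3.3e100 bits
atoms_available = 1e80                    # rough order-of-magnitude figure

print(f"bits needed to write a googolplex in binary: ~{bits_needed:.2e}")
print(f"atoms in the observable universe (rough):    ~{atoms_available:.0e}")
print("one bit per atom would suffice?", bits_needed < atoms_available)
```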
Time complexity of solving a polynomial
What is the time complexity for computationally solving a polynomial? Is it linear in the degree of the polynomial? I can find articles stating one technique has lower complexity than another, but nothing stating anything absolute. Thanks. --Iae (talk) 14:24, 13 April 2012 (UTC)
- You may want to be more specific about what you mean by "solving". If you allow numerical approximations, then how precise does it need to be to count as a "solution"? If you don't allow numerical approximations, then what exactly do you mean by solving? You should also say exactly how you are measuring the complexity: something like Newton's method will require evaluating the polynomial at various points. Will you say that the evaluations require constant time, or linear in the degree? Staecker (talk) 16:37, 13 April 2012 (UTC)
- I think I'd just be interested in any good resources or articles regarding it generally, perhaps slanted towards a beginner, as I don't have a particular case in mind in order to answer your questions. Approximations are what I'm interested in though. Thanks. --Iae (talk) 16:56, 13 April 2012 (UTC)
- I think it can't be linear or polynomial in the degree of the polynomial, because the number of bits in the coefficients must have an impact on the computing time.
- My restatement of the OP's question coincides with something I've long wondered about, and have been unable to find an answer to in Wikipedia:
- You are given a polynomial of degree n in which the total number of bits necessary to state the coefficients is N. For pre-specified ε > 0, does such-and-such algorithm in the worst-case scenario get you to within ε of an arbitrary (not pre-specified) root with a number of computer steps that is bounded above by a polynomial in n and N?
- If you haven't already, you could take a look at Root-finding algorithm#Finding roots of polynomials, which unfortunately as far as I can see does not answer the question for any algorithm, but which compares quite a number of algorithms. Duoduoduo (talk) 21:09, 13 April 2012 (UTC)
Any given method uses different amounts of time for factoring different polynomials of degree n>1. For any method there exist polynomials which cannot be factored in finite time. This is of course worst-case. Bo Jacoby (talk) 06:59, 14 April 2012 (UTC).
- I'm puzzled by your response, Bo. This thread is about finding an approximation to a root for which there may not even exist an expression in radicals (and regardless of whether the polynomial is reducible or irreducible). I don't see what polynomial factorization has to do with it. Duoduoduo (talk) 17:01, 14 April 2012 (UTC)
Yes, I agree. The polynomial has n factors. Knowing the factors is knowing the roots. The task is to compute the roots approximately, say, within ε. A method consists in making an initial guess, s_0, and improving it iteratively by some computation, s_(k+1) = g(s_k, a_0, ..., a_(n−1)), such as for example Newton's: s_(k+1) = s_k − f(s_k)/f'(s_k). The function g is continuous. Let A be an ε-ball around a root. A is a nonempty open set. The sets g^(−m)(A) are open and nonempty too, because g is continuous. The complex plane cannot be entirely covered by n disjoint open nonempty sets for n>1. So there exists an initial guess for which the method does not converge towards any root. This also means that for a fixed initial guess there exist polynomials which the method doesn't solve. Bo Jacoby (talk) 18:50, 14 April 2012 (UTC).
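(For concreteness, here is a minimal Python sketch of the kind of iteration described above, with g given by Newton's update and the polynomial evaluated by Horner's rule. The example polynomial and starting guess are my own illustrative choices, not anything from the thread.)

```python
# Minimal sketch of the iteration s_{k+1} = g(s_k, a_0, ..., a_{n-1}) with g given by
# Newton's update s_{k+1} = s_k - f(s_k)/f'(s_k). Coefficients are listed from the
# leading term down; the example polynomial and starting guess are illustrative only.
def f_and_fprime(coeffs, x):
    """Evaluate f(x) and f'(x) together by Horner's rule."""
    f, df = 0.0, 0.0
    for c in coeffs:
        df = df * x + f
        f = f * x + c
    return f, df

def newton(coeffs, s0, eps=1e-12, max_iter=100):
    s = s0
    for _ in range(max_iter):
        f, df = f_and_fprime(coeffs, s)
        if abs(f) < eps:
            return s
        s = s - f / df
    return None          # no convergence within max_iter from this initial guess

# Example: f(x) = x^3 - 2x - 5, a classic Newton test case; root near 2.0945515
print(newton([1, 0, -2, -5], 2.0))
```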
- Why do you assume A is an open set? Duoduoduo (talk) 18:43, 15 April 2012 (UTC)
- By "ε-ball", he means an open interval of the form (x-ε, x+ε), where x is a root. This is an open set. Staecker (talk) 02:25, 16 April 2012 (UTC)
- A = {x : |x−r|<ε }. An ε-ball around a root r is a disk in the complex plane where r is the center and ε>0 is the radius. Sorry for being unclear. Bo Jacoby (talk) 05:55, 16 April 2012 (UTC).
- By "ε-ball", he means an open interval of the form (x-ε, x+ε), where x is a root. This is an open set. Staecker (talk) 02:25, 16 April 2012 (UTC)
- I understood what you meant by the epsilon ball, by an open set, and by A. My question was why did you focus on an open set in your proof? Why not focus on the closed set B = {x : |x−r| ≤ ε}? Would the proof still have gone through? Duoduoduo (talk) 15:21, 16 April 2012 (UTC)
- No. The proof relies on the complex plane being a connected space. It is not the union of n>1 disjoint open nonempty sets. The set g^(−m)(A) of initial guesses that converge towards some root is an open nonempty set. The set of initial guesses that converge towards some other root is another open nonempty set. These two sets are disjoint. The n sets of initial guesses converging towards the n roots of the polynomial are disjoint open nonempty sets. Their union is not the whole complex plane, because the complex plane is a connected space. So for any polynomial with different roots there are some initial guesses which do not converge towards any of the roots. For example: solving x^2+1=0 by Newton's method doesn't work with a real-valued initial guess. It cannot choose between the two roots i and −i. Bo Jacoby (talk) 07:56, 17 April 2012 (UTC).
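(Bo's closing example is easy to watch in action. The sketch below iterates Newton's update for f(x) = x^2 + 1 from a real starting value; the iterates stay real and never settle on either root, whereas a complex starting value such as 0.5+0.5j converges to i in a few steps. The particular starting values are arbitrary.)

```python
# Newton's method on f(x) = x**2 + 1 from a real starting point: the iterates stay
# on the real line and never converge, since the real axis cannot "choose" between
# the roots i and -i. A complex start such as 0.5+0.5j converges quickly to i.
def newton_step(x):
    return x - (x * x + 1) / (2 * x)      # s_{k+1} = s_k - f(s_k)/f'(s_k)

x = 0.5                                    # arbitrary real starting guess
for k in range(10):
    x = newton_step(x)
    print(k, x)                            # wanders chaotically along the real line

z = 0.5 + 0.5j                             # arbitrary complex starting guess
for k in range(10):
    z = newton_step(z)
print("complex start ends near:", z)       # approximately 1j
```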
Is the winning response in the "Big Number Duel" a paradox
I've been reading up on the Big Number Duel held at MIT. This analysis of the final answer is a nice description for us non-logicians, but is the final answer not a member of its own set, thereby running into Russell's paradox? That is, the answer itself is (I think...) defined in first-order logic, so it is part of the set of numbers definable in that theory with fewer than a googol symbols, so the number it defines is contradictory. So then isn't this entry invalid?
An example: We define our number as the first number in the set that is the complement of the set of numbers that can be defined by a logic of 1000 characters or less. A≅φ[1000]; ∃x∈¬A; x∈φ[1000] ⇔ x∈(A ∩ ¬A)≅∅ SamuelRiv (talk) 17:55, 13 April 2012 (UTC)
- Don't you mean second-order set-theory? Quote from article: "by a formula of second-order set-theory containing no primitive semantic vocabulary." Whether the answer as a whole can be represented by second-order logic, I don't know, not really my strong point. Ssscienccce (talk) 18:51, 13 April 2012 (UTC)
- (ec) Well, the page you linked says that the winning entry was phrased in the second-order language of set theory. Essentially this means you're allowed to quantify over proper classes.
- I find that problematic. Proper classes are not individuals (note: that wikilink isn't quite right, it's more of a CS article, but maybe better than nothing), for if they were, they would have to be sets. So quantifying over them is odd.
- That said, I do think the winning entry specifies a particular number, though it may be very difficult to find anything out about it other than that it is very large. In particular, questions about whether it is bigger or smaller than the number given by some other well-specified description may well be independent of any foundationally relevant theory known to us. --Trovatore (talk) 18:52, 13 April 2012 (UTC)
- Isn't this one bigger: The product of all unique finite numbers m > 1 with the following property: there is a formula φ(x1) in the language of first-order set-theory (as presented in the definition of `Sat') with less than a googol symbols and x1 as its only free variable such that: (a) there is a variable assignment s assigning m to x1 such that Sat([φ(x1)],s), and (b) for any variable assignment t, if Sat([φ(x1)],t), then t assigns m to x1. Ssscienccce (talk) 19:11, 13 April 2012 (UTC)
- I'm not sure how that's responsive. You can certainly specify bigger ones. (By the way, that answer is not really in the spirit of the rules — see the "Gentlemen's Agreement" section.) --Trovatore (talk) 19:15, 13 April 2012 (UTC)
- I withdraw my off-topic remarks; given the context, a duel between "Dr. Evil" and "The Mexican Multiplier", I didn't take the question very seriously. Ssscienccce (talk) 04:21, 14 April 2012 (UTC)
- I refrained from commenting at first, but am glad you realized. The event was basically an advertisement to consider studying philosophy at MIT for the more T-minded. However, that doesn't mean that the professors should get away with slop, if it is indeed that. The Berry paradox article is the crucial one here, though, as it covers many of our objections, albeit in a slightly different context. SamuelRiv (talk) 16:26, 14 April 2012 (UTC)
- Yes, I meant second-order set theory, but it doesn't wikilink nicely. Yeah, I don't get how you can just say that this is second-order and so the paradox disappears. It seems to instead indicate that this number is not... decidable (is that the term?) in this logic. In the student paper's article on the contest, they said that non-computable numbers showed up as okay, but that's not the same problem as we have with this answer, right? I mean, defining a number by how long it takes the Busy Beaver to complete some string is different from defining it by the complexity of its definition.
- But is it really undecidable? Can we compute a large number using, say, three symbols as a kind of lower bound, like S(S(1)) = 3 (Peano's successor function)? Does it work that way? SamuelRiv (talk) 20:03, 13 April 2012 (UTC)
My contribution.
Starting from the observation that there are more large numbers than small ones (so that "most numbers are infinite"), I would like to produce a description that favors an absurdly large, but finite, number, as follows. Let computer a and computer b be capable of infinite-space and infinite-time calculations. a begins with a large prime p, namely the first prime greater than a googolplex, and computes the value (or error condition, if it is a malformed program) of every number (interpreted in binary) up to that prime. The largest of these results is given to b, and b does likewise, using this number instead of p as a did, and returning the largest of its results back to a. Meanwhile, as this exchange goes back and forth (i.e. after the first iteration, when a and b both have a number to work with), a third computer c does the following: multiplying a's and b's numbers, it gets a new bound; trying every number up to that bound as a binary program, c keeps track of the largest of these results. When this "metaresult", which is guaranteed to be ahead of both a's and b's numbers by a good margin, ends with the literal digits of a appended to those of b as its final digits (in decimal), the game ends, and this largest number is returned. I think there is a good probabilistic argument that this OUGHT to happen after a while, so this is well-defined, but you'll also be waiting a DAMN sight for it to actually happen. What do you guys think? 188.156.114.13 (talk) 23:34, 14 April 2012 (UTC)
Definition of "e"
First time I have looked at Wikipedia for alternate definitions of "e." The very first def. is incorrect? e equals the limit as delta x goes to zero of (1+delta x)^(1/delta x). The definition is given as (1 + 1/delta x)^x. I am new to this. Can someone respond? — Preceding unsigned comment added by Llubitz (talk • contribs) 19:01, 13 April 2012 (UTC)
- Where do you see a definition with delta x? e (mathematical constant)#Representations says: e = lim_(n→∞) (1 + 1/n)^n. Note that delta x there corresponds to 1/n. The expression equals the limit as delta x goes to zero of (1+delta x)^(1/delta x). PrimeHunter (talk) 19:16, 13 April 2012 (UTC)
- Or in other words, set x = 1/n; then (1+1/n)^n is the same as (1+x)^(1/x). Ssscienccce (talk) 19:24, 13 April 2012 (UTC)
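(A quick numeric check of the equivalence discussed above; the sample values of n are arbitrary.)

```python
# Both forms approach e: (1 + 1/n)**n as n grows, and (1 + dx)**(1/dx) as dx -> 0.
# The sample values of n are arbitrary.
import math

for n in (10, 1_000, 1_000_000):
    dx = 1 / n
    print(n, (1 + 1/n) ** n, (1 + dx) ** (1/dx), math.e)
```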