Wikipedia:Reference desk/Archives/Mathematics/2010 June 16
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 16
Could someone please look at this supposed solution to the Collatz conjecture
Could someone please look at this edit and determine whether it's legitimate. It seems like a radical change, something that would be obvious to everyone in the field. Shadowjams (talk) 04:20, 16 June 2010 (UTC)
- It's bogus. See [1] and identify relevant points as an exercise. And a valid proof would be in Annals of Mathematics, not sites.google.com. 75.57.243.88 (talk) 05:21, 16 June 2010 (UTC)
- As far as my superficial understanding of the problem can tell, this paper doesn't trigger any of Scott's heuristics. Also, I don't see a contradiction in a discovery being published informally before we can see it in a journal. -- Meni Rosenfeld (talk) 08:17, 16 June 2010 (UTC)
- It triggers #1 in that it's a PDF and not TeX, though that's a bit iffy since he might have used TeX to generate the PDF. From what I read there is a definite hit on #7 and #8, though. In any case, there are reasons why WP has criteria for reliable sources, and keeping it from being a forum for the discussion of dubious claims is one of them. We tend to be lax on this in math articles in general, but a sharp eye should be kept on articles about well-known unsolved problems.--RDBury (talk) 12:28, 16 June 2010 (UTC)
- That paper was obviously written in LaTeX. — Carl (CBM · talk) 12:38, 16 June 2010 (UTC)
- I don't even know how to parse "it's a PDF and not TeX". What would you expect from something that is TeX? A source *.tex file? DVI? That would be suspicious. -- Meni Rosenfeld (talk) 13:50, 16 June 2010 (UTC)
- Whatever, that wasn't my main point anyway.--RDBury (talk) 03:21, 17 June 2010 (UTC)
- I had a quick look, and the end with its cardinalities looks wrong to me, but it should be left to a professional to check. The main point here, though, is that Wikipedia cannot accept WP:Original research even if it is correct, so it should not be in the article. It needs to be properly checked and published first, especially as it is a long-standing conjecture. Dmcq (talk) 08:34, 16 June 2010 (UTC)
In general, it simply isn't our place here to evaluate these things. If the proof is correct, it will be discussed in public relatively soon on professional mailing lists, and we can use those to add a (qualified) statement about the solution to our article. Until then, we shouldn't link to a self-published preprint that claims to solve a famous open problem. One sign that the proof is correct: someone else will rewrite it in a clearer way. For an example of this in history, look at the "PRIMES is in P" proof, which was recognized very quickly as correct and rewritten by Dan Bernstein, like so: [2].
That being said, this paper is not obviously bad. The author clearly has some mathematical background and the exposition is clear enough that the results can be checked in principle by someone with the background and patience to do so. And FWIW I am a professional mathematician. — Carl (CBM · talk) 12:58, 16 June 2010 (UTC)
- The abstract rings some alarm bells though: "The proof of Collatz theorem uses an astonishing involutive monoid automorphism... " (my emphasis) and "Have fun !" AndrewWTaylor (talk) 13:39, 16 June 2010 (UTC)
- If someone had proved it I'd be quite happy for them to drop anal-retentive mode for a moment. Personally I'd prefer a bit more of the tongue in cheek and personal stuff like you get in Maths Intelligencer. Dmcq (talk) 15:10, 16 June 2010 (UTC)
- FWIW, I was indeed rather astonished that the thing makes a well-defined automorphism.—Emil J. 16:32, 16 June 2010 (UTC)
I'm afraid that in the middle of page 5, is not, in fact, compatible with the monoid structure of M (i.e., it is not a congruence): for example, , but , as .—Emil J. 16:26, 16 June 2010 (UTC)
Partitions of n
Is there an explicit formula for the number of partitions of n, i.e. for p(n)? I've read the article and understand that p(n) is given by the coefficient of $x^n$ in $\prod_{k=1}^{\infty} \frac{1}{1-x^k}$.
Let's say I wanted to know p(n) for all n ≤ 49; would I only need to compute the Taylor series of $\prod_{k=1}^{49} \frac{1}{1-x^k}$ up to order 49?
- •• Fly by Night (talk) 13:59, 16 June 2010 (UTC)
- Yes, because the rest of the product is 1 + terms of exponent greater than 49.—Emil J. 14:48, 16 June 2010 (UTC)
- However, I think the more efficient way is the one described in the "Intermediate function" section. -- Meni Rosenfeld (talk) 15:06, 16 June 2010 (UTC)
- Maybe, maybe not. It depends on the method being used. I've got a computer algebra package, so it takes half a second to calculate the coefficients of my last product. I just wanted to make sure I wouldn't miss any terms by truncating the product prematurely. •• Fly by Night (talk) 15:12, 16 June 2010 (UTC)
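For anyone following along without a computer algebra package, here is a minimal sketch of the truncated-product computation in C# (the language used further down this page; the variable names are my own, not from the thread). Multiplying in the factor 1/(1 - x^k) for each k is the convolution loop below, and everything beyond x^49 can simply be dropped:

int max = 49;
long[] p = new long[max + 1];       // p[n] will hold the number of partitions of n
p[0] = 1;                           // the empty product contributes the constant term 1
for (int k = 1; k <= max; k++)      // multiply by 1/(1 - x^k) = 1 + x^k + x^{2k} + ...
    for (int n = k; n <= max; n++)  // truncated at x^max, as discussed above
        p[n] += p[n - k];
// Sanity check: p[10] should come out as 42.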
Partitions of n with conditions
How would I work out the partitions of n given the assumption that there are exactly six numerically distinct summands and each of the summands is less than a given number? The motivation for this question is the lottery. We have balls numbered from 1 through to 49, and six balls are drawn at random from the 49. I would like to know how many different draws would give balls whose face values sum to a given number. Clearly there is only one combination that sums to 21, namely 1 + 2 + 3 + 4 + 5 + 6 = 21. Similarly, there is only one combination that sums to 279, namely 44 + 45 + 46 + 47 + 48 + 49 = 279. So, given 21 ≤ n ≤ 279, how many partitions of the form $n = a_1 + a_2 + a_3 + a_4 + a_5 + a_6$ are there, given that $1 \le a_k \le 49$ and $a_k \ne a_l$ if $k \ne l$? •• Fly by Night (talk) 14:14, 16 June 2010 (UTC)
- Well, you can get a simple upper bound by considering the sum (e.g. X) divided into n sections. We have n-1 "separators", so the supremum would be $\binom{X+n-1}{n-1}$ (choosing n-1 elements from a list of X+n-1 objects). However, this does allow repeated values, so it is not the least upper bound. -mattbuck (Talk) 14:43, 16 June 2010 (UTC)
- Thanks, but I'm looking for an explicit value; not a bound. One key premise was that the values can't repeat. •• Fly by Night (talk) 15:05, 16 June 2010 (UTC)
- There are simple solutions with either condition relaxed, either allowing repeats or having no upper limit like 49, but I don't know an easy solution to your problem. For something like this, just running a straightforward program might be best. You'd need a compiled language like C to get the answers quickly, rather than Basic, if it is done in a straightforward way with these numbers. So if you're not into programming, a bit of help might be a good idea. Dmcq (talk) 15:41, 16 June 2010 (UTC)
- [ec] Just getting the obvious out of the way: For sufficiently small numbers, you can brute-force it. For 6 and 49, it took my naive implementation less than a minute to get results for every n. -- Meni Rosenfeld (talk) 15:45, 16 June 2010 (UTC)
I thought about using a program, but there are far too many cases to check. I'll give it a go, but I think my computer will just freeze. I'll report back ASAP. •• Fly by Night (talk) 17:03, 16 June 2010 (UTC)
- Make sure that your variables are ascending, that is, each starts at a value one more than the previous. This gives you 20 million cases (each being a simple addition) which is well within the abilities of even a weak computer in an unoptimized programming environment, and will give you results for every n.
- Also, if you're content with an approximate solution, you don't have to enumerate all possibilities - you can choose the variables randomly, and repeat enough times to get the required error bounds. -- Meni Rosenfeld (talk) 18:32, 16 June 2010 (UTC)
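A rough sketch of that sampling idea (C#, not from the thread itself; the trial count is an arbitrary choice): draw six distinct balls uniformly at random, tally the sums, and scale the tallies by C(49,6)/trials to estimate the exact counts.

var rng = new Random();
int trials = 1000000;
long[] hits = new long[280];            // hits[n] counts sampled draws whose sum is n
for (int t = 0; t < trials; t++)
{
    var balls = new HashSet<int>();
    while (balls.Count < 6)
        balls.Add(rng.Next(1, 50));     // uniform in 1..49; the set rejects duplicates
    int sum = 0;
    foreach (int b in balls) sum += b;
    hits[sum]++;
}
// Estimated number of draws with sum n: hits[n] * 13983816.0 / trials (13983816 = C(49,6)).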
- Here's a naive C# implementation (the same approach should work in similar languages) that took 30 milliseconds on my machine. I've tried optimizing it a bit, but that didn't have a practical effect.
int[] results = new int[280];   // results[n] will count the draws summing to n
for (int x1 = 1; x1 < 50; x1++) {
    for (int x2 = x1 + 1; x2 < 50; x2++) {
        for (int x3 = x2 + 1; x3 < 50; x3++) {
            for (int x4 = x3 + 1; x4 < 50; x4++) {
                for (int x5 = x4 + 1; x5 < 50; x5++) {
                    for (int x6 = x5 + 1; x6 < 50; x6++) {
                        results[x1 + x2 + x3 + x4 + x5 + x6]++;
                    }
                }
            }
        }
    }
}
Why not simply use generating functions? Evaluating $\sum_{n_1=1}^{\infty} \sum_{n_2=n_1+1}^{\infty} \cdots \sum_{n_6=n_5+1}^{\infty} x^{n_1+n_2+\cdots+n_6}$ is quite easy. This is without the restriction that the numbers are smaller than 50, but with that restriction you can still exactly evaluate the summation and extract the answers using series expansions. Count Iblis (talk) 17:52, 16 June 2010 (UTC)
This is what I get when I work out the correct generating function, taking into account the restriction that the numbers be strictly increasing and not be larger than N, using Mathematica
Putting N = 49 and expanding to order 279 gives:
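For reference, a closed form that the restricted generating function is equivalent to (this is a standard identity, not a transcription of the Mathematica output): substituting $b_i = a_i - i$ turns six strictly increasing values in [1, N] into a partition with at most six parts, each at most N - 6, so
$\sum_{1\le a_1<\cdots<a_6\le N} x^{a_1+\cdots+a_6} = x^{21}\binom{N}{6}_x = x^{21}\prod_{j=1}^{6}\frac{1-x^{N-6+j}}{1-x^{j}}$,
a Gaussian binomial coefficient. For N = 49 its coefficients are exactly the counts asked for, with the degree running from 21 to 279.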
- How much time did computing the expansion take? -- Meni Rosenfeld (talk) 19:42, 16 June 2010 (UTC)
- I asked Mathematica this by evaluating Timing[Series[%,{x,0,279}]] and it says 2.69 seconds. Not bad for my antique 400 MHz processor :). Also, generating the generating function is very easy, you just iterate this process: sum the geometric series $x^k$ from k = r to N; then replace r by k + 1, multiply the result by $x^k$, and sum again from k = r to N. At this stage you have summed over $n_6$ and $n_5$, so you have to repeat this process until you have summed over $n_1$. At the last step you, of course, set r = 1. Count Iblis (talk) 21:05, 16 June 2010 (UTC)
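The same coefficients can also be built numerically instead of symbolically; a minimal C# sketch of the idea (my own arrangement, not the Mathematica code above) adds the ball values 1..49 one at a time, which is the numeric counterpart of the iterated sums just described.

long[,] count = new long[7, 280];       // count[k, s] = ways to pick k distinct values seen so far with sum s
count[0, 0] = 1;
for (int v = 1; v <= 49; v++)           // bring in ball value v
    for (int k = 6; k >= 1; k--)        // k runs downwards so each value is used at most once
        for (int s = 279; s >= v; s--)
            count[k, s] += count[k - 1, s - v];
// count[6, n] is the exact number of draws summing to n; e.g. count[6, 21] == 1 and count[6, 279] == 1.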
I was using Maple, and have 2 GB of RAM and a Pentium Dual Core at 2.16 GHz and 2.17 GHz, and I stopped the procedure after two and a half hours and 6 MB of memory use. Although I was just using simple commands: for a from 1 to 49 do, etc. I don't have the faintest idea how to use C, C++, C#, or anything like that. •• Fly by Night (talk) 21:40, 16 June 2010 (UTC)
- It's amazing what you can do with generating functions, I'm impressed. Anyway you've got the complete solution in one rather long line there. Dmcq (talk) 22:28, 16 June 2010 (UTC)
- It really shouldn't have taken that long. Can you post the input you gave to Maple? -- Meni Rosenfeld (talk) 06:38, 17 June 2010 (UTC)
- This shorter line is sufficient to show the point that the coefficient of $x^{21}$ is 1 and that the coefficient of $x^{279}$ is also 1
- Bo Jacoby (talk) 06:44, 17 June 2010 (UTC).
My Maple input, well, it's a little embarrassing. I did this:
Total := 0:
for a from 1 to 49 do
  for b from 2 to 49 do
    for c from 3 to 49 do
      for d from 4 to 49 do
        for e from 5 to 49 do
          for f from 6 to 49 do
            if a + b + c + d + e + f = 100 and a < b and b < c and c < d and d < e and e < f then
              Total := Total + 1:
            fi:
          od:
        od:
      od:
    od:
  od:
od:
•• Fly by Night (talk) 23:07, 17 June 2010 (UTC)
- The innermost expression here is computed approximately $N^6$ times and is complicated, compared to Meni Rosenfeld's, which is computed about $N^6/6!$ times and is quite simple. If the expression takes 5 times longer, then the whole program would take about 3600 times longer: a second becomes an hour. So if his naive implementation was in Maple too, on a comparable machine this algorithm would take two and a half days. Comparing to the C# taking 30 milliseconds: if it were coded using this algorithm it would take two minutes. It looks to me like the Maple is 1000 times slower. That seems a bit high to me, I'd have thought they'd have done a bit more optimisation for speed, but it wouldn't be wholly unreasonable. Dmcq (talk) 07:57, 18 June 2010 (UTC)
- For the record, my 1-minute implementation was in Mathematica. Yeah, I was also surprised how slow it was compared to C# (the two implementations were conceptually the same). -- Meni Rosenfeld (talk) 11:33, 18 June 2010 (UTC)
- As I suspected, you didn't take my advice to start each variable at a value one higher than the last. As Dmcq says, this requires ×720 as many cases and makes each one longer. You should have done:
Total := 0:
for a from 1 to 49 do
  for b from a+1 to 49 do
    for c from b+1 to 49 do
      for d from c+1 to 49 do
        for e from d+1 to 49 do
          for f from e+1 to 49 do
            if a + b + c + d + e + f = 100 then
              Total := Total + 1:
            fi:
          od:
        od:
      od:
    od:
  od:
od:
- Also, this will give you the result for just one value of n. You can get them all with the same effort by defining an array with an entry for each n, and each time incrementing the array element indexed by the sum.
- Anyway, I remind you that if you just want the results, use Count Iblis' polynomial. -- Meni Rosenfeld (talk) 11:33, 18 June 2010 (UTC)
- Fastest access to the values would be to preload an array and read the values from it, in a language like C# or even assembler. Slowest is to use an interpreted language and an unoptimised algorithm for each value. Here the difference in speed would be about 100,000,000,000,000 to 1. It just shows how easy it is to lose a few factors of a thousand and not notice. You'd never have done that with an old visible record computer! Dmcq (talk) 13:13, 18 June 2010 (UTC)
- re: "As Dmcq says, this requires ×720 as many cases ..."; note that Dmcq wrote, "about $N^6/6!$ times". $49^6 / \binom{49}{6} \approx 990$. -- 58.147.53.173 (talk) 15:17, 18 June 2010 (UTC)
- Dmcq said "approximately $N^6$" vs. "about $N^6/6!$", which is ×720. In fact, two wrongs make a right here, as it's really $49\cdot48\cdot47\cdot46\cdot45\cdot44$ vs. $\binom{49}{6}$, which is still ×720. -- Meni Rosenfeld (talk) 18:42, 19 June 2010 (UTC)
- Here is a Perl one-liner which uses recursion, both for obfuscation and for ease of changing the number of summands without adding additional loop variables:
- perl -le 'sub f{if(@_<6){f(@_,$_)for($_[-1]+1..49)}else{$i=0;$i+=$_ for@_;++$s[$i]}}f();print"$_ $s[$_]"for(1..$#s)'
- Runtime (98 seconds on my netbook) is about 4 times longer than with the nested variables.
- An improvement to the nested loop approach would be for a to range from 1 to 44, b from a+1 to 45, ..., f from e+1 to 49.→81.147.3.245 (talk) 11:38, 19 June 2010 (UTC)
Wolfram Alpha
Can I get Wolfram Alpha to calculate the area of a quadrilateral given its vertex co-ordinates? I can't seem to get it to compute using the syntax "quadrilateral vertex coordinates {(x,y),(x,y),(x,y),(x,y)}".
Thanks in advance,
PerfectProposal 14:47, 16 June 2010 (UTC)
- Works for me. You can also start with "area of a" to remove the unneeded stuff. -- Meni Rosenfeld (talk) 15:16, 16 June 2010 (UTC)
- If your polygon is a quadrilateral, you may well do that yourself with a simple calculator... For the solution see article section Polygon#Area and centroid (formula for A). --CiaPan (talk) 18:44, 18 June 2010 (UTC)
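For the record, the formula in Polygon#Area and centroid that CiaPan points to is the shoelace formula, which takes only a few lines to apply yourself; a sketch in C# (assuming the four vertices are listed in order around the quadrilateral, without crossings):

static double PolygonArea(double[] xs, double[] ys)
{
    // Shoelace formula: half the absolute value of the sum of cross products of consecutive vertices.
    double twiceArea = 0;
    int n = xs.Length;
    for (int i = 0; i < n; i++)
    {
        int j = (i + 1) % n;                        // next vertex, wrapping around to the first
        twiceArea += xs[i] * ys[j] - xs[j] * ys[i];
    }
    return Math.Abs(twiceArea) / 2;
}
// Example: PolygonArea(new double[] {0, 1, 1, 0}, new double[] {0, 0, 1, 1}) == 1 (the unit square).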