Wikipedia:Reference desk/Archives/Mathematics/2010 December 7
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
December 7
Wilson's theorem
Using the fact that Zp* is cyclic, how do I establish Wilson's theorem? Thanks-Shahab (talk) 10:34, 7 December 2010 (UTC)
- The obvious way to prove Wilson's theorem is to pair up each element of Zp* with its multiplicative inverse. I don't see how cyclicity has anything to do with it, you only need the fact (valid in any integral domain) that ±1 are the only square roots of 1.—Emil J. 15:04, 7 December 2010 (UTC)
- You can use cyclicity to write (p−1)! as g^(1+2+...+(p−1)) where g is a generator, observe that 1+2+...+(p−1) = p(p−1)/2, g^(p(p−1)/2) = (g^((p−1)/2))^p, and (p−1)/2 is an integer (assuming p > 2), and reason about the square roots of 1. But I think Emil's idea is cleaner. —Bkell (talk) 16:37, 7 December 2010 (UTC)
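Emil's pairing argument is easy to check numerically. Below is a small sketch (standard library only; the function name is illustrative) that computes (p−1)! mod p by multiplying only the self-inverse elements of Zp*, since every other element cancels against its distinct inverse:

```python
# Numerical check of Wilson's theorem, (p-1)! ≡ -1 (mod p), via the
# pairing argument: each a in Zp* with a != a^(-1) cancels against its
# inverse, so the whole product reduces to the self-inverse elements
# 1 and p-1, whose product is -1 mod p.
def wilson_product(p):
    """Compute (p-1)! mod p, multiplying only self-inverse elements."""
    product = 1
    for a in range(1, p):
        inv = pow(a, -1, p)        # modular inverse (Python 3.8+)
        if a == inv:               # only a = 1 and a = p-1 when p is prime
            product = product * a % p
        # pairs with a != inv contribute a * inv = 1 and are skipped
    return product

for p in [2, 3, 5, 7, 11, 13, 101]:
    assert wilson_product(p) == p - 1   # p - 1 ≡ -1 (mod p)
```

For a prime p, only ±1 are square roots of 1, so exactly the elements 1 and p−1 survive the pairing; for composite moduli the loop would multiply more self-inverse elements and the identity fails.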
Compression of Primes
I'm currently considering keeping the first however-many primes as some kind of optimal compression of the third differences. This would theoretically allow use of the prime() and pi() functions reasonably efficiently for large numbers. A few questions arise: 1) For future reference since my computer proper--I'm using a cellphone--is not online now or in the near future, is there a link available to command the functions from off-site for PARI/GP (ideally) or anything else? This would obviously be more efficient than what I have in mind. 2) Is there a more reasonable idea of how to indirectly compress the primes? Presumably this has been studied a little. 3) If not or if you're not sure, is there a best (or really good) way to compress these third differences that anybody can think of?
- I'm now considering what might be a more reasonable and faster storage/retrieval method. By keeping a table of permissible residue classes modulo 19# and storing the primes according to the differences in (cycle of) indices of these classes, perhaps with a further compression for average shifts with size and a listing of anomalous jumps, one can reduce the individual primes to an average of three bits up to a very large number without incurring much more retrieval-time cost than about double the time of adding or subtracting the differences from the closest of however-many primes are stored explicitly. Does this alternative sound reasonable? Julzes (talk) 18:51, 7 December 2010 (UTC)
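For comparison with the residue-class scheme above, here is a much simpler baseline sketch (not the mod-19# method described; all function names are illustrative): store one byte per half-gap between consecutive primes, costing about 8 bits per prime rather than the ~3 bits hoped for, and recover a prime by summing gaps from an explicitly stored anchor.

```python
# Minimal gap-based prime "compression" baseline: one byte per half-gap.
# Every gap between primes > 2 is even, and gaps stay below 512 far past
# any range this sketch will see, so a half-gap fits in one byte.
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def compress(primes):
    """Encode the primes after 5 as one byte per half-gap."""
    return bytes((q - p) // 2 for p, q in zip(primes[2:], primes[3:]))

def nth_prime_after(k, halfgaps, anchor=5):
    """Recover the k-th prime after `anchor` by summing stored half-gaps."""
    p = anchor
    for hg in halfgaps[:k]:
        p += 2 * hg
    return p

ps = primes_up_to(1000)
hg = compress(ps)
assert nth_prime_after(10, hg) == ps[12]
```

Retrieval here is linear in the distance from the anchor, which is why the original post proposes storing "however-many" primes explicitly as anchors and only summing differences from the nearest one.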
Incidentally, for those who remember me for my peculiar coincidences, earlier this year I discovered that the 4th and 44th primes that translate twice as primes from both base 2 and base 3 to base 10 are those which first translate once and twice as primes from base 4 to base 10 and also both have leading digits 234. Just yesterday I also found the tangentially connected oddity that the first 4 primes that translate 4 times as primes from base 4 to base 10 (excepting 2 & 3, of course) are 5, 29, 73, and 31193. Not only are the first three incredibly small, but (5*29*73)/31193=0.3393390 within accuracy, and the 5th iterate of 31193 has smallest factor 2393 (and the next factor is also nice: 607063) and the next number in the sequence after 31193 is 43093. Julzes (talk) 16:53, 7 December 2010 (UTC)
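For readers wanting to reproduce the "translate from base 4 to base 10" operation used here, a short sketch (the interpretation — write n in base 4 and reread that digit string as a decimal number — is inferred from context). It checks the chain starting at 5:

```python
# "Translate" n from base 4 to base 10: take the base-4 digit string of
# n and reread it as a base-10 number, e.g. 5 = "11" in base 4 -> 11.
def translate(n, base=4):
    digits = ""
    while n:
        digits = str(n % base) + digits
        n //= base
    return int(digits)

def is_prime(n):
    """Trial division; fine for the small numbers checked here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# 5 translates 4 times as primes: 5 -> 11 -> 23 -> 113 -> 1301.
chain, n = [], 5
for _ in range(4):
    n = translate(n)
    chain.append(n)
assert chain == [11, 23, 113, 1301]
assert all(is_prime(c) for c in chain)
```

The same loop, run over successive primes, would let one hunt for the other starting values mentioned (29, 73, 31193).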
- I have no answers about compressing primes. As for web interfaces to powerful computational engines, I'm sure you're familiar with Wolfram Alpha. There's also a web interface to Sage (free registration required). Sage can accept PARI/GP code natively. Staecker (talk) 23:59, 7 December 2010 (UTC)
Sagarin rankings + possibilities
First off, apologies for the ridiculously long post, but if I'm going to work through it, I have to add in all the material! Anyway...
- The background
Jeff Sagarin posts weekly rankings of several sports, including the NFL: [1]. I particularly like to look at the "Pure points" column on his rankings, as it's considerably more accurate than just looking at win/loss records.
Last week, one of the eight divisions, the NFC West, had the ignoble distinction of containing all four of the league's worst teams (or at least I thought it did... read on further). Each division is composed of only 4 teams, so the chances of this occurring at random are ridiculously low. Taking out unusual variables like "physical location of the team in the US contributing to a worse record", I calculate the likelihood of this happening in any given division as follows.
- First team: 32/32 = 1, because it can come from any division.
- Second team: 3/31; there are only 31 teams left, three in the same division as the first team.
- Third and fourth: 2/30 and 1/29 respectively.
- Total, multiply all four factors: (3 × 2 × 1)/(31 × 30 × 29) = 6/26970 = 1/4495 ≈ 0.00022
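The four-factor product can be checked exactly with Python's fractions module; a quick sketch (using the 32 teams in 8 divisions of 4 as stated):

```python
from fractions import Fraction
from math import comb

# Exact version of the four-factor product: the chance that the bottom
# four spots are all occupied by one (unspecified) division.
p_bottom4 = Fraction(32, 32) * Fraction(3, 31) * Fraction(2, 30) * Fraction(1, 29)
assert p_bottom4 == Fraction(1, 4495)

# Cross-check by counting: each of the 8 divisions is exactly the
# bottom-four set with probability 1 / C(32, 4), and these events are
# mutually exclusive.
assert p_bottom4 == Fraction(8, comb(32, 4))
```

Using exact rationals avoids the rounding questions that come up later when factors like 29 and 31 are tracked through the denominator.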
However, upon reexamining the facts, I saw that I'd misread, and a team from another division (the Carolina Panthers) was actually in last place. So the division only had 4 out of the 5 worst teams. This considerably complicates things. So I'd like to recalculate for the statement, what are the chances that four out of the last five teams are all from the same conference? This includes the possibility that the last four are in the same conference. My feeble attempt below:
- The possibility that the last four are all the worst is above: 1/4495.
- The possibility that the last three are the worst, the next is from another conference, and the next after that from the first conference is: (3/31) × (2/30) × (28/29) × (1/28) = 1/4495. The 28/29 term came from the probability of the fourth-to-last team being nonconference. The 28s cancel each other out, and we get the same number as the first.
- And we continue on with the other three possibilities and get a total of 5 × (1/4495) = 1/899 ≈ 0.0011
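The arrangement-by-arrangement sum can likewise be checked exactly; a short sketch:

```python
from fractions import Fraction

# One of the other arrangements worked out above: last three worst,
# then a nonconference team, then the remaining division member.
arr = Fraction(3, 31) * Fraction(2, 30) * Fraction(28, 29) * Fraction(1, 28)
assert arr == Fraction(1, 4495)        # the 28s cancel

# All five placements of the outside team contribute equally:
total = 5 * Fraction(1, 4495)
assert total == Fraction(1, 899)       # 899 = 29 * 31
```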
- Now, finally, on to my questions
- Is the work I did above correct? I'm 99% sure it is. Apologies if the wording is unclear at points too, but I figure I'm dealing with mathematicians who will understand me anyway. :)
- 899 is divisible by 29 and 31. Is it a coincidence that these happen to be two of our factors? Or is there another equation which explains it?
- This week, the numbers read differently: four out of the last seven teams are in the NFC West. How can I recalculate for that? I don't want to physically go through each term again; I'm sure there's a factorial equation out there which is more efficient.
Thanks for your time. Magog the Ogre (talk) 20:19, 7 December 2010 (UTC)
- Your working is right as far as I can tell. If the order of the teams is completely random, then there are 5 choose 4 = 5 equally likely orders for the last five teams, given that some particular group of five teams has the five lowest positions. Therefore you only need to work out the probability of four teams from the same conference being in the lowest five (in any order), multiply it by the probability of a team from another conference taking the final position (this is equal to one, as a team from another conference must necessarily be in the fifth position - that's why you get the 28 terms cancelling, and why this initial value is equal to what you got in your first calculation), and multiply it by 5. It is not a coincidence that 899 = 29*31. You multiplied 29 and 31 in the denominator, they didn't cancel with anything else, so they stayed there. The 28 in the denominator cancelled with the 28 in the numerator, and 30 cancelled with 2, 3, and 5.
- The probability of "one conference being in the lowest x positions" should always be C(x, 4)/4495, because the probability of the other (x−4) teams being any other combination of teams is always equal to 1 (because it must always be some combination of teams). So for 7, it is C(7, 4)/4495 = 35/4495 = 7/899 ≈ 0.0078.
- You should note, however, that this does not remotely reflect the actual probability of the NFC West all being in the lowest 7 ranked teams. A better calculation would involve prior probabilities or something like that. It is more likely that, rather than being particularly unlucky, the teams in the NFC West are not very good. --superioridad (discusión) 21:53, 7 December 2010 (UTC)
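Following the thread's reasoning, the general answer for four division-mates landing in the lowest x positions works out to C(x, 4)/4495 (exact for x ≤ 7, since then at most one division can fit entirely in the lowest x, so the eight per-division events are mutually exclusive). A short sketch verifying it, with a Monte Carlo sanity check (team/division labels are made up for the simulation):

```python
from fractions import Fraction
from math import comb
import random

def p_lowest(x):
    """Chance that some division's four teams all rank in the lowest x."""
    return Fraction(comb(x, 4), 4495)      # exact for x <= 7

def simulate(x, trials=100_000, seed=1):
    """Monte Carlo estimate: shuffle 32 teams in 8 labelled divisions."""
    rng = random.Random(seed)
    teams = [d for d in range(8) for _ in range(4)]   # division labels
    hits = 0
    for _ in range(trials):
        rng.shuffle(teams)
        lowest = teams[:x]                 # the x lowest-ranked teams
        if any(lowest.count(d) == 4 for d in range(8)):
            hits += 1
    return hits / trials

assert p_lowest(4) == Fraction(1, 4495)
assert p_lowest(5) == Fraction(1, 899)
assert p_lowest(7) == Fraction(7, 899)
```

Running simulate(7) should land near float(p_lowest(7)), about 0.0078; for x ≥ 8 the closed form would need an inclusion-exclusion correction, though the overlap terms are tiny.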
Ah, I get that. I knew it would have to be x choose y or x permutation y, but frankly, my college-level combinatorics teacher was bad at his job, so I didn't learn much. :)
As for your last statement, I wasn't saying they were unlucky, I was saying they're not good. However, it's unlucky that the teams that aren't good all happen to be in the same division. Magog the Ogre (talk) 22:15, 7 December 2010 (UTC)