Wikipedia:Reference desk/Archives/Mathematics/2009 February 9
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
February 9
Solenoidal Fields, Curl and Div
Sorry to bombard the forum with vector calc questions today, but I'm ill and stubbornly refusing to give up on my work, which seems to be a bad combination. I'd really appreciate it if someone could just point me in the right direction to start this question off:
Consider \(\mathbf{A}(\mathbf{x}) = \int_0^1 \mathbf{B}(\mathbf{x}t)\times\mathbf{x}\,t\,dt\).
I'm meant to show that curl(A) = B if div(B) = 0 everywhere - as far as I'm aware from my reading, this would make B a solenoidal field with a vector potential of A, right? I'm not sure how I make the jump (if that's true) from the integral to \(\nabla\times\mathbf{A}=\mathbf{B}\) though - can anyone give me a little help in the right direction? I've been told there's an identity to use, but I'm unaware as to how that helps me, sigh!
Thanks very much for the help, Spamalert101 (talk) 00:02, 9 February 2009 (UTC)Spamalert
Right, I've been working on this a little more and figured I'd write up what I've got so far in the hopes someone might be able to give me a hint or point out a mistake:
\(\nabla\times\mathbf{A}(\mathbf{x}) = \nabla\times\int_0^1 \mathbf{B}(\mathbf{x}t)\times\mathbf{x}\,t\,dt = \nabla\times\int_0^1 \mathbf{B}(\mathbf{X})\times\mathbf{X}\,dt\) (subbing in X=xt)
\(= \int_0^1 \nabla\times\big(\mathbf{B}(\mathbf{X})\times\mathbf{X}\big)\,dt\) (I think you can take the curl under the integral - is this not doable? Would be nice to know why not if so)
\(= \int_0^1 t\,\big[\mathbf{B}(\mathbf{x}t)\,(\nabla\cdot\mathbf{x}) - \mathbf{x}\,\big(\nabla\cdot\mathbf{B}(\mathbf{x}t)\big) + (\mathbf{x}\cdot\nabla)\,\mathbf{B}(\mathbf{x}t) - \big(\mathbf{B}(\mathbf{x}t)\cdot\nabla\big)\,\mathbf{x}\big]\,dt\) (Using the expansion for \(\nabla\times(\mathbf{F}\times\mathbf{G})\) and \(\nabla\cdot\mathbf{x}=3\))
\(= \int_0^1 t\,\big[2\,\mathbf{B}(\mathbf{x}t) + (\mathbf{x}\cdot\nabla)\,\mathbf{B}(\mathbf{x}t)\big]\,dt\) (since \(\nabla\cdot\mathbf{B}=0\) I hope?)
\(= \int_0^1 \big[2t\,\mathbf{B}(\mathbf{x}t) + t^2\,\mathbf{x}\cdot(\nabla\mathbf{B})(\mathbf{x}t)\big]\,dt\) - and then could I integrate this by parts? What next?
I'm not sure whether this was the right direction but it was the only thing I could see to do... Thanks for any help, Spamalert101 (talk) 05:36, 10 February 2009 (UTC)Spamalert101
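A sketch of how the last integral above closes up, assuming the potential is the standard one, \(\mathbf{A}(\mathbf{x})=\int_0^1 \mathbf{B}(\mathbf{x}t)\times\mathbf{x}\,t\,dt\), and that the curl may be taken under the integral (legitimate when B is continuously differentiable): no integration by parts is needed, because the integrand is a total derivative in t.

    \nabla\times\mathbf{A}(\mathbf{x})
      \;=\; \int_0^1 \Big( 2t\,\mathbf{B}(\mathbf{x}t) + t^{2}\,\mathbf{x}\cdot(\nabla\mathbf{B})(\mathbf{x}t) \Big)\,dt
      \;=\; \int_0^1 \frac{d}{dt}\Big[ t^{2}\,\mathbf{B}(\mathbf{x}t) \Big]\,dt
      \;=\; \Big[ t^{2}\,\mathbf{B}(\mathbf{x}t) \Big]_{t=0}^{t=1}
      \;=\; \mathbf{B}(\mathbf{x}).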
Master Theorem
Regarding the master theorem, for case 2, i.e. where \(f(n) = \Theta(n^{\log_b a})\), the recurrence solves to \(T(n) = \Theta(n^{\log_b a}\log n)\) - does that logarithm have a base? Copysan (talk) 04:36, 9 February 2009 (UTC)
- The base doesn't matter in that case. It is big O notation so multiplying by a constant factor can be ignored. Changing logarithm base corresponds to multiplying by a constant. PrimeHunter (talk) 04:54, 9 February 2009 (UTC)
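Concretely, the change-of-base formula is a one-line check that any fixed base lands in the same \(\Theta\)-class:

    \log_b n = \frac{\ln n}{\ln b}
    \quad\Longrightarrow\quad
    \Theta\!\left(n^{\log_b a}\,\log_b n\right)
      = \Theta\!\left(\tfrac{1}{\ln b}\,n^{\log_b a}\,\ln n\right)
      = \Theta\!\left(n^{\log_b a}\,\ln n\right).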
is there a proof that no "betting system" can affect the expected return
Is there a mathematical proof that no "betting system" could offset losses in a memoryless game offering percentages all in favor of the house? —Preceding unsigned comment added by 82.120.236.246 (talk) 11:11, 9 February 2009 (UTC)
- Yes. The expectation of the sum of random variables is the sum of the expectations, and a sum of negative numbers is negative. Since it is memoryless there is nothing you can do to increase the future expectations using knowledge of past results, so the expectations will always be negative. --Tango (talk) 11:29, 9 February 2009 (UTC)
- Sorry, this is not rigorous enough for me, since it does not address the reason "betting systems" cannot leverage betting differences: betting systems work by increasing and decreasing the size of bets in response to winning and losing streaks. Intuitively, this should not be possible if the game is memoryless. But that is not a proof. (Your argument is good if you don't have a chance to affect the size of the bet). Is there a proof that leveraging can in no way affect the expected return?
- By the way I have a proof that it is possible to make any amount of money if you have an infinite bankroll, without affecting your infinite bankroll in any way, and I want to know if it is mathematically sound. Let's say you want to make $1 billion, and have an infinite bankroll to help you do it (there will be no net effect on this bankroll). My method is to open a bank account (for the winnings from this method), write yourself a check for $5 billion drawn on your infinite bankroll, and deposit it. When the check clears, your infinite bankroll will not be affected in any way, but your new bank account will be $5b richer for it. Is my reasoning true and correct? Thank you! —Preceding unsigned comment added by 82.120.236.246 (talk) 12:02, 9 February 2009 (UTC)
- Yes, but why would you bother giving yourself five billion dollars if you already had an infinite bankroll? Algebraist 13:02, 9 February 2009 (UTC)
- Because you don't want to keep using your infinite bankroll? Or you are just using someone else's, and don't want them to notice on their balance (which should remain infinite)? Honestly, I don't know: I just know that a lot of sites actually say that the martingale betting system is great, because it works with probability 1 if you have an infinite bankroll! So, I wonder if my system would be a good response to these people, since it doesn't even involve handling bets, etc. So it's simpler. Any thoughts? —Preceding unsigned comment added by 82.120.236.246 (talk) 13:17, 9 February 2009 (UTC)
- Yes, if someone still believes in the Martingale system even after it has been pointed out that you need an infinite bankroll for it to work, then I guess your example might make its flaw more obvious. —JAO • T • C 13:38, 9 February 2009 (UTC)
- Systems like the one you describe require a very long time to recoup losses and so ignore an important cost -- the time value of money. For example, suppose prevailing interest rates are 5%. When you withdraw the $1 billion, you immediately begin incurring an opportunity cost of $5,700 per hour (the interest you would have earned had you not withdrawn the money). So, your system has to generate an income of $5,700 per hour *in addition* to the income it must generate to overcome gambling losses. Wikiant (talk) 13:26, 9 February 2009 (UTC)
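That hourly figure is 5% annual interest on $1 billion spread over the roughly 8,760 hours in a year (a back-of-envelope check, ignoring compounding):

    \frac{0.05 \times \$10^{9}}{365 \times 24} \;\approx\; \$5{,}700 \text{ per hour}.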
- Assuming that it's possible to hold an infinite amount of money in an account, and assuming that the bank still promises a 5% interest, I don't see any loss here. If you had \(\infty\) in the account, you'll now get \(0.05\cdot\infty\) each hour, but that's still \(\infty\). —JAO • T • C 13:57, 9 February 2009 (UTC)
- What you are saying is that if you have an infinite amount of money you can spend a finite amount of money and still have the same amount of money left. That's basically the definition of infinity. As for my proof, I think it is rigorous. Increasing the size of your bet just multiplies the expectation by a positive number, a negative expectation times a positive number is still negative. You bet more, you just lose more. The Martingale system is well known and only works if you have an infinite bankroll; if you don't (and if we're talking about the real world, you don't) then your expectation is still negative (and actually works out to exactly the same expectation as just betting your whole bankroll in one go, I believe). --Tango (talk) 13:41, 9 February 2009 (UTC)
- Thank you, what you just added ("Increasing the size of your bet just multiplies the expectation by a positive number, a negative expectation times a positive number is still negative. You bet more, you just lose more.") is what I was looking for. I guess I'm not used to thinking rigorously, which is why I could ask for that step but not come up with it myself. This answers my question. Resolved
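One way to make the step about varying bet sizes fully rigorous, sketched here with notation that is not from the thread (\(X_n\) is the outcome of a one-unit bet, \(c_n\) the stake actually wagered): if the stake for round n can depend only on the outcomes of earlier rounds, independence lets the expectations factor, so no betting system can lift the overall expectation above zero.

    % X_1, X_2, \dots are i.i.d. with \mathbb{E}[X_n] = \mu < 0 (the house edge);
    % c_n \ge 0 is any function of X_1, \dots, X_{n-1} (the "betting system").
    \mathbb{E}\!\left[\sum_{n=1}^{N} c_n X_n\right]
      = \sum_{n=1}^{N} \mathbb{E}[c_n]\,\mathbb{E}[X_n]
      = \mu \sum_{n=1}^{N} \mathbb{E}[c_n] \;\le\; 0.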
- The fact that the expectation is negative isn't quite enough, though. It doesn't exclude the possibility that some betting scheme could allow you to win large amounts of money with high probability, it just means there would have to be a concomitant small chance of losing very very large amounts of money, as in the St. Petersburg paradox. Algebraist 13:49, 9 February 2009 (UTC)
- True, and if you factor in the diminishing marginal utility of money you could theoretically end up with a positive expectation from something like that. Is there a proof that such a system is impossible? --Tango (talk) 14:01, 9 February 2009 (UTC)
- Well, that's pretty much what the martingale system gives you: if you're willing to risk a huge enough bankroll, you can win as much money as you want with a probability as close to 1 as you want. I think you can get some kind of useful theorem if you impose a condition equivalent to the fact that casinos have maximum bets, but that's based on a vague memory of a book I glanced at several years ago. Algebraist 14:08, 9 February 2009 (UTC)
- Martingale only allows you to win the size of your initial bet. I guess you can make that arbitrarily large, but it requires an even larger bankroll to get your probability of winning to whatever level you require (it rapidly increases to unrealistic levels: if you want to win $1 betting on red on a double-zero roulette table with a probability of 99%, you need $128 [assuming I can use a calculator]; that obviously scales with an increased desired win, and if you want 99.9% it becomes $1024, and so on). If you factor in the house limit, that puts a limit on how many times you can bet, so it should be pretty easy to calculate your expectations. Incidentally, I don't think what I said about the diminishing marginal utility of money applies, at least not directly; the fact that you can declare bankruptcy and limit your losses could give you a positive expectation, though. --Tango (talk) 14:28, 9 February 2009 (UTC)
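As a rough check of those figures, here is an illustrative sketch assuming a double-zero wheel (win probability 18/38), a $1 starting bet, and doubling after every loss; the helper name martingale_requirements is made up for this example:

    # Sketch: bankroll needed for a martingale (double-after-loss) strategy to win
    # its initial $1 stake with at least the target probability, assuming a
    # double-zero roulette wheel (win probability 18/38 per spin).
    p_loss = 20 / 38  # probability of losing a single even-money bet

    def martingale_requirements(target_success):
        """Return (number of bets, bankroll) so that the chance of losing
        every bet in the sequence is at most 1 - target_success."""
        n = 1
        while p_loss ** n > 1 - target_success:
            n += 1
        bankroll = 2 ** n - 1  # bets of 1, 2, 4, ..., 2**(n-1)
        return n, bankroll

    for target in (0.99, 0.999):
        n, bankroll = martingale_requirements(target)
        print(f"{target:.1%} success: {n} doubling bets, bankroll ${bankroll}")
        # For 99% this gives 8 bets and $255; Tango's back-of-envelope $128
        # corresponds to 7 doubling bets, which succeeds about 98.9% of the time.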
- The result I was trying to remember is the Optional stopping theorem. It says that a gambler in a fair casino with a finite lifespan and a house limit will, on average, leave with as much money as he came in, regardless of his strategy. It still doesn't say anything that isn't about expectation, though. Algebraist 01:25, 10 February 2009 (UTC)
- If memory serves, that theorem can be extended to unfair casinos as well - your expected bankroll when you leave is the same as your expected bankroll had you just bet everything on the first spin/toss/deal/whatever. I don't remember the proof, if I ever saw one, though. --Tango (talk) 01:33, 10 February 2009 (UTC)
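The form of the statement that also covers an unfair casino, sketched in notation introduced here rather than taken from the article: if the bankroll process is a supermartingale and play must stop by some fixed time, then no stopping rule raises the expected bankroll.

    % (M_n): bankroll after n rounds, with \mathbb{E}[M_{n+1} \mid M_0,\dots,M_n] \le M_n
    % (each round has non-positive expected gain); \tau: a stopping time with \tau \le N.
    \mathbb{E}[M_\tau] \;\le\; \mathbb{E}[M_0],
    \qquad\text{with equality when } (M_n) \text{ is a martingale (a fair casino).}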
Given that set A = {1,2,{3,4}}, is 3 a member of A?
Is {3} also a subset of A? 3 itself is probably not a subset of A, right?
Sorry for asking such an elementary question, but Wikipedia isn't making this very clear, especially the distinction between elements and subsets. Could you update your articles? They are alarmingly sparse. 137.54.10.188 (talk) 20:15, 9 February 2009 (UTC)
- The notation 'A={B,C,D}' means that A is the set whose elements are B, C and D. The statement 'A is a subset of B' means that every element of A is also an element of B. Does that answer your questions? Algebraist 20:22, 9 February 2009 (UTC)
- Yes, but is {3} an element of the set {1,2,{3,4}}? That is, what about nested set membership? Because I'm just trying to sort this out from the idea that {3} != 3. So is {1,2} a subset of {11,4,{1,2,3}}? Or of {11,4,{5,6,{1,2}}}? Or of {11,4,{5,6,{1,2,3}}}? And so if A = {11,4,{5,6,{1,2,3}}}, is 3 an element of A? Please help! 137.54.10.188 (talk) 20:28, 9 February 2009 (UTC)
- The elements of {11,4,{1,2,3}} are 11, 4 and {1, 2, 3}. Thus 1 and 2 are not elements of {11,4,{1,2,3}}, so {1, 2} is not a subset. Algebraist 20:30, 9 February 2009 (UTC)
- I am slightly confused because I read somewhere that A can belong to B, and B can belong to C, but it's possible to have A not belong to C. But is {A} a subset of C? What's the difference between membership and subset inclusion in nested cases?? 137.54.10.188 (talk) 20:32, 9 February 2009 (UTC)
- Consider the set {{A}}. It has only one element, namely {A}. In turn, {A} has only one element, namely A. Thus A is not an element of {{A}}, and so {A} is not a subset of {{A}} (since by the definition of subsethood, {A} is a subset of B exactly if A is an element of B). Thus {A} is an element of {{A}} but not a subset. Algebraist 20:38, 9 February 2009 (UTC)
- But are 1 and 2 members of the set {11,4,{1,2,3}}? The thing is I'm trying to sort out BOTH set membership and subset inclusion simultaneously. 137.54.10.188 (talk) 20:34, 9 February 2009 (UTC)
- As I said above, the elements (aka members) of {11,4,{1,2,3}} are 11, 4 and {1,2,3}. That's just what the notation means. Are any of these 1 or 2? Algebraist 20:38, 9 February 2009 (UTC)
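If it helps to experiment, the element/subset distinction can be made concrete with Python's frozenset (an illustrative sketch; frozenset is used because ordinary Python sets are unhashable and so cannot themselves be elements of a set):

    # The set A = {1, 2, {3, 4}} from the question, using frozenset so that a
    # set can itself be an element of another set.
    inner = frozenset({3, 4})
    A = frozenset({1, 2, inner})

    print(3 in A)                  # False: the elements of A are 1, 2 and {3, 4}
    print(inner in A)              # True:  {3, 4} is an element of A
    print(frozenset({3}) <= A)     # False: {3} is a subset of A only if 3 is an element of A
    print(frozenset({1, 2}) <= A)  # True:  1 and 2 are both elements of A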
- However, in one way of looking at it, 3 is an element of {{0,1,2},{3,4}}. (This is sort of a pun.) --Trovatore (talk) 20:46, 9 February 2009 (UTC)
- True, but horrible. It's probably best to treat numbers as urelemente for the purposes of basic set theory of this sort. Algebraist 20:48, 9 February 2009 (UTC)
- Aha, so all we have to do is expand the definition of {1,2,{3,4}} as {{∅}, {∅, {∅}}, {{∅, {∅}, {∅, {∅}}}, {∅, {∅}, {∅, {∅}}, {∅, {∅}, {∅, {∅}}}}}}, of which 3 = {∅, {∅}, {∅, {∅}}} clearly is not an element… I think, but I can't really see because my eyes are watering. —Bromskloss (talk) 00:52, 14 February 2009 (UTC)
Divisibility
Here's an interesting problem I came across: If a^2 + b^2 is divisible by 3, prove that ab is divisible by 3. This actually isn't a homework problem. I've given it some thought, but I can't think of a good way to approach this. But I tried to come up with some numbers a and b which actually satisfy the first condition, and all the numbers I found were divisible by three. And if either a or b is divisible by 3, then ab will be divisible by 3. But I can't find any way to prove that either a or b must be divisible by 3. I've tried a bunch of algebraic manipulations, but none of them have gotten me anywhere. Could you help me? —Preceding unsigned comment added by 70.52.46.213 (talk) 23:33, 9 February 2009 (UTC)
- Our article on modular arithmetic gives background. Briefly, if \(a^2 + b^2 \equiv 0 \pmod 3\), then your apparent options are \(a^2 \equiv b^2 \equiv 0 \pmod 3\) or \(\{a^2 \bmod 3,\ b^2 \bmod 3\} = \{1, 2\}\). You can readily check that the second option is not possible, and if \(a^2 \equiv b^2 \equiv 0 \pmod 3\) that means \(3 \mid a\) and \(3 \mid b\), so \(3 \mid ab\). Ray (talk) 23:41, 9 February 2009 (UTC)
- A hint for that "readily check" (which could prove rather tricky if you've never seen anything like it before) - check which integers mod 3 are perfect squares. --Tango (talk) 01:37, 10 February 2009 (UTC)
- If you are not familiar with modular arithmetic then think of it like this: a must be of the form 3n or 3n+1 or 3n+2 where n is an integer. a^2 must be of the form 3m or 3m+1 or 3m+2, but can you say which of the three based on the form of a? Similar for b and finally for a^2+b^2. PrimeHunter (talk) 02:00, 10 February 2009 (UTC)
- In fact, we can go further and say that if a^2 + b^2 is divisible by 3 (with a and b integers) then ab is divisible by 9. Gandalf61 (talk) 07:06, 10 February 2009 (UTC)
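Putting the replies together, a compact write-up of the argument in standard congruence notation (the three expansions in the comment are the cases PrimeHunter suggests checking):

    % Squares modulo 3 are only ever 0 or 1:
    %   (3n)^2 = 3(3n^2),\quad (3n+1)^2 = 3(3n^2+2n)+1,\quad (3n+2)^2 = 3(3n^2+4n+1)+1.
    a^2 + b^2 \equiv 0 \pmod 3
      \;\Longrightarrow\; a^2 \equiv b^2 \equiv 0 \pmod 3
      \;\Longrightarrow\; 3 \mid a \text{ and } 3 \mid b
      \;\Longrightarrow\; 9 \mid ab.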