Wikipedia:Reference desk/Archives/Mathematics/2009 November 8
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
November 8
Equation Help
Hi - I need to solve this equation
a(dv/dy) = b + c(dv^2/dy^2), where a, b and c are constants
I really don't know where to begin with this - I've tried using integrating factors, but still can't do it. Cuban Cigar (talk) 03:17, 8 November 2009 (UTC)
- You wish to solve a(dv/dy) = b + c(d^2v/dy^2), or, if we re-write, c(d^2v/dy^2) - a(dv/dy) + b = 0. Observe that this is a homogeneous linear equation with constant coefficients: cv'' - av' + b = 0. This site should help you obtain the solution. Hope this helps. --PST 03:46, 8 November 2009 (UTC)
- Sorry PST, the equation cv'' - av' = 0 is homogeneous, but cv'' - av' + b = 0 is not. Bo Jacoby (talk) 14:33, 8 November 2009 (UTC).
- Well the transformation v=z+by/a should cure that. Dmcq (talk) 18:06, 8 November 2009 (UTC)
I've always liked to use the Laplace transform for problems like this. You start with a differential equation and then turn it, via the Laplace transform, into an algebraic equation. You solve that algebraic equation and then turn the solution, via the inverse Laplace transform, into the solution of the original differential equation. It's a bit hard at first, but once you begin to recognise a few objects and learn a few techniques it'll all get much easier. I got (after a couple of not-quite-right solutions):
where v(0) and v'(0) are the initial conditions for the ODE. If you don't have initial conditions then replace them with arbitrary constants. ~~ Dr Dec (Talk) ~~ 19:15, 8 November 2009 (UTC)
- Sorry Dr Dec, this doesn't take the initial data v(0) and v'(0). --pma (talk) 22:40, 9 November 2009 (UTC)
- Huh? What "doesn't take the initial data v(0) and v'(0)"? ~~ Dr Dec (Talk) ~~ 19:42, 10 November 2009 (UTC)
- I'm just saying that the expression you wrote on the RHS, when computed for y=0, doesn't assume the value that you denoted v(0). Maybe there's a typo somewhere. --pma (talk) 20:03, 10 November 2009 (UTC)
- You're quite right! I missed a factor of c. The solution should have been:
<math>v(y) = v(0) + \frac{c}{a}\left(v'(0) - \frac{b}{a}\right)\left(e^{ay/c} - 1\right) + \frac{b}{a}\,y</math>
- ~~ Dr Dec (Talk) ~~ 23:35, 11 November 2009 (UTC)
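A quick check of that closed form (a sketch with sympy, not Dr Dec's actual working; v0 and v1 stand for the initial data v(0) and v'(0)):
<syntaxhighlight lang="python">
# Sketch: verify that the formula above satisfies c*v'' - a*v' + b = 0
# together with v(0) = v0 and v'(0) = v1.
import sympy as sp

y = sp.symbols('y')
a, b, c, v0, v1 = sp.symbols('a b c v0 v1', nonzero=True)

v = v0 + (c/a)*(v1 - b/a)*(sp.exp(a*y/c) - 1) + (b/a)*y

print(sp.simplify(c*sp.diff(v, y, 2) - a*sp.diff(v, y) + b))   # 0: the ODE holds
print(sp.simplify(v.subs(y, 0)))                               # v0
print(sp.simplify(sp.diff(v, y).subs(y, 0)))                   # v1
</syntaxhighlight>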
Someone was telling me that I could use an integrating factor for this, but I'm not given a value for the constants, so I'm not sure what the value of the discriminant would be. Does anyone know how to do it using the integrating factor method? Cuban Cigar (talk) 06:06, 9 November 2009 (UTC)
- Did you try using that substitution I gave above? The answer should be fairly obvious after doing that. You don't need integrating factors or anything like that. Dmcq (talk) 13:05, 9 November 2009 (UTC)
- Btw, ehm, what are z and y in your substitution? --pma (talk) 22:25, 9 November 2009 (UTC)
- I used y instead of x because that's what the original poster used. The w is the new variable instead of v. The solution below has a w that is my w differentiated unfortunately but putting something in instead of w' would be the next step. Dmcq (talk) 00:06, 10 November 2009 (UTC)
- OP: follow Dmcq's hint! Then, if you also want a bit of theory for approaching similar problems, remember that the general solution of a non-homogeneous linear equation (here, your differential equation cv'' - av' + b = 0) is the sum of (I) the general solution of the associated homogeneous equation (here cv'' - av' = 0) plus (II) a particular solution of the non-homogeneous equation. Now you have two sub-problems, though easier ones.
- (I) cv'' - av' = 0 is really easy. Call v' = w. Then you want w' = (a/c)w. Write the well-known general solution w, and then get v as a (general) antiderivative of w. You will have 2 arbitrary constants (as they call 'em) in the general solution, as it has to be.
- (II) a particular solution of cv'' - av' + b = 0. Although there is a general method for this, often a particular solution can be found at a glance by looking for solutions of a special, simple form. Here the hint is: try a linear v, that is, of the form v(y) = ky for some suitable constant k. Plug it into the equation and choose the constant k that makes ky a solution.--pma (talk) 22:10, 9 November 2009 (UTC)
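Carrying out pma's two steps explicitly (a sketch in the thread's notation; A and B denote the two arbitrary constants):
<math>
\begin{align}
\text{(I)}\quad & cv'' - av' = 0,\ \ w = v' \;\Rightarrow\; w' = \tfrac{a}{c}\,w \;\Rightarrow\; w = C_1 e^{ay/c} \;\Rightarrow\; v_h(y) = A e^{ay/c} + B,\\
\text{(II)}\quad & v_p(y) = ky:\quad c\cdot 0 - ak + b = 0 \;\Rightarrow\; k = \tfrac{b}{a},\\
& \text{so the general solution is } v(y) = A e^{ay/c} + B + \tfrac{b}{a}\,y.
\end{align}
</math>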
Ah yes, I got it now. Thanks all for your help =) Cuban Cigar (talk) 10:35, 10 November 2009 (UTC)
An alternative method - Let u = dv/dy, then au = b + c(du/dy) and you have a first-order ODE to solve, rather than a second-order one. Once you have found u, integrate to get v. Readro (talk) 11:03, 10 November 2009 (UTC)
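A symbolic sketch of this reduction (with sympy; not part of Readro's post, and u stands for dv/dy):
<syntaxhighlight lang="python">
# Sketch: solve the first-order equation a*u = b + c*u' for u = dv/dy,
# then integrate once to recover v (up to an additive constant).
import sympy as sp

y = sp.symbols('y')
a, b, c = sp.symbols('a b c', nonzero=True)
u = sp.Function('u')

first_order = sp.Eq(a*u(y), b + c*sp.Derivative(u(y), y))
u_sol = sp.dsolve(first_order, u(y)).rhs   # C1*exp(a*y/c) + b/a
v_sol = sp.integrate(u_sol, y)             # (c/a)*C1*exp(a*y/c) + b*y/a
print(u_sol)
print(v_sol)
</syntaxhighlight>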
The original equation should have been a(dv/dx) = b + c(d^2v/dx^2). If we're going to criticise posters for lack of precision/rigour (see the question above on the 10th terms of some sequences, for example), then let's use the right notation. Actually, I think that some of the real mathematicians here are far too critical of naive (in their terms) posters, making it unlikely that they'll come back. Ego trip, anyone?→86.132.164.242 (talk) 13:20, 10 November 2009 (UTC)
Hi, 86... Here we are mainly virtual mathematicians. Too critical: you're possibly right, but (my) problem is I can't see any criticism towards the posters... could you explain better? The notation for the derivative is v', why? And Dr Dec used the variable y just because the OP used y too in his/her first post. --pma (talk) 15:54, 10 November 2009 (UTC)
- Why does the variable have to be x? Surely the labelling of a variable is unimportant. I could replace y with a heart and still have the same ODE:
<math>a\frac{dv}{d\heartsuit} = b + c\frac{d^2 v}{d\heartsuit^2}</math>
- ~~ Dr Dec (Talk) ~~ 19:47, 10 November 2009 (UTC)
- even nicer! --84.221.208.19 (talk) 20:44, 10 November 2009 (UTC)
- IT MUST BE X!!! , um sorry, I've come across a lot of that recently and it must be rubbing off. The hearts are nice. Very mellow. :-) Dmcq (talk) 22:51, 11 November 2009 (UTC)
Which school are you?
I've been reading sections on mathematical logic on this page, and I wonder what the current consensus is. Be aware that I have no formal background in mathematical logic.
I tend to prefer Poincaré's style, where mathematics is about imagining weird spaces and having fun, satisfying our intuition, instead of the "barren landscape of logicism". I'm not exactly an intuitionist. I don't have a preference for any particular school, and find right and wrong in every school. I'm guessing most people are Platonists? This is not surprising, since when we are thinking we visualize pictures and pretend the objects in question really exist in some other dimension. Najor Melson (talk) 03:28, 8 November 2009 (UTC)
- I went through a maths degree without really hearing about these different schools. I've only heard of them on Wikipedia. I think most mathematicians don't care about them, they just get on with it. It's only logicians that get involved with this kind of thing and they are just one small group of mathematicians. --Tango (talk) 03:56, 8 November 2009 (UTC)
- Although I agree with the above comment, I feel that its truth is a rather unfortunate fact of life. Firstly, logic and axiomatic set theory are interesting in their own right - mathematicians who claim that "they are unimportant" (these mathematicians are around, unfortunately) are ignorant people who do not appreciate the work of others. Furthermore, I am sure that it is beyond the capacity of some of these mathematicians to understand these branches (logic and intuition are different sorts of thinking in some respects). Secondly, logic is a fundamental aspect of mathematics - it can be used not only to clarify intuition, but also to develop insights which cannot be determined by mere intuition. Succinctly, as mathematicians, we should respect the work of others even if we feel that it is "not worthwhile"; there are some snobs out there who do not know anything about axiomatic set theory/logic, but still claim that it is "not important". That said, Tango is right that there are few logicians around; my point is that this does not make their work any less important than that of other mathematicians. --PST 06:48, 8 November 2009 (UTC)
Yes, I agree with PST. Logic is the far opposite of intuition; I used to have a BIG problem with Cantor's concept of cardinality, but now it's like my left hand. I find intuition fails badly especially in topology (or maybe that's because topology is the only field I'm familiar with - I'm a learner, not a professor), and many times reasoning with pure intuition leads to contradictions, so like you said we need logic to develop our theory further. However, I believe that to most logicians proofs should be absolutely formal, and to me that's not only 1000+ pages long but also a "barren landscape". I think they'll sacrifice everything to make sure mathematics is contradiction free. Strangely enough I was planning to be a string theorist, and still am, but somehow got into mathematics and it just won't let go! Najor Melson (talk) 12:18, 8 November 2009 (UTC)
- I don't know any logicians who, in practice, think that "proofs should be absolutely formal". It is obvious to everyone that that path is not open to human mathematicians. There are a lot who think in principle that all our informal proofs should be sort of blueprints for formal proofs, but that's a bit different. Then there's the automated theorem proving community, but again that's not really what we're talking about.
- In practice, logicians use intuition as much as anyone. Of course it's an intuition you develop; the intuitions that you have after learning a subject are different from the ones you have at the start. --Trovatore (talk) 07:40, 9 November 2009 (UTC)
Maybe the original question has more to do with the philosophy of mathematics, rather than logic? (Igny (talk) 03:44, 10 November 2009 (UTC))
- I just happened to see a blog post by Philip Wadler about how computers have led to "computational thinking" as an important mode of thought. It links to several articles related to the topic. Certainly, programming and algorithms are endlessly fascinating to some people, and are the subject of plenty of intuition. The Curry-Howard correspondence explains how computer programs and mathematical proofs are in some sense the same thing. Every teenager who stays up all night getting their wikipedia bot to work is experiencing the thrill of proof and logic, even if they don't realize it. 69.228.171.150 (talk) 02:23, 11 November 2009 (UTC)
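A toy illustration of the Curry-Howard idea mentioned just above (a sketch, not from the thread): a function of a given type can be read as a proof of the implication that its type expresses.
<syntaxhighlight lang="python">
# Sketch: propositions as types, proofs as programs.
# A value of type Callable[[A], B] is read as a proof of "A implies B".
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def modus_ponens(f: Callable[[A], B], a: A) -> B:
    """From proofs of "A implies B" and of A, obtain a proof of B."""
    return f(a)

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """Transitivity of implication: (A -> B) and (B -> C) give (A -> C)."""
    return lambda x: g(f(x))

def proj1(pair: tuple[A, B]) -> A:
    """Conjunction elimination: from a proof of "A and B", obtain a proof of A."""
    return pair[0]
</syntaxhighlight>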
Generalization of Fermat's little theorem
What is the generalization of Fermat's little theorem for finite fields as referenced here? Thanks--Shahab (talk) 06:27, 8 November 2009 (UTC)
- They mention it here Finite_field#Analog_of_Fermat.27s_little_theorem. Rckrone (talk) 08:08, 8 November 2009 (UTC)
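For reference, the analog in question is usually stated as follows (Fermat's little theorem is the special case q = p prime):
<math>
\text{If } \mathbb{F}_q \text{ is a finite field with } q \text{ elements, then } a^q = a \text{ for every } a \in \mathbb{F}_q; \text{ equivalently, } a^{q-1} = 1 \text{ for all } a \neq 0, \text{ since the multiplicative group } \mathbb{F}_q^{\times} \text{ has order } q-1.
</math>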
i dunno, but in my math movie Fermat unveils it with: "say hello to my leetle friend." —Preceding unsigned comment added by 92.230.70.105 (talk) 11:48, 8 November 2009 (UTC)
a trick to make a positive-expectation, but in actuality no risk of having to pay out, lotto?
The reason mathematicians don't like lotteries is that for every dollar you put in (buying tickets), your expected return is as little as 17 cents. Obviously for them, it would be no problem to play a lottery where for every dollar they put in, they would expect to get $100. As for me, the person who would love to provide this lottery, I would do it on the following terms: the price of a ticket is $1, the chances of winning are 1 in a quadrillion, and the payout is $100 quadrillion. Obviously the governments of the world, collectively, could insure this amount, which is 100,000 trillion, although if anyone won they might have to pay it out over a few hundred years. However, they would have no problem taking this risk: even if every single person on Earth transferred all the cash they had into lottery tickets in this scheme, the risk of having to pay out would still be completely negligible. If even such a negligible risk were too substantial for the governments of the world (if, in addition to cash, everyone were also to transfer EVERY asset, of any kind, into tickets, the risk of payout might reach 1:100), then they could increase the odds to 1 in 100 quadrillion, the payout to $10,000 quadrillion, and the time to pay out to many hundreds of years or more. That would certainly leave them at peace that the money they would receive is "free".

Now, I would be very happy to take a 1 in a quadrillion risk for far less than $1, but the problem is that I am not actually in the same position as all of the world's governments, collectively. I simply can't offer the reward. So, my question: is this scheme, to "rip off" autistic mathematicians, who have no sense of practicality but only mathematical definitions such as "expected return" -- in this case, by making my lotto tickets more attractive than any other investment they can make on those purely mathematical grounds -- salvageable in some way? Is there some trick I can use to make my payout credible, while remaining far too unlikely for me to seriously worry about? I can't think of any tricks of this kind. Maybe there would be some way to take a very large payout, along with an astronomically small risk of it, and somehow divide it using time or some other method, so that in the end the actual cash I have on hand is sufficient for the payout, while the risk of payout remains astronomically small and, at the same time, mathematically, the expected payout remains strongly positive ($100 for every $1 invested)? I am open to any practical solution, including perhaps signing contracts with all the collective governments of the world, who would take the risk of having to pay out over 10,000 years. Hell, it can be a payout over 100,000 years; there are at least SOME mathematicians who would simply not see a practical objection to that, not having learned about utility and so on. As you can see, this whole vein of thought is not completely serious, however I wonder if you have any thoughts on this matter which would make the thing salvageable. Thank you! 92.230.70.105 (talk) 11:29, 8 November 2009 (UTC)
- Actually I think this is a pretty good question to ask a mathematician: "Would you take the risk of having to pay out $100 trillion, with the chances of having to do so being 1 in a trillion, for $20 right now?" For me, that's a free $20 right there. I'm guessing strongly that most mathematicians have a different answer :) —Preceding unsigned comment added by 92.230.70.105 (talk) 11:41, 8 November 2009 (UTC)
- The logic, in general, holds, except for the main flaw that no-one, in whatever position, would play it. There are lots of metrics on which the system falls flat, most notably the timeframe of return, and I doubt anyone would take that risk. They'd be better off trying to fool less intelligent people, for a short while. Seeing other people win is a very important part of why people play lotteries. In any case, it's much more profitable for a government to run a standard one and take the money from that. (Or, run one to cover the costs of a specific project, perhaps an Olympic bid?) - Jarry1250 [Humorous? Discuss.] 11:45, 8 November 2009 (UTC)
- Yes, time-frame is a problem. Do you have any idea for a trick that could overcome this? 92.230.70.105 (talk) 13:04, 8 November 2009 (UTC)
- The huge payout occurs so late that you expect it to be uninteresting, but your eternal soul must pay for ever. See Faust. Bo Jacoby (talk) 14:53, 8 November 2009 (UTC).
- The opening sentence says that mathematicians don't like to play the lottery; that's not true. The reasoning implies that mathematicians don't like to gamble; that's not true either. If these kinds of sweeping statements were true then most doctors (of the medical persuasion) would be teetotal non-smokers. ~~ Dr Dec (Talk) ~~ 15:04, 8 November 2009 (UTC)
- You seem to think I deduced my conclusion about mathematicians from first principles; not at all, it is my first-hand experience. 92.230.64.60 (talk) 18:09, 8 November 2009 (UTC)
- The original poster might or might not know that some mathematicians are aware of subtler concepts than expected value, such as risk (and risk aversion), decision theory, prospect theory and so on, and have been since the times of Daniel Bernoulli at the very least... Goochelaar (talk) 15:30, 8 November 2009 (UTC)
- I believe this strategy was already tried, only on a somewhat smaller scale. Somebody offered lottery tickets with a very small chance that one person might win a billion dollar prize. While they didn't have the billion dollars to cover the risk, they were able to convince an insurance company, Lloyd's of London, if memory serves, to cover the risk. StuRat (talk) 18:15, 8 November 2009 (UTC)
- The diminishing marginal utility of money is significant here. $100 quadrillion isn't really worth significantly more than $100 billion - you can't actually buy anything you couldn't buy with the lesser amount. That means if you work out the expected *utility* rather than the expected money, it is negative. That is why a sensible mathematician would not go for it. --Tango (talk) 22:19, 8 November 2009 (UTC)
- See St. Petersburg paradox#Expected utility theory. By increasing the payoff you can counter any utility argument. (Igny (talk) 03:31, 13 November 2009 (UTC))
- There are some pretty good articles about how to use the Kelly criterion to figure out how big a payout it takes to make buying a lottery ticket worthwhile, based on your already-available bankroll. Basically it's never worth your while unless you're already a millionaire. Try this one and it looks like a web search finds more. 69.228.171.150 (talk) 03:21, 9 November 2009 (UTC)
- That article suffers from a major flaw - it assumes you won't share the jackpot. When there is a large jackpot a lot of people buy tickets, which makes it very likely that the jackpot will be shared. You can't ignore the number of tickets sold in the calculation, even if you are only counting the jackpot and not the smaller prizes. Since the jackpot will probably be shared, the expectation is probably still negative (we had this discussion not long ago and worked out you need on average 5 or 6 rollovers to get positive expectation, if memory serves - that wasn't a very precise calculation, though). With a negative expectation, the correct bet size is always zero (unless you can start your own lottery and thus bet negatively). --Tango (talk) 03:37, 9 November 2009 (UTC)
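A rough sketch of the Kelly calculation for the hypothetical ticket described above (the prize and odds are the original poster's; the code and the second, "typical lottery" numbers are illustrative assumptions, not from the thread):
<syntaxhighlight lang="python">
# Sketch: Kelly fraction for a single-prize lottery ticket.
# f* = (b*p - q) / b, where p = chance of winning, q = 1 - p,
# and b = net payout per dollar staked.  Negative f* means "don't bet".
def kelly_fraction(p_win: float, net_odds: float) -> float:
    q = 1.0 - p_win
    return (net_odds * p_win - q) / net_odds

# The "$1 ticket, 1 in a quadrillion chance, $100 quadrillion prize" scheme:
p = 1e-15
b = 1e17 - 1
print(kelly_fraction(p, b))   # ~ 1e-15: stake about a quadrillionth of your bankroll,
                              # i.e. one $1 ticket is only "worth it" on a ~$1e15 bankroll

# An illustrative real-world lottery with negative expectation: f* < 0, so bet nothing.
print(kelly_fraction(1e-8, 5e6))
</syntaxhighlight>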
Vectors of the same magnitude
Let u and v be two elements in a normed vector space such that ||u|| = ||v||. Is there any nice Latin or Greek word for this condition, as a property of this pair of vectors? "Isometric" does not feel right (I want it to be a property of linear transformations, surfaces etc.). --Andreas Rejbrand (talk) 14:29, 8 November 2009 (UTC)
- But the term isometry already exists and it is a property of a linear transformation. I don't think that there will be a standard word for two vectors of the same magnitude; it's too simple of a property. The vectors u and v would be two radii of a ball of radius ||u|| = ||v||. ~~ Dr Dec (Talk) ~~ 14:56, 8 November 2009 (UTC)
- Yes, I know that, of course (that was what I meant - to me an "isometry" is a kind of linear transformation or a function between two surfaces). OK. --Andreas Rejbrand (talk) 15:06, 8 November 2009 (UTC)
- What exactly are you looking for: a name for two vectors which have the same length in a given space or the name for two vectors whose lengths are kept equal under a given linear transformation? (I don't think there's a special name for the former. As for the latter, well, u and v belong to the same eigenspace.) ~~ Dr Dec (Talk) ~~ 15:16, 8 November 2009 (UTC)
- Exactly what I wrote, i.e. two vectors with the same norm. --Andreas Rejbrand (talk) 15:24, 8 November 2009 (UTC)
- But you said that you "want it to be a property of linear transformations, ...", and by "it" I assumed you meant ||u|| = ||v||. I've un-done your last edit. You shouldn't change previous comments once people have started to reply: everyone will get lost. Either restate your point, or strike the mistakes and re-write using <s>strike</s> to yield struck-through text, and then leave a post to say that you've changed the post after people have replied. If someone has replied while you were writing a reply then leave an (ec) at the start of your post to mean edit conflict. It helps other people follow the thread days, weeks, or even months later. - ~~ Dr Dec (Talk) ~~ 15:38, 8 November 2009 (UTC)
- I said "Isometric does not feel right (I want it to be a property of linear transformations, surfaces etc.)." and by that I mean that that the it would not feel right to call the vectors "isometric", because "isometric" already has a very fundamental meaning in linear algebra, as a isometric linear transformation is a distance-preserving linear transformation (with a determinant of absolute value 1). I am sorry that the text was misinterpreted. I actually am a university-level math teacher, and right now I am teaching linear algebra, so I do know the topic. I just wondered if anyone knew a nice word meaning "of the same length" when it comes to vectors in a normed vector space. --Andreas Rejbrand (talk) 15:49, 8 November 2009 (UTC)
- ...and two surfaces are said to be isometric if there is an isometry between them, which is the case iff they have the same first fundamental form... --Andreas Rejbrand (talk) 15:58, 8 November 2009 (UTC)
- The parenthesis "(I want it..." is before the period of the sentence "Isometric does not...", and this supports my intended interpretation. Also, my intended interpretation makes sense. (Indeed, the word "isometric" is already defined for linear transformations and surfaces.) The other interpretation assumes that I do not know anything about math, and you should not assume that... --Andreas Rejbrand (talk) 16:00, 8 November 2009 (UTC)
- Also I am a sysop at the Swedish Wikipedia, so I know how to edit discussion pages. --Andreas Rejbrand (talk) 15:50, 8 November 2009 (UTC)
- OK, it seems like I overreacted a bit here. I am quite stressed today. --Andreas Rejbrand (talk) 16:19, 8 November 2009 (UTC)
- I never heard of a term for that. Possibly because saying "u and v have the same norm" is usually accepted as reasonably short. If for some reason you need to use it many times, and prefer not to express it by the formula ||u|| = ||v||, there should be an expression borrowed from elementary geometry that would work. (I'm not sure: how does one say that two line-segments have the same length? Congruent, equivalent, isometric? In principle isometric is not wrong, but it may be confusing, as you and Dr Dec remarked). --pma (talk) 15:51, 8 November 2009 (UTC)
- Congruent seems to be correct for line segments of the same length. Use IE or Firefox and take a look at this page. ~~ Dr Dec (Talk) ~~ 17:25, 8 November 2009 (UTC)
- I've never heard of a word, but if I were to invent one I'd go with something like "cometric". "iso-" usually means invariant (which isn't what we want), "co-" (or "con-") means "same" ("concentric" means "same centre", "coincident" means going through the same point, "congruent" means "same shape", etc.). So, "cometric" would mean "same length". --Tango (talk) 18:57, 8 November 2009 (UTC)
- Well indeed the Greek ἴσος means equal, and in this exact meaning it is used in all compounds, including scientific ones. The prefix co- is from the Latin (cum = with) and refers more to being associated with / joined together with, rather than having the same quality (think of cosine, cofactor, complement, but also the ones you quoted). "Cometric" sounds like a mixed Latin-Greek variant of symmetric (σύν = cum). --pma (talk) 19:29, 8 November 2009 (UTC). Why not then the all-Latin version commensurable... oops, it already exists ;-) --pma (talk) 19:36, 8 November 2009 (UTC)
Don't know of a standard term either; "equal length" vectors is the shortest description I can think of. Abecedare (talk) 19:44, 8 November 2009 (UTC)
- The term I have seen is "equipollent", but it is not in common use as, of course, "equal length" is a more readily understandable term for the concept. 71.182.248.124 (talk) 18:50, 9 November 2009 (UTC)
- Yes! Should I have to choose a one-word term for ||u||=||v||, this is definitely the best of all, as it refers to magnitude or intensity in general and not only to length.--pma (talk) 08:34, 10 November 2009 (UTC)
Pythagorean Theorem
Why are there so many proofs of the Pythagorean Theorem? Most other theorems only have one or two distinct proofs, yet there are several hundred proofs of the Pythagorean Theorem. --75.39.192.162 (talk) 18:40, 8 November 2009 (UTC)
- It is a very old, very simple theorem that everyone knows. That's probably the reason. It could also be something inherent about the theorem - it is so fundamental to mathematics that it crops up all over the place and can thus be proven using the tools of various different fields. One additional reason - it has so many proofs already that people think it is fun to come up with more. --Tango (talk) 18:49, 8 November 2009 (UTC)
Proof theory
As another question related to the last: I remember hearing about proof theory or something like that (I think it was at the 2007 BMC in Swansea). It's basically the metamathematics of proof. I've got a few questions to do with that:
- Is there a way to formally say that one proof is different to another?
- If so, then is there a way to assign a cardinality to the number of proofs of any given theorem?
- Finally, pushing the boat right out: is there a way of assigning a metric to the space of proofs of a given theorem?
~~ Dr Dec (Talk) ~~ 19:49, 8 November 2009 (UTC)
- If we interpret the second question as talking about known proofs, then the answers to all three questions are "yes". The real question is whether there is some natural/useful way to do it. I could say that two proofs are different if the MD5 hashes of their TeX code are different and use the Hamming distance on the MD5 hashes to define a metric. It would be a completely useless way to do it, though. If we could prove the total number of possible proofs using some useful equivalence relation that would be a fascinating result, but I'm sceptical that it can be done. --Tango (talk) 20:19, 8 November 2009 (UTC)
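A literal sketch of the construction Tango describes (just to make the point concrete; as he says, it is useless as a notion of proof identity):
<syntaxhighlight lang="python">
# Sketch: the deliberately useless "MD5 + Hamming distance" comparison described above.
import hashlib

def md5_bits(tex_source: str) -> int:
    """128-bit MD5 digest of a proof's TeX source, as an integer."""
    return int(hashlib.md5(tex_source.encode("utf-8")).hexdigest(), 16)

def distance(proof_a: str, proof_b: str) -> int:
    """Hamming distance between the two digests (a pseudometric on TeX sources)."""
    return bin(md5_bits(proof_a) ^ md5_bits(proof_b)).count("1")

# Trivially different TeX for the "same" proof lands far apart:
print(distance(r"$1+1=2$", r"$2 = 1 + 1$"))
</syntaxhighlight>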
- No, my question was deeper than this. A proof should be language invariant, so writing it in Spanish or in English should give the same proof (from a proof theory point of view). Clearly the LaTeX code would be quite different. It's more about the methodology, the mathematical machinery, and the like. Not just the prima facie appearance of the proof. I think that 69.228.171.150's post is closer to what I had in mind. ~~ Dr Dec (Talk) ~~ 22:15, 8 November 2009 (UTC)
- I understood that. I was giving an extreme example to demonstrate how your question was flawed. What you want to know is if it is possible (it certainly is and I gave an example) but if it can be done in some natural and interesting way. --Tango (talk) 02:17, 9 November 2009 (UTC)
The question of proof identity is a vigorous topic in proof theory and there is ongoing effort to clarify the concept, but there is no satisfactory definition right now. This paper describes some of the issues. You might look at this site for some references and info. We have a related stub article, deep inference. 69.228.171.150 (talk) 22:04, 8 November 2009 (UTC)
- Great, thanks! I'll try to make some headway reading your links. Thanks again. ~~ Dr Dec (Talk) ~~ 22:23, 8 November 2009 (UTC)
- Can someone explain this to me: Tango said "If we could prove the total number of possible proofs using some useful equivalence relation that would be a fascinating result". Is this the same as saying we're trying to prove certain properties of proofs themselves, as in a meta-theory? If so will there ever be an end to the succession of meta-theories? I find the apparent circularity very confusing, seeing as proof theory is supposed to prove things about metric spaces and we're now trying to construct a metric space out of proofs. Money is tight (talk) 01:51, 9 November 2009 (UTC)
- Yes, we're talking about metamathematics. I guess you could study metametamathematics if you wanted to, but I expect it would be classified as just part of metamathematics. For the most part, it would be indistinguishable from metamathematics because metamathematics is all about using mathematical techniques to study maths itself. --Tango (talk) 02:17, 9 November 2009 (UTC)
- Proof theory normally doesn't prove things about metric spaces. It proves things about proofs. Gödel's incompleteness theorems are probably the most famous theorems of proof theory. Of course, since (according to proof theory) proofs are mathematical objects, it's reasonable to want to assign a metric topology to them, as Declan was asking, so you can tell if two proofs are the same/almost the same/etc. As the link I gave shows, proof identity is an active research area, but complicated. 69.228.171.150 (talk) 04:25, 9 November 2009 (UTC)
Declan, you might also be interested in Hilbert's 24th problem (also mentioned in the Straßburger link above). 69.228.171.150 (talk) 04:32, 9 November 2009 (UTC)
- Excellent. I'll take a look at this in the morning. Thanks for all of your pointers; I've found them most enlightening. ~~ Dr Dec (Talk) ~~ 22:35, 9 November 2009 (UTC)
- A notion of distance between two proofs A and B could be the minimal length of a proof of "A ⇒ B" plus the minimal length of a proof of "B ⇒ A". This is more or less what we often mean in common speech, when we say that two proofs are "close" to each other. --pma (talk) 04:32, 12 November 2009 (UTC)
Bad assumption?
Given the assumption that an event has a 1% probability, but the measured results show that this event occurred in 2 out of 3 trials, what is the probability that this assumption was correct? StuRat (talk) 20:07, 8 November 2009 (UTC)
- This problem is underdetermined, as we are not given the prior probability of the assumption being correct.
If we have a prior distribution on the probability then it's a simple application of Bayes' formula. - The likelihood of the assumption being correct - which is completely different, but may be of interest nonetheless - is the probability of the observed data under the assumption, 3 × 0.01^2 × 0.99 ≈ 0.0003.
- The beta distribution is also relevant for this kind of problem. -- Meni Rosenfeld (talk) 20:35, 8 November 2009 (UTC)
My advice is to ignore prior probability. Here is an explanation of it anyway: if you put a billion normal, fair coins and one double-headed (heads/heads, i.e. comes up heads 100% of the time) coin into a giant bag and then draw a coin at random and test to see if it is fair, getting the result "heads, heads, heads, heads, heads" (five in a row), then what is the probability you just tested a fair coin? Obviously the normal, reasonable response is to think that the odds are overwhelming that you picked one of the billion normal ones, even though the result you got with the (very probably) normal coin only appears 1/32 of the time with a fair coin. You have this reasonable response because 1/32 is nothing compared with 1 in a billion! In this case, the prior probability makes you quite sure that the coin is fair, even though without it, you would begin to suspect it is unfair. But still, this kind of devilry is irrelevant, and you should do what I do and ignore prior probability. 92.230.64.60 (talk) 21:03, 8 November 2009 (UTC)
- Why should one ignore prior probability? Because William Feller was ignorant concerning it? Michael Hardy (talk) 05:12, 9 November 2009 (UTC)
- See Prosecutor's fallacy for the sort of problems that this reasoning leads to. Taemyr (talk) 21:08, 8 November 2009 (UTC)
- (ec) Ignoring prior probability results in miscarriages of justice (really, courts have made that mistake and thrown innocent people in jail due to giving too much weight to DNA results, etc.). If you ignore the prior probability, you get the wrong answer. It is as simple as that. If you have spent years getting data that support the assumption that the probability is 1% and then you do three more tests that seem to contradict that assumption then you shouldn't reject the assumption. However, if you just guessed the probability based on no evidence then you should reject it as soon as there is evidence that contradicts it. --Tango (talk) 21:14, 8 November 2009 (UTC)
- 92 has given some great reasons why the prior distribution must not be ignored. But I believe his point was that with this kind of low-probability events, "side-channel attacks" become significant. If I pick up a coin, toss it 20 times, and obtain heads every time, I'll begin to suspect that whoever told me that the pile has a billion fair coins lied. Of course, that too can be encoded in the prior belief and updated in a Bayesian way. -- Meni Rosenfeld (talk) 08:18, 10 November 2009 (UTC)
Let me add some detail to this problem. We have a large number of items (say a million) and know that precisely 1% of the objects are of the desired type. We then draw 3 items, and 2 of the 3 turn out to be the desired item. We don't know if the draw was truly random or had some bias. How can we determine that? It certainly seems likely that there is bias in the draw, but how do we quantify the chances? StuRat (talk) 21:20, 8 November 2009 (UTC)
- We use hypothesis testing. We choose a null hypothesis: "The draw was fair." We then choose a desired confidence level, say 95%. We then work out the probability of the event (or a more surprising one - it's important to include that, although it doesn't make much difference in this case) happening assuming that hypothesis: 3*0.01*0.01*0.99 + 0.01*0.01*0.01 = 0.000298. We then observe that 0.000298 < 1 - 0.95 = 0.05. That means we should reject the hypothesis and assume (until we get more evidence) that the draw was biased. In fact, it's worth noting that a single one of the desired type being drawn would be enough to reject the hypothesis in this case - the probability of at least one of them being drawn (assuming the hypothesis) is about 3%. --Tango (talk) 21:34, 8 November 2009 (UTC)
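A quick numerical check of those figures (a sketch, not part of the original thread):
<syntaxhighlight lang="python">
# Sketch: probability of seeing at least 2 of the desired items in 3 draws
# when each draw independently has probability 0.01 (the null hypothesis).
from math import comb

p = 0.01
p_value = sum(comb(3, k) * p**k * (1 - p)**(3 - k) for k in (2, 3))
print(p_value)           # 0.000298 < 0.05, so reject "the draw was fair"

# probability of at least one desired item in 3 fair draws:
print(1 - (1 - p)**3)    # about 0.0297, i.e. roughly 3%
</syntaxhighlight>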
- Thanks. Can we then express that as a probability ? Something like "the probability that this was a fair draw is _____" ? StuRat (talk) 22:01, 8 November 2009 (UTC)
- If we can, it is beyond my limited statistical training. --Tango (talk) 22:02, 8 November 2009 (UTC)
- Following on from Tango's analysis, our article on Type 1 and type 2 error might help, but I'm too tired to think straight at present. Dbfirs 22:24, 8 November 2009 (UTC)
- What about this? Surely that very much does assign probabilities that hypotheses are true, starting with prior probabilities?--Leon (talk) 08:54, 9 November 2009 (UTC)
- Excellent point. I did know that, I wasn't thinking straight! I've moved your comment down to keep things in chronological order. --Tango (talk) 22:41, 9 November 2009 (UTC)
- No, I'm afraid you were thinking completely straight (except perhaps for the part where you mentioned hypothesis testing, which is highly controversial). As I tried to explain, Bayesian inference relies on the availability of a prior distribution for the variables of interest. The choice of prior becomes less significant as more data is collected - and it can be sufficient to just choose a maximum entropy distribution - but in this case we only have 3 data points.
- Since StuRat avoided specifying his prior belief that the draw was fair - and the distribution of the bias if it exists - Bayes is inapplicable. He may be considering something simple like "the draw is fair with probability 0.5, and if it's not, it has a probability of p to come up with a special item, where p is distributed uniformly", in which case my calculations show that given the data, the probability of the draw being fair is 0.00118659. -- Meni Rosenfeld (talk) 08:10, 10 November 2009 (UTC)
- Hypothesis testing is controversial? So much for the stats modules I did at A-level... But I wasn't thinking straight - the answer I should have given was "No, we can't calculate that probability without knowing the prior probability. If you give us that, then we can do it." --Tango (talk) 17:59, 10 November 2009 (UTC)
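A sketch reproducing Meni's figure under the model he states above (prior probability 0.5 that the draw is fair with p = 0.01, otherwise the bias p uniform on [0, 1]; the code is not part of the original thread):
<syntaxhighlight lang="python">
# Sketch: posterior probability that the draw was fair, given 2 hits in 3 draws.
from math import comb

def likelihood(p: float, n: int = 3, k: int = 2) -> float:
    """P(k special items in n draws) when each draw has probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

like_fair = likelihood(0.01)   # P(data | fair draw)
like_biased = 0.25             # integral of 3*p^2*(1-p) over [0, 1] is 1/4

prior_fair = 0.5
posterior_fair = (prior_fair * like_fair) / (
    prior_fair * like_fair + (1 - prior_fair) * like_biased)
print(posterior_fair)          # about 0.00119, matching the figure quoted above
</syntaxhighlight>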
wolfram alpha
This doesn't look right to me - an extra factor of n' on the first term, and when "show steps" is clicked it just gets worse. http://www.wolframalpha.com/input/?i=d%2Fdx[(n%27)*exp(-x^2%2F2) (link is nowikied as wikicode gets confused by the address). Is Wolfram Alpha often mistaken? —Preceding unsigned comment added by 129.67.39.44 (talk) 22:10, 8 November 2009 (UTC)
- It seems like Wolfram|Alpha does not like the prime. This question works: "differentiate (dn/dx)*exp(-x^2/2)" [1]. --Andreas Rejbrand (talk) 22:46, 8 November 2009 (UTC)
- Now I see that it actually works with prime as well, but you need to be explicit with the independent variable: "differentiate (n'(x))*exp(-x^2/2)" works [2]. --Andreas Rejbrand (talk) 22:47, 8 November 2009 (UTC)
- Wolfram Alpha has built-in commands and if you do something that it doesn't understand, it will guess at what you mean. So, it is important that you enter things correctly. StatisticsMan (talk) 14:29, 9 November 2009 (UTC)
Integral (in proof about theta function)
I am reading a proof in a book and it gives
<math>\int_{-\infty}^{\infty} e^{-\pi t x^2 + 2\pi i x y}\,dx = t^{-1/2} e^{-\pi y^2/t}</math>
and says "substitution followed by a shift of the path of integration". Doing the substitution (completing the square and setting u = x - iy/t) gives
<math>e^{-\pi y^2/t}\int_{\Gamma} e^{-\pi t u^2}\,du,</math>
where <math>\Gamma = \{\, x - iy/t : x \in \mathbb{R} \,\}</math>. So, I guess the shift of the path of integration takes this path to the real line. But, I do not get why the two integrals would be equal. StatisticsMan (talk) 22:54, 8 November 2009 (UTC)
- I assume t is positive? Consider the paths from -R to R for large R rather than -∞ to ∞. The two paths form the top and bottom of a rectangle. The difference between the integrals along the top and bottom is equal to the difference between the integrals along the two short ends of the rectangle since the function (call it f) is entire. Out there |f(x)| becomes small and so you can show that those integrals along the sides of the rectangle are small, and go to zero in the limit as R goes to infinity. I hope that's clear since I don't think I explained it very well without a diagram. Rckrone (talk) 01:32, 9 November 2009 (UTC)
Equality of the integrals follows from Cauchy's integral theorem (look it up!) applied to the rectangle whose corners are at −R, R, R − iy/t, and −R − iy/t. The integral around the boundary must be zero. The integrals along the short vertical sides approach 0 as R grows. Hence the integrals along the upper and lower boundaries have the same limit as R grows. Michael Hardy (talk) 05:22, 9 November 2009 (UTC)
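Filling in the estimate for the vertical sides (a sketch, using the Gaussian integrand e^{-πtu^2} as reconstructed above): on the side u = ±R + is, with s running between 0 and −y/t,
<math>
\left|\int e^{-\pi t(\pm R + is)^2}\, i\,ds\right|
\;\le\; \frac{|y|}{t}\,\max_{|s|\le |y|/t} e^{-\pi t (R^2 - s^2)}
\;\le\; \frac{|y|}{t}\, e^{\pi y^2/t}\, e^{-\pi t R^2}
\;\longrightarrow\; 0 \quad (R \to \infty),
</math>
so the contributions of the two short sides vanish, and the integrals along Im u = 0 and Im u = −y/t agree in the limit.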
- This makes sense. Thanks to both of you. I know Cauchy's integral theorem :). I just didn't think to consider a finite, very large R, though I should have because that is a very common trick. StatisticsMan (talk) 14:31, 9 November 2009 (UTC)