Wikipedia:Reference desk/Archives/Mathematics/2009 July 28
Mathematics desk
< July 27 | << Jun | July | Aug >> | July 29 >
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
July 28
Right or left continuity, convergent sequence definition
So, we have an equivalent definition for continuity (which I copied and pasted from Continuous function):
This can also be formulated in terms of sequences and limits: the function f is continuous at the point c if for every sequence (x_n) in X with limit lim x_n = c, we have lim f(x_n) = f(c). Continuous functions transform limits into limits.
The question is, can we do a similar thing for right or left continuity, using only sequences that are greater than or equal (for right) or less than or equal (for left) to c? StatisticsMan (talk) 14:16, 28 July 2009 (UTC)
- Yes, with the same proof. Algebraist 14:18, 28 July 2009 (UTC)
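A sketch of why the same proof goes through one-sidedly (my wording; here X is a subset of the reals, c a point of X, and "right" is shown, "left" being symmetric):

```latex
\textbf{Claim.} $f$ is right-continuous at $c$ iff for every sequence $(x_n)$
in $X$ with $x_n \ge c$ and $x_n \to c$, we have $f(x_n) \to f(c)$.

($\Rightarrow$) Given $\varepsilon > 0$, right-continuity gives $\delta > 0$
with $|f(x) - f(c)| < \varepsilon$ whenever $c \le x < c + \delta$. Since
$x_n \to c$ and $x_n \ge c$, eventually $c \le x_n < c + \delta$, so
$|f(x_n) - f(c)| < \varepsilon$.

($\Leftarrow$) Contrapositive: if $f$ is not right-continuous at $c$, there is
an $\varepsilon > 0$ and, for each $n$, a point $x_n \in [c, c + 1/n)$ with
$|f(x_n) - f(c)| \ge \varepsilon$. Then $x_n \ge c$ and $x_n \to c$, yet
$f(x_n) \not\to f(c)$.
```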
Graph
How do I graph y<4x ? —Preceding unsigned comment added by 71.244.44.98 (talk) 14:51, 28 July 2009 (UTC)
- Draw the line y=4x and then take any point not on the line, eg (1,0). If it satisfies the inequality (which (1,0) does as 0<4(1)) then shade the region containing the point (i.e. shade the whole side of the line where (1,0) lies.)--Shahab (talk) 14:56, 28 July 2009 (UTC)
this is a very vague response because of the word "region", which could be delimited by an axis or something. YOU SHOULD EITHER SHADE EVERYTHING ABOVE Y=4X OR EVERYTHING BELOW IT. I think you can figure out which :) —Preceding unsigned comment added by 82.234.207.120 (talk) 19:46, 28 July 2009 (UTC)
- also any points you use to graph them, since it isn't less than or equals to, aren't part of it, and neither is the line. You could indicate this by drawing a dotted line instead of a solid one, and small empty circles instead of normal points for any points you graph. —Preceding unsigned comment added by 82.234.207.120 (talk) 19:48, 28 July 2009 (UTC)
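Shahab's test-point method above can be written as a tiny check (a toy sketch; the function name is mine):

```python
def strictly_below_boundary(x, y):
    """Return True if (x, y) satisfies the strict inequality y < 4x."""
    return y < 4 * x

# Test point (1, 0), which is not on the line y = 4x:
print(strictly_below_boundary(1, 0))   # True -> shade the side containing (1, 0)

# Points on the boundary line itself are excluded (strict inequality),
# which is why the line is drawn dotted:
print(strictly_below_boundary(1, 4))   # False: (1, 4) lies on y = 4x
```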
Orthogonal Arrays and Codes
Where can I get more information on Orthogonal Arrays and their relation to error correcting codes? I want to find a proof of the following theorem: The minimum distance d(C) of a binary linear code C and the (maximal) strength t(C') of C' (where C' is the dual code of C) are related by d(C)=t(C')+1. If someone can find me the proof online I would be much obliged. Thanks--Shahab (talk) 14:52, 28 July 2009 (UTC)
- Your link to "Orthogonal Arrays" didn't work because you incorrectly capitalized the initial "A" and used the plural instead of the singular. Wikipedia article naming conventions would require the article to be called orthogonal array, in the singular and with a lower-case "a" (the first letter of the article's name is case-insensitive). Orthogonal array redirects to orthogonal array testing. I've redirected your link. Michael Hardy (talk) 15:57, 28 July 2009 (UTC)
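Not a proof, but the stated theorem can be sanity-checked by brute force on a small example (the helper names and the choice of example are mine; C is the [3,1] binary repetition code, whose dual is the even-weight code):

```python
from itertools import combinations, product
from collections import Counter

def min_distance(code):
    """Minimum Hamming distance over all pairs of codewords."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(code, 2))

def has_strength(code, t):
    """True if the codewords, as rows of an array, form an orthogonal
    array of strength t: every t columns contain each binary t-tuple
    equally often."""
    n = len(code[0])
    want = len(code) // 2 ** t
    for cols in combinations(range(n), t):
        counts = Counter(tuple(w[i] for i in cols) for w in code)
        if any(counts[tup] != want for tup in product((0, 1), repeat=t)):
            return False
    return True

def max_strength(code):
    n = len(code[0])
    t = 0
    while t < n and has_strength(code, t + 1):
        t += 1
    return t

C = [(0, 0, 0), (1, 1, 1)]                               # repetition code
C_dual = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]    # even-weight code

print(min_distance(C))       # 3
print(max_strength(C_dual))  # 2, so d(C) = t(C') + 1 holds here
```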
is it statistically possible to tell which is the world's best poker bot? whether it's better than the world's best Poker player? who the latter even is?
[edit]Is it statistically possible to objectively rank the poker bots of the world, or at least find the top of the rank? (I mean on standard hardware, in a standard configuration?)
Three reasons I can think of for why it wouldn't be:
1) It might take a prohibitive number of hands to reach statistical significance. (Is this true for anything else we COULD statistically distinguish, if it didn't take so long or be so expensive because of the necessary sample size? I don't know statistics...)
2) poker bots could include hand-coded behavioral logic in such a way that
- program A is really good at beating program B, as the coder of A has studied and coded for B's behavior (but not vice versa).
- B's programmer hasn't studied A's code, but he has studied C's code and his bot beats C.
- While C's programmer hasn't studied B, he has studied A, and C beats A.
So A beats B beats C beats A. Which is the best? Also, the bots could consistently beat other bots for other reasons than their programmer having studied them. It could just look like a directed graph for whatever reason, where each edge you can determine with as much statistical confidence as you want (A really, we are very confident, beats B), but the overall graph is useless for ranking or coming up with a "best". And all this is even assuming that it is possible to statistically compare two bots!!! My other suspicion, in the absence of this, is:
3) Maybe poker bots going at it against each other look like random number generators going at it against each other. In this case one might still win on a particular run, but you can't statistically distinguish which is better.
This leads me to my question: when a poker player wins a tournament, are the tournament conditions such that the "best" player is statistically chosen with high confidence from among all the players (say 1300 of them!) Or, as I suspect, is it mostly luck, meaning that if you WENT BACK IN TIME and redid the tournament (so that the players don't have the knowledge gained from the tournament) you wouldn't have the same rankings. (maybe if you redid the whole universe 1000 times with different random hands, in the 1000 results one of the same 5 players would win almost all of the time, but which of the 5 players would be split... also maybe in 50 of the 1000 universes a non-"best" [the 5 players one of whom "should" win] ends up winning because of luck?)
Does anyone here know statistics??? Thank you for any help you might have on whether there is any statistical confidence in evaluating poker players and poker bots. 82.234.207.120 (talk) 18:03, 28 July 2009 (UTC)
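The cycle described in (2) above is not hypothetical: nontransitive dice realize exactly such a "beats" graph, where every edge is statistically certain yet no ranking exists. A minimal sketch (the dice are a standard nontransitive triple, not from the question):

```python
from itertools import product

# Each die listed by its distinct face values (each appearing equally often).
A, B, C = (2, 4, 9), (1, 6, 8), (3, 5, 7)

def win_prob(x, y):
    """Probability die x rolls strictly higher than die y (no ties occur here)."""
    wins = sum(a > b for a, b in product(x, y))
    return wins / (len(x) * len(y))

for name, (first, second) in [("A>B", (A, B)), ("B>C", (B, C)), ("C>A", (C, A))]:
    print(name, win_prob(first, second))  # each is 5/9, about 0.556

# Every pairwise edge favors the first die, yet the relation is a cycle,
# so "best" is undefined: exactly the directed-graph situation above.
```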
above question simplified: "Does a poker tournament reach statistical significance"
Does a poker tournament reach statistical significance in telling apart players' abilities? 82.234.207.120 (talk) 18:12, 28 July 2009 (UTC)
- Statistical significance must be specified. For example, one can speak of "5% significance," "1% significance," or any other level of significance. The number of games players play in a tournament may be adequate for distinguishing significance at a higher level but not at a lower level. Wikiant (talk) 19:04, 28 July 2009 (UTC)
obviously I mean something reasonable. Skill may be such a small factor that in a typical tournament you have 52% confidence that #1 player is really better than #2 player. If they went at it heads up for 8 days, you would reach 75% confidence that it was in fact #2, another week or two for 95%/98%/99% -- the levels we reserve the term "statistical significance" for, and which in my example could really be #1 despite the initial impression at 75% confidence that #2 was better. So...is that the case?82.234.207.120 (talk) 19:32, 28 July 2009 (UTC)
- The answer depends on a number of things that I'll attempt to simplify away. Let's suppose you want to test the hypothesis that Player A is better than Player B at a 25% confidence level (i.e., the probability of you erroneously rejecting the hypothesis is 25%). Suppose that your performance metric is the proportion of hands won, that the two players play only against each other (i.e., the number of hands played is the same for the two players, and Player A's % of hands won is one minus Player B's % of hands won), and that every hand results in a winner and loser. If Player A wins 51% of the hands to Player B's 49%, then you'll need 570 hands to achieve the desired significance. If Player A wins 52% to Player B's 48%, then you only need 140 hands. If Player A wins 53% to 47%, then you need 63 hands. Wikiant (talk) 12:41, 29 July 2009 (UTC)
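For anyone curious how such figures arise, here is one standard normal-approximation calculation (a sketch under my own assumptions, namely a one-sided test of a fair-coin null, so it need not reproduce Wikiant's exact numbers, which may rest on slightly different assumptions):

```python
from math import ceil
from statistics import NormalDist

def hands_needed(p, alpha=0.25):
    """Rough sample size for a one-sided test of H0: win rate = 0.5
    against the alternative that Player A's true win rate is p > 0.5,
    at significance level alpha, using the normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha)
    # Under H0 the observed proportion has sd 0.5 / sqrt(n); reject when
    # it exceeds 0.5 by z * sd. Solve for n with margin p - 0.5.
    return ceil((z * 0.5 / (p - 0.5)) ** 2)

for p in (0.51, 0.52, 0.53):
    print(p, hands_needed(p))
```

As expected, the required number of hands shrinks rapidly as the skill gap widens.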
- Thanks, this is the kind of analysis I was looking for! I missed it initially. Let me know if you're still following this as I have further questions but don't want to bother if you're not looking at this. Thanks. 82.234.207.120 (talk) 08:14, 30 July 2009 (UTC)
- Yes, I'm here (though you may want to start a new thread at the bottom of the list). Wikiant (talk) 19:01, 30 July 2009 (UTC)
Found some evidence
some evidence from Greg Raymer:
“At the 2004 World Series, Raymer defeated David Williams to win the $5,000,000 first prize in the $10,000 no limit Texas hold 'em WSOP main event. [...] As defending champion, Raymer went on to finish in 25th place (out of 5,619 entrants) in the 2005 WSOP Main Event, earning him $304,680. No player since Johnny Chan has come so close to repeating as World Series champion, and his deep finish has been described by his peers as one of the most incredible achievements in poker.” (emphasis added)
What?? No "champion" of any year comes even close to being champion the next? Isn't this very strong evidence that a poker tournament like the World Series does not reach ANY statistical significance that the winner is better than the runners-up, let alone the best of them? 82.234.207.120 (talk) 19:41, 28 July 2009 (UTC)
- There is an enormous amount of luck in no-limit tournament poker. The winner will, by the end, usually have gone "all in" several times. Any of those times it is possible that someone with a better hand will call them and they will be out. I would say that you need to be very good to do well in a large poker tournament (smaller ones you can win by fluke) but being very good doesn't guarantee that you will do well. --Tango (talk) 19:49, 28 July 2009 (UTC)
but this clearly isn't true of a chess tournament, wouldn't you agree? Chess tournaments leave us with great statistical confidence that the winner would win if we went back in time and redid the tournament, isn't it so? In fact, it would be preposterous if the world champion of chess could NEVER successfully defend his title from year to year. On the contrary, the world's best chess player defends his title constantly. And often there is a period of about a decade or more with clearly one "best" chess player. So, poker tournaments fail to choose one where chess tournaments succeed. The question is "why"? Is it that they aren't long enough (it would take 1000 days to find the true "best" [and not just luckiest] poker player?) or some other reason? Thanks for your input. 82.234.207.120 (talk) 20:51, 28 July 2009 (UTC)
- There is no random element in chess (in the lingo [if I can remember it correctly], the players have "perfect and complete information"), so you would expect more stability in the results. It's not a matter of the number of hands for tournament poker since the best player may go all-in on the first hand with a 99% chance of winning and lose and no matter how many more hands you play, the best player can't win the tournament. In fact, it's really the opposite of what you say. If you had a really long knockout chess tournament where you have to win, say, 1000 matches in a row, there is a very good chance that the best player will have one bad match and go out. You could be pretty sure of getting a very good winner, but not necessarily the best, just as in poker. --Tango (talk) 00:29, 29 July 2009 (UTC)
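The arithmetic behind the long-knockout point above (using the 99% and 1000-match figures from the post):

```python
# Even if the best player wins any single elimination match with
# probability 0.99, the chance of surviving 1000 matches in a row is tiny:
p_survive = 0.99 ** 1000
print(p_survive)  # about 4.3e-05
```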
- There is certainly a random element in chess, such as opening preparation. Maybe you are playing Black and you are very good at defending king pawn openings and not so good at queen pawn openings. OK, if you are lucky, your opponent opens with the king pawn. And he did that because he happened to see someone else analyzing that opening the previous night, etc, which is quite random. Or maybe you have a chance lapse of attention at just the wrong time in the game and miss a variation. There is a body of theory of chess ratings whose basic idea is that every player has a "strength" which is a normal probability distribution N(r,sd) associated with that player. For a strong amateur player ("Bob") you might have r=2000 ("expert" rating) and sd=50. "Alice" might have r=2050 and sd=75 (higher sd because she plays with a riskier style). You can model a game between Alice and Bob by drawing a number A from Alice's distribution and B from Bob's distribution, which are their performance levels for that game. Then if |A-B| >= 100 or so, the higher number wins. If |A-B| < 100 then the game is a draw.
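The Alice/Bob rating model just described can be simulated directly (a Monte Carlo sketch using the parameters from the post; the draw threshold of 100 is the "or so" figure):

```python
import random
from collections import Counter

random.seed(0)

def play(r1, sd1, r2, sd2, threshold=100):
    """One game: each player's performance is drawn from N(r, sd);
    a gap of `threshold` or more wins, anything closer is a draw."""
    a, b = random.gauss(r1, sd1), random.gauss(r2, sd2)
    if a - b >= threshold:
        return "Alice"
    if b - a >= threshold:
        return "Bob"
    return "draw"

results = Counter(play(2050, 75, 2000, 50) for _ in range(100_000))
print(results)  # Alice wins clearly more often than Bob, but draws dominate
```

Note how a 50-point rating edge translates into a win-rate edge yet still leaves plenty of games where the weaker player wins outright, which is the "random element" being discussed.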
For poker it should be possible to do the same thing. On the big name poker servers (I don't play myself so I'm going by what I've heard), bots are aggressively pursued and banned, but on some other servers they are welcomed. They can apparently beat some fairly good human players but not the very good ones. There are probably some WP articles with more info. 70.90.174.101 (talk) 02:57, 29 July 2009 (UTC)
Book recommendations?
I hope this is the right category.
I've always been a logical person, but usually intuitively logical. Lately I want to understand the philosophy of logic, how to think rationally, and how to apply logic and statistics to my own thinking and my life. Unfortunately, a lot of the web materials out there require a lot of background knowledge and understanding that I just don't have. For instance, the great blogs Less Wrong and Overcoming Bias have some extremely interesting discussion topics, but sometimes they just absolutely lose me.
Right now, I'm eighteen and in high school. My math skills are limited to algebra, very basic graphing (y=mx+b is about the limit of my understanding) and a very small amount of trig. I don't understand quadratics or polynomials or a lot of trig or geometry, and I still haven't taken any course that covers functions, or calculus. I know what set theory and formal logic are, and maybe a little bit about them, but not enough to say, explain it to my thirteen-year-old brother.
With that in mind, what would be a good, clearly-written, layman-oriented primer on some of these subjects, especially ones like the Less Wrong blog covers, but also basic statistics and stuff like that? Having some book to learn this from would make it so much easier than aimlessly clicking links and throwing half-assed, vaguely-written queries at Google only to get answers so academic I barely comprehend the page titles.
Again, many apologies if I didn't put this in the right place, but it didn't feel likely that I'd get good responses in any other category.
--✶♏ݣ 20:20, 28 July 2009 (UTC)
- I think that it is nice that you are so interested in logic, so I am more than happy to help out. First of all, everyone uses logic all the time. For instance, "the cost of these bananas is too high" => "I will not buy them" is one possible logical implication at the shop. However, the philosophy of logic is something more "advanced". One need not know calculus to understand this, but I think that a satisfactory understanding of set theory is necessary (in fact, axiomatic set theory is something very philosophical in nature). In the context of mathematics, the logical implications that a mathematician makes often involve sets and order structures, and thus set theory and order theory are two "fundamentals" of mathematics.
- With regards to your question, I do not think that you can apply logic to your real life, more than what you currently do, unless you learn philosophy. Philosophy is perhaps the best name for "logic in daily life" but of course, there are other ways to apply logic in daily life. Philosophy, however, is something a book cannot completely teach you. If you continue thinking about logic, and applying it in real life, that is the best preparation apart from reading a book on the subject. The article philosophy may be of some use. Hope this helps. --PST 01:15, 29 July 2009 (UTC)
- On the other hand, the question regarding why people make particular logical implications, lies in the realm of psychology. --PST 01:17, 29 July 2009 (UTC)
- I've read a bit about both subjects, but information on the Internet (especially about philosophy) tends to be either highly academic or highly polarized and messy. If you look up "epistemology" for instance you're certain to get a lot of postmodern crap about how there is no truth or logic. What I'm really looking for is a book, in particular, or maybe specific very good web resources. I already know "the sorts of things" I should read about but the recommendations I want are particular texts about them. --✶♏ݣ 02:03, 29 July 2009 (UTC)
If you don't mind a rather old book, start with How to Lie with Statistics by Darrell Huff. Of course it's really about how to tell when someone is lying to you with statistics. 70.90.174.101 (talk) 03:49, 29 July 2009 (UTC)
- That sounds good! Anyone have anything else? --✶♏ݣ 15:29, 29 July 2009 (UTC)
- Regarding set theory and logic, there is "Proofs and Fundamentals" by Bloch which is an excellent high-school level introduction to set theory, logic, group theory and writing proofs; the kind of fundamental "abstract mathematics" that is eschewed for statistics and calculus at that level. There's also "Classic Set Theory" by Goldrei that is geared towards self-study and doesn't require much background, just effort. Everything you learn in "Proofs and Fundamentals" (functions, sets and logic) will be used. There are some problems requiring analysis, but they're scarce. It's beautifully written, however. You could also do well to pick up a copy of Serge Lang's "Basic Mathematics" which covers everything you should have learnt prior to starting calculus and other avenues of math. It's worth working through that book once, even if you don't like his style. 94.171.225.236 (talk) 17:38, 29 July 2009 (UTC)
If one of the things you're interested in learning is geometry, C. Stanley Ogilvy's book Excursions in Geometry is absolutely beautifully written and doesn't require you to know much at all before you read it. It's surprising how little you need to know to read the book and how much you learn from it fairly quickly. A very fun book. (It is intensely hated by the sort of students whose reason for taking a math course is that it's required of them. I think maybe they want someone to give them algorithms for solving all of their homework problems and they want their work in the course to consist only of applying those algorithms.) Michael Hardy (talk) 21:45, 29 July 2009 (UTC)
- Books about mathematical logic and set theory (a few were recommended above) IMO will be useless for your purposes at this point. Later on you might want to study some logic and model theory, e.g. Tarski's definition of truth. But, these formal systems don't have that much applicability to everyday life, and the subject is overrated by dilettantes. (I guess I'm one of the latter).
- Statistics might help you figure out the rational response to something, but if you want to understand the crazy things that people around you do all the time, rationality isn't really involved. You might instead read about behavioral economics and related topics like prospect theory. Bruce Schneier's book Beyond Fear might be in the vein that you're interested in. I have a copy but haven't read it yet.
- I took a look at the Less Wrong blog and it is pretty interesting, and Darrell Huff's book is maybe too elementary by comparison. It is at high school level or so, which is why I thought of it.
- Although it's not even slightly mathematical, you might like Zen and the Art of Motorcycle Maintenance, a popular psychobabble/pop philosophy book from the 1980's or so. It discusses (among other things) various kinds of cognitive errors people make.
- For a more traditional probability textbook, someone here at RD recommended this a while back, but you'll need to know a little more math than you do. Assuming you're about to enter college, a semester or two of freshman calculus should be enough. 70.90.174.101 (talk) 10:47, 30 July 2009 (UTC)
- You might like to read up on formal logic too, which can give you an excellent starting point for understanding how people use (and misuse) logic all the time. I recommend a book called Logic: An Introduction by my fantastic Logic lecturer at uni, Greg Restall. It's on Amazon here, if you Google for it you can also get one of those Google Books previews to see what it's about. It starts assuming no background at all and will give you a great overview of the subject, tying the philosophical aspects of logic in with the formal. Good luck! Maelin (Talk | Contribs) 04:26, 2 August 2009 (UTC)
An alternate solution by radicals to the Quartic Equation
Roughly 2 years ago I inquired about whether it was well known that one could solve certain Quartic Equations by representing it as a composition of two Quadratic functions, the consensus I got from that was no, and that it wasn't particularly useful to have a non-general solution like that. Well, just in the last few days I've figured out how to solve the general case by using another 'relatively' simple technique that transforms any Quartic into one that can be represented as a composition of two Quadratics. I'm once again curious if this would be a significant result. A math-wiki (talk) 21:24, 28 July 2009 (UTC)
- Try it on (x-1)(x^3-2). Does it give the cube root of 2 as one possible solution? Dmcq (talk) 22:50, 28 July 2009 (UTC)
- By "composition" do you mean "product"? If so, then over the complex numbers that is trivial, over anything else I don't believe you. Dmcq gives a good example of a quartic you almost certainly can't solve - the solutions to quadratics are at most square roots, you'll never get a cube root (assuming we're working over the rationals, which is the usual assumption). As for your first inquiry, you seem to have got the wrong answer - it is very well known and very useful, factorising a polynomial to solve it is usually the first thing people try (unless they have a computer handy), that applies to quartics as much as any other degree. --Tango (talk) 00:07, 29 July 2009 (UTC)
I'm not talking about factoring here, I'm talking about Function composition, in this case both being Quadratics, yielding a Quartic naturally. I'm quite certain it works. A couple years back I figured out how to solve an equation of the form f(g(x))=0 if I know how to solve f, and how to solve an equation of the form g(x)+C for arbitrary constant C. But for the Quartic, the middle terms had to have a certain relation; by doing a change of variables of the form , I can transform any Quartic into the correct form. This involves solving a Cubic, the Resolvent Cubic in fact. A math-wiki (talk) 00:34, 29 July 2009 (UTC)
- Sounds like you might have found your own way to do it, congratulations. However unless you had it published in a magazine it would count as Original research and so couldn't be put in wikipedia. It would also have to satisfy notability guidelines. Thanks for pointing out the article; someone stuck in some stupid huge equations as solutions recently - I'll go off and delete them. Dmcq (talk) 10:00, 29 July 2009 (UTC)
- Sorry a bit snappy there. I've just deleted a couple of bits of peoples own work of questionable quality they stuck into articles. Solving the quartic is a done deal and the only straightforward real interest might be an article in a teacher magazine or suchlike if it is a pretty solution. Sometimes a person like David Hilbert can start up something like Invariant theory by looking at a simple problem like transforming polynomials, but that takes quite a bit of inspiration. Dmcq (talk) 10:24, 29 July 2009 (UTC)
- I remember when I solved the quartic thing, one saturday, lying in a comfortable sofa and staring at the floor. The idea was: after killing the 3 degree term by a translation, you reduce yourself to a case where the sum of the roots is 0. In this situation, instead of searching the 4 roots, look for the sums of two roots as new unknowns. You will get a sixth degree equation because there are 6 possible sums of two roots of the quartic: but you also know that if s is a sum of two roots, then -s is the sum of the other two. This means that the corresponding sixth degree equation is indeed a bi-cubic, that is of third degree in s2, so the problem has been reduced to the cubic equation. Then it is just a small exercise of algebra to write down the bi-cubic. I was 16, and it was also clear to me that more or less every other boy interested in maths already had had/was having/was going to have the same idea, here and there... --pma (talk) 16:34, 29 July 2009 (UTC)
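The key observation above, that for a depressed quartic the six pairwise sums of roots come in plus/minus pairs, is easy to check numerically (my example coefficients are arbitrary):

```python
import numpy as np
from itertools import combinations

# Roots of a depressed quartic (no x^3 term, so the roots sum to 0):
roots = np.roots([1, 0, -3, 1, 2])   # x^4 - 3x^2 + x + 2
sums = [roots[i] + roots[j] for i, j in combinations(range(4), 2)]

for s in sums:
    # The complementary pair of roots sums to -s, so -s is also in the list.
    assert any(abs(s + t) < 1e-9 for t in sums)
print("all six pairwise sums occur in +/- pairs")
```

This is exactly why the sixth-degree resolvent is a bi-cubic: its roots come in pairs s, -s, so it is a cubic in s^2.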
I think I remember a boring stretch of algebra class involving showing how to decompose the quartic into the composition of two quadratics. I guess if you can write your version up in a clever way it might be interesting (Wikipedia is not the right place for it though, as Dmcq says). But generally, Galois theory gives a pretty complete treatment of what polynomials can be solved by radicals. 67.117.147.249 (talk) 16:36, 29 July 2009 (UTC)
To Dmcq, I wasn't necessarily suggesting it be put in the article; I was curious whether it is or is not known, since I haven't been able to find any info on using compositions of functions as a way of solving polynomials. To pma, I'm intrigued; this is different from Ferrari's solution given in the article, if I'm reading that right. I think my solution might qualify as beautiful; it takes a bit fewer steps than Ferrari's solution. Here, I'll give it a run-through.
The first step, after dividing out the leading coefficient, is to check the coefficient condition. To see a derivation of this, go to my userpage, under the title Solving Polynomials with Composition of Functions.
If the condition checks out, then the solution is pretty easy: just build the composition, and set up and solve the system for C and D. B uniquely defines one of the other 4 coefficients, and E takes care of the last one.
Assuming though that the condition is not met, we will perform a Linear transformation by substituting for x. (Note that we've already divided out the leading coefficient.)
Now we need to find such that the coefficient condition is met.
Solving this Cubic can be done as shown in the Cubic Equation article; sometimes it just might happen that the solution to this Cubic is easier than the general case, though.
With three values for , I would generally choose the real one. We now have a Quartic that satisfies the coefficient condition. For simplicity I'm going to substitute for coefficients now.
To solve this Quartic, we must equate its coefficients to those of the expanded Composition; for a derivation, again, see my userpage.
Let and let then for f(g(x))=0
One interesting caveat here is C and D. They form a linear system in b and n, after you get m's value from B. Furthermore, the system has infinitely many solutions, so we may pick an n and derive a b, or vice versa. Once those two are defined, E defines c. Attempting to solve the linear system in the general case was, in fact, precisely how I derived the coefficient condition.
Now we need to solve f(y)=0. Why not f(x)? Because the Quartic we're solving is in y; we will use the solution for this Quartic to write the solution for the original one in x.
Now we solve
One note first, in order to distinguish the independent plus-minus signs, I'm going to put different subscripts below them.
And now to solve for x,
A math-wiki (talk) 20:38, 29 July 2009 (UTC)
- It looks like your result is the same as that of Ferrari, even if your paths differ. You reduce the fourth degree equation to first solving a third degree equation and then two second degree equations. So your solution isn't really 'alternate'. Isn't that correct? Bo Jacoby (talk) 05:35, 30 July 2009 (UTC).
Depends on what you mean by alternate; if you mean the final formula, no, I would imagine it's the same. But that formula doesn't see much use; it's generally easier to work iteratively. So what I have is an alternate avenue to getting a solution by radicals. A math-wiki (talk) 06:42, 30 July 2009 (UTC)
- So you have got an alternative avenue to a virtually useless formula. At the time of Lodovico Ferrari it was believed that polynomial equations in general could be solved by radicals, but the works of Niels Henrik Abel and Evariste Galois showed that this is not the case, and so the efforts of reducing polynomial equations of low degree to radicals turned out to be a dead end. See Quartic equation. Bo Jacoby (talk) 07:30, 30 July 2009 (UTC).
- And don't forget old Paolo Ruffini [1]. He did everything before Abel and Galois, up to a minor gap, and what is more important, he was the first mathematician to conceive such a bold idea as the impossibility of solving the quintic by radicals. --pma (talk) 11:04, 30 July 2009 (UTC)
- Thank you. Please include your link into the article on Paolo Ruffini. Bo Jacoby (talk) 16:55, 30 July 2009 (UTC).
I like your idea about looking at those fourth degree monic polynomials which are compositions of two second degree monic polynomials, h(x)=f(g(x)). However, notice that if h is such a composition, then any translation of h is also a composition: h(x-λ)=f(g(x-λ)); and of course, if h(x) is not a composition, no translate h(x-λ) will be, for any λ. Therefore, we can't reach the case of a composition just by a substitution y-λ=x: no hope of finding such a λ, if h itself was not already a composition. The equation in λ, should we write it carefully, is not a cubic, but has certainly to be independent of λ, and will reduce to the same condition 4BC-B^3=8D you found for h to be a composition. There is another way to see it, in terms of the roots of h(x). As you said, the equation f(g(x))=0 is equivalent to: g(x)=ξ1 or g(x)=ξ2, where ξ1 and ξ2 are the roots of f(x). As a consequence, the roots of h are not a generic 4-tuple of complex numbers; if you think a little, they are such that the sum of two of them (precisely, the ones coming from g(x)-ξ1=0) is equal to the sum of the other two (the ones coming from g(x)-ξ2=0): again a translation-invariant condition, as it has to be. To compare the condition on the roots with the condition on the coefficients you found, just write the latter in terms of the roots: you will find:
- (x1 + x2 - x3 - x4)(x1 + x3 - x2 - x4)(x1 + x4 - x2 - x3) = 4BC - B^3 - 8D,
which vanishes if and only if there are two roots whose sum is equal to the sum of the other two.
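For readers who want to check this numerically, here is a small sanity check (Python; the four roots below are arbitrary illustrative values, not taken from the thread). It builds a monic quartic x^4 + Bx^3 + Cx^2 + Dx + E from chosen roots via Viète's formulas and compares the quantity 4BC - B^3 - 8D with the product of the three pair-sum differences of the roots:

```python
# Sanity check of the root form of the composition condition.
# Sample roots are arbitrary illustrative values (not from the thread).
r1, r2, r3, r4 = 1.5, -2.0, 0.25, 3.0

# Viete's formulas for the monic quartic x^4 + B x^3 + C x^2 + D x + E
B = -(r1 + r2 + r3 + r4)
C = r1*r2 + r1*r3 + r1*r4 + r2*r3 + r2*r4 + r3*r4
D = -(r1*r2*r3 + r1*r2*r4 + r1*r3*r4 + r2*r3*r4)

# Coefficient side of the condition, written as a single quantity
coeff_side = 4*B*C - B**3 - 8*D

# Product of the three pair-sum differences of the roots
root_side = ((r1 + r2 - r3 - r4)
             * (r1 + r3 - r2 - r4)
             * (r1 + r4 - r2 - r3))

print(abs(coeff_side - root_side) < 1e-9)  # True: the two sides agree
```

Either side vanishes exactly when two of the roots sum to the same value as the other two.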
But remember, a linear translation changes the solution behavior; for example, in Ferrari's method he eliminates the cubic term. This causes the solutions of the new quartic to have a sum of zero, see Viète's formulas. The change is undone at the very end by the change back from y to x. I'm doing the same thing here: note that it's not x but y-λ that would be the new input, and it's y I solve for, not x. My choice for λ doesn't eliminate any of the terms but instead does exactly what you explained about the roots, so that the quartic in y can be written as a composition, not the one for x. Take a close look at these three lines from above.
And to Bo Jacoby: yes, the formula is basically useless, but if someone wants to solve a 'poorly behaved' quartic by hand, and they want a solution by radicals, they often follow something like Ferrari's method through with numbers in place of those unwieldy expressions, correct? A math-wiki (talk) 20:23, 30 July 2009 (UTC)
- The findings of Ruffini (and Abel and Galois) tell us that radicals are outdated. Express an algebraic number simply by an approximate value and an exact equation. The positive square root of two is 1.41421; (x^2 = 2). Another example: 1.46172; (5x^4 = 4x^3 + 3x^2 + 2x + 1). Bo Jacoby (talk) 10:27, 31 July 2009 (UTC)
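Bo Jacoby's "approximate value plus exact equation" representation is easy to reproduce. As an illustration, a short bisection search (Python, standard library only) recovers the stated approximation of the real root of 5x^4 = 4x^3 + 3x^2 + 2x + 1 that lies between 1 and 2:

```python
def p(x):
    # 5x^4 - 4x^3 - 3x^2 - 2x - 1, i.e. the equation moved to one side
    return 5*x**4 - 4*x**3 - 3*x**2 - 2*x - 1

# Bisection on [1, 2]: p(1) = -5 < 0 and p(2) = 31 > 0,
# so a sign change (hence a root) lies in this interval.
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid

print(round(lo, 5))  # 1.46172
```

Sixty halvings of the unit interval pin the root down to far more than the five printed decimals.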
To pma, I believe the following will constitute a counterexample to your claim. Assuming I haven't made any mistakes.
Consider
Clearly, the coefficient condition isn't met: it implies -64=-192, which obviously isn't true.
So now consider, and .
Clearly which implies, -64=-64, a true statement.
Now suppose in h(x) we make the substitution y-2=x, i.e.
Then h(x)=f(g(y)). A math-wiki (talk) 21:32, 30 July 2009 (UTC)
To Bo Jacoby: yes, you're correct about the solution route being a 3rd-degree then two 2nd-degree polynomials; isn't that necessarily true for a solution by radicals to the quartic, due to Galois theory? Some noteworthy differences: I don't bother eliminating the cubic term, unlike Ferrari; and rather than building two complete squares I build two quadratics, in composition form instead of on opposite sides of an equality. A math-wiki (talk) 21:36, 30 July 2009 (UTC)
- I'm not an expert on Galois theory. Bo Jacoby (talk) 10:27, 31 July 2009 (UTC)
- I'm sorry, your argument is wrong. The mistake is that, after writing correctly the equation for λ, you did not realize that in fact it is not an equation for λ. If you are not convinced, try expanding
- 4(B - 4λ)(C - 3Bλ + 6λ^2) - (B - 4λ)^3 - 8(D - 2Cλ + 3Bλ^2 - 4λ^3),
- that is, the quantity 4BC - B^3 - 8D computed for the coefficients of the translated polynomial h(x - λ). You will surely find your initial 4BC - B^3 - 8D. No dependence on λ. If it is 0, it is 0 for all λ; if it is not 0, it is not 0 for all λ. Indeed, if a polynomial can't be expressed as a composition of two second degree polynomials, no translation of it will be a composition; if you consider this, you do not need computations. --pma (talk) 21:46, 30 July 2009 (UTC)
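pma's point that this quantity is unchanged by any translation can also be confirmed numerically. The sketch below (Python; the sample coefficients are arbitrary, and the shifted-coefficient formulas come from expanding h(x - λ) by hand for h(x) = x^4 + Bx^3 + Cx^2 + Dx + E) checks a few values of λ:

```python
def condition(B, C, D):
    # The quantity 4BC - B^3 - 8D for x^4 + B x^3 + C x^2 + D x + E
    return 4*B*C - B**3 - 8*D

def shifted(B, C, D, lam):
    # Coefficients of h(x - lam), obtained by expanding the binomials
    Bs = B - 4*lam
    Cs = C - 3*B*lam + 6*lam**2
    Ds = D - 2*C*lam + 3*B*lam**2 - 4*lam**3
    return Bs, Cs, Ds

B, C, D = 2.0, -5.0, 3.0           # arbitrary sample coefficients
base = condition(B, C, D)          # -72.0 for this sample
for lam in (-2.0, 0.5, 7.0):
    assert abs(condition(*shifted(B, C, D, lam)) - base) < 1e-6
print("no dependence on the shift, as claimed")
```

All the λ-terms cancel identically, so the loop passes for any shift, not just the three tested.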
Well I'll be damned, makes me wonder if I tried that when I first came up with the composition idea. Thanks for everything though. A math-wiki (talk) 23:46, 30 July 2009 (UTC)
- Don't worry. It seems you have good ideas, and that is the important thing. As to making mistakes, I can tell you that sometimes a mistake turns out to be a great way to achieve a better understanding of a problem. --pma (talk) 17:55, 31 July 2009 (UTC)
Monty Hall problem
[edit]Since we know that Monty will disclose a goat, why is the contestant's original chance of picking the car not one half?
Also -
What if there were two contestants? If the two thirds chance of winning by swapping was correct then since one of them winning is a certainty, we have two thirds plus two thirds is greater than one, which is not correct.
Bob Coote
Wentworth Falls NSW Australia —Preceding unsigned comment added by 203.164.18.191 (talk) 22:00, 28 July 2009 (UTC)
- Have you read the explanation at Monty Hall problem? It's because if you stick to the strategy of not changing, then your odds can never change from being 1/3, hence the odds from the strategy of always changing must be 2/3. Confusing Manifestation(Say hi!) 22:56, 28 July 2009 (UTC)
- First: Because the contestant's original chance of picking the car does not depend on whether Monty will disclose a goat or not. Second: Both contestants cannot swap, so the swapping one wins with probability two thirds and the other one wins with probability one third. Bo Jacoby (talk) 23:04, 28 July 2009 (UTC).
- If they picked different doors originally they could both swap, but it makes no difference. --Tango (talk) 00:19, 29 July 2009 (UTC)
- The section Monty Hall problem#Increasing the number of doors is particularly worth a read. Recasting the problem with more doors can help change your perspective, and might overcome the situation where you know what the answer should be intellectually, but it still doesn't sit right intuitively. -- 76.201.158.47 (talk) 03:56, 29 July 2009 (UTC)
- You can't just add the probabilities like that since the events are not independent. If the first player wins then you know whether the second player has won or lost based on whether they chose the same door, so the probability is either 0 or 1. If they choose the same door the probability of at least one of them winning is 2/3+1/3*0=2/3, if they choose different doors the probability of at least one of them winning is 2/3+1/3*1=1. That is exactly what you would expect. --Tango (talk) 00:19, 29 July 2009 (UTC)
- Tango, Adding probabilities does not require independence, but mutually exclusive events. (It is the multiplication of probabilities that requires independence). Bo Jacoby (talk) 00:32, 29 July 2009 (UTC).
- Here's another way of thinking of this. To start off you pick 1 of 3 doors, so the chance that your door has the car is 1/3, and the other pair of doors has a 2/3 chance of containing the car collectively. Then the host, who knows where the car is, opens one of that pair that doesn't have the car behind it (possibly the only goat door among that pair). It is very important that the host knows which door has the car and therefore which ones don't. The door you originally chose still has only a 1/3 chance of having the car behind it, because the event of you picking that door gave it that chance. So the pair still has the 2/3 chance as well, but one of the two doors that make up that pair is now open and it's one with a goat, so the remaining door has the 2/3 chance of the car. A math-wiki (talk) 02:03, 29 July 2009 (UTC)
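All of the arguments above can be backed by a quick simulation. The sketch below (Python, standard library only; door indices and the seed are arbitrary choices) plays the standard game many times under each strategy:

```python
import random

def play(switch, trials=100_000, seed=0):
    """Simulate the standard Monty Hall game; return the win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Monty, knowing where the car is, opens a goat door that is
        # neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stick:  {play(switch=False):.3f}")   # close to 1/3
print(f"switch: {play(switch=True):.3f}")    # close to 2/3
```

(When the contestant has picked the car, Monty has two goat doors to choose from; the simulation deterministically opens the lower-numbered one, which does not affect the win rates of either strategy.)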