Wikipedia:Reference desk/Archives/Mathematics/2009 January 3
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 3
Fractal line
Hi. Does a straight line count as a fractal? I know it would be a really boring example, but it looks the same no matter how much I zoom in. Duomillia (talk) 15:00, 3 January 2009 (UTC)
- A fractal has a fractional dimension. A straight line has dimension one. So considering one to be a (special case of) fraction, then yes. Bo Jacoby (talk) 15:40, 3 January 2009 (UTC).
- You know that you are likely to start another hot debate... I personally agree that straight lines, and even points, are fractals. --PMajer (talk) 16:48, 3 January 2009 (UTC)
- The Sierpinski pyramid (which is a tetrahedron analog of the Menger sponge, or a 3-dimensional analog of the Sierpinski triangle) has Hausdorff dimension equal to 2 — but it certainly IS a fractal.... --CiaPan (talk) 10:34, 5 January 2009 (UTC)
- As far as I can see, the pyramid in the picture is not a tetrahedron but a square pyramid, and it has Hausdorff dimension log 5/log 2 ≈ 2.32. — Emil J. 13:45, 6 January 2009 (UTC)
- Then ignore the picture and go by my words. --CiaPan (talk) 14:00, 6 January 2009 (UTC)
- Here is a Sierpinski tetrahedron. Gandalf61 (talk) 14:05, 6 January 2009 (UTC)
- OK, so here we have a tetrahedron, and the thing has Hausdorff dimension 2. Now, more to the point: the dimensional definition of a fractal does not require the Hausdorff dimension to be fractional, but to be strictly greater than the topological dimension. The Sierpiński tetrahedron has topological dimension 1,[1] hence it is a fractal according to the formal definition, despite having an integral Hausdorff dimension. A straight line, whose Hausdorff and topological dimensions are both 1, is not. — Emil J. 14:37, 6 January 2009 (UTC)
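For the sets mentioned above, the Hausdorff dimension coincides with the similarity dimension log N / log(1/r) of a figure made of N copies of itself scaled by the ratio r. A minimal Python sketch of that calculation (the copy counts and ratios are the standard ones for these constructions):

```python
from math import log

def similarity_dimension(copies: int, scale: float) -> float:
    """Dimension d solving copies * scale**d = 1, i.e. d = log(copies) / log(1/scale)."""
    return log(copies) / log(1 / scale)

examples = {
    "straight line (2 halves at ratio 1/2)": (2, 1 / 2),              # exactly 1
    "Sierpinski triangle (3 copies at 1/2)": (3, 1 / 2),              # ~1.585
    "Sierpinski tetrahedron (4 copies at 1/2)": (4, 1 / 2),           # exactly 2
    "square-based Sierpinski pyramid (5 copies at 1/2)": (5, 1 / 2),  # ~2.322
    "Menger sponge (20 copies at 1/3)": (20, 1 / 3),                  # ~2.727
    "Koch curve (4 copies at 1/3)": (4, 1 / 3),                       # ~1.262
}

for name, (n, r) in examples.items():
    print(f"{name}: {similarity_dimension(n, r):.4f}")
```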
- You're right. However Bo Jacoby wrote:
A fractal has a fractional dimension.
and I cited the Sierpinski tetrahedron (sorry for confusing it with a pyramid) as a counterexample to that. --CiaPan (talk) 15:50, 6 January 2009 (UTC)
- A fraction is a rational number. Integers are rational. So 2 is a fraction. So the Sierpinski tetrahedron is a fractal, and the straight line is a fractal too. Confusion is common as to whether trivial special cases should or should not be included in general definitions. Is zero a number? Is the empty set a set? Is a circle an ellipse? Mathematicians answer yes and nonmathematicians answer no. Bo Jacoby (talk) 12:03, 7 January 2009 (UTC).
- Oh, really...? So by this reasoning the Koch snowflake curve is not a fractal, because its Hausdorff dimension is irrational (log 4/log 3), so it is not a fraction.
OR possibly it is a fraction, namely the quotient of log 4 and log 3? But in that case any number is a fraction and consequently any figure is a fractal...
CiaPan (talk) 15:33, 7 January 2009 (UTC)
- It all depends on how you define a fractal. If we use the Falconer definition given in the article then a straight line is not a fractal because it is not "too irregular to be easily described in traditional Euclidean geometric language". It also fails the "fractional dimension" definition - but then so do space-filling curves such as the Hilbert curve and the Heighway dragon curve. As these curves are generally considered to be fractals, I would say that fractional (i.e. non-integer) dimension is a "sufficient but not necessary" condition for a fractal. Gandalf61 (talk) 18:27, 3 January 2009 (UTC)
So, let's say, loosely defined a line is a fractal, but according to the strict/formal definition, it is not. I wonder, is there relevance in the distinction between "self-similar" and "self-identical"? A line is at all magnifications self-identical, but not self-similar (if we define "similar" to mean close to but not exactly identical), whereas something like the edge of the Mandelbrot set is similar to itself at different scales, but never perfectly identical. Duomillia (talk) 21:42, 3 January 2009 (UTC)
- Similarity has a strict mathematical definition: two objects are similar if they are identical apart from scale. It is not the same as "similar" in the colloquial or linguistic sense, as is the case with many scientific or mathematical terms. Strictly, the small differences in the Mandelbrot set prevent it from being correctly classed as self-similar; strictly, the Mandelbrot set is quasi-self-similar. Under this definition a line is debatably self-similar, depending on whether you count different segments of the line as scale replicas of each other. —Preceding unsigned comment added by 84.92.32.38 (talk) 21:17, 4 January 2009 (UTC)
- Well, that is how the term similar is used in geometry. However, the "similar" in "self-similar" has a broader definition, and can generally include any homeomorphism. For example, de Rham curves are self-similar under affine maps, but not generally self-similar under any set of uniform scalings and rotations. An object that is self-similar under a set of geometric similarities is more precisely described as scale invariant. Gandalf61 (talk) 09:58, 5 January 2009 (UTC)
Straight lines have been around a long time. The word "Fractal" was derived by Benoît B. Mandelbrot in 1975 from the Latin fractus meaning "broken" or "fractured". A straight line isn't broken, so it's not a fractal. You can zoom in forever on a point and that's not a fractal either. Cuddlyable3 (talk) 12:49, 6 January 2009 (UTC)
- Be very careful here, as a word's etymology and its meaning are in no way required to be consistent. —Preceding unsigned comment added by 84.92.32.38 (talk) 13:34, 6 January 2009 (UTC)
- My very careful quote mining at [2] shows what Benoît Mandelbrot says is and is not a fractal:
- A fractal is a mathematical set or concrete object that is irregular or fragmented at all scales...
- ...the infinite sea of complexity includes two islands: one of Euclidean simplicity, and also a second of relative simplicity in which roughness is present, but is the same at all scales.
- Smooth shapes are very rare in the wild but extremely important in the ivory tower and the factory, and besides were my love when I was a young man. Cauliflowers exemplify a second area of great simplicity, that of shapes which appear more or less the same as you look at them up close or from far away, as you zoom in and zoom out. Before my work, those shapes had no use, hence no word was needed to denote them. My work created such a need and I coined "fractals."
- The last time I checked, straight lines are smooth, of Euclidean simplicity and not irregular at any scale. Cuddlyable3 (talk) 16:49, 6 January 2009 (UTC)
What is the expected value of the lowest of four numbers?
Hi, I'm thinking about how I might write an AI to play a certain board game. After thinking for a while, I realized that I would need a formula that takes a probability density function and gives me the expected value of the lowest of n random numbers. So far what I've come up with is that if the number is taken from a uniform distribution from 0 to 1 then the expected value of the lowest of n numbers should be (1/2)^(n-1), but that doesn't seem right at all. Unfortunately, I found this topic somewhat difficult to google for. Thanks for any help. --Tigerthink (talk) 15:43, 3 January 2009 (UTC)
- It's 1/(n+1) (I assume you mean independent and uniformly distributed on [0,1]); note that it also holds for n=0, for the infimum is then certainly 1. Just integrate. Do you need the details? --PMajer (talk) 16:21, 3 January 2009 (UTC)
- Thanks for your help; I have a pretty good idea of how to generalize that result to any probability density function, which was my original question. (You just split it into n+1 portions with equal integrals, right?) I would be interested in reading a proof of your answer if you happen to have one handy. And looking at infimum, it seems you should have used supremum. --Tigerthink (talk) 17:06, 3 January 2009 (UTC)
- No, I mean infimum (the infimum of the empty subset, which is the maximum of the whole space), so you get 1 with n=0 numbers in [0,1]. In general, just remember that the expectation of a positive real random variable Y can be written
E[Y] = ∫_0^∞ P(Y > t) dt.
- If you then have n independent positive random variables X_1, ..., X_n, even not identically distributed, and Y := min(X_1, ..., X_n), you have of course Y > t if and only if X_i > t for all i, therefore by independence P(Y > t) = P(X_1 > t)···P(X_n > t). So with n independent and uniformly distributed random variables you get E[Y] = ∫_0^1 (1 − t)^n dt = 1/(n+1). Notice that the answer will depend on the distribution. Ouch, she is going to kill me :( hope it's ok --PMajer (talk) 18:56, 3 January 2009 (UTC)
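The 1/(n+1) value is easy to sanity-check by simulation; here is a minimal Monte Carlo sketch in Python (the sample count of 200,000 is an arbitrary choice):

```python
import random

def expected_min_uniform(n: int, samples: int = 200_000) -> float:
    """Estimate E[min of n i.i.d. Uniform(0,1) numbers] by straightforward simulation."""
    total = 0.0
    for _ in range(samples):
        total += min(random.random() for _ in range(n))
    return total / samples

for n in (1, 2, 3, 4):
    # The estimates should come out close to 1/2, 1/3, 1/4, 1/5 respectively.
    print(n, round(expected_min_uniform(n), 4), 1 / (n + 1))
```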
- I still don't understand the infimum stuff. How could the empty subset be a maximum of anything? And if there are 0 numbers, isn't the expectation undefined? I think I understand most of the other stuff though. That integral looks like it's impossible to evaluate exactly. How did you get the 1/(n + 1) answer? Will I have to estimate it in most cases? Thanks for your help by the way. --Tigerthink (talk) 21:39, 3 January 2009 (UTC)
- The infimum of the empty set is the greatest lower bound of the empty set. Since everything is a lower bound of the empty set (it just has to be less than or equal to every element of the empty set, which is vacuously true), the infimum is just the greatest element of our space, which in this case is 1. Algebraist 23:25, 3 January 2009 (UTC)
- Thank you, I think I understand now. --Tigerthink (talk) 04:37, 5 January 2009 (UTC)
- The infimum of the empty set (which, I guess, is a topic that has already appeared here): just apply the definition of greatest lower bound for a subset of a given ordered set X: any element of X is a lower bound for the empty set, just because it is in fact less than or equal to every element of the empty set; so the greatest lower bound of the empty set in [0,1] is just the max of [0,1]. After all, this is consistent with properties like "A ⊆ B implies inf B ≤ inf A". The only point is that inf ∅, for the special subset ∅, does depend on the ordered set X, as it does for any other subset. Is it that strange?
- Then, if we have 0 random numbers in [0,1], some objects, like their mean, are undefined, but their infimum is still a well-defined random variable, the constant 1, so the expectation of the infimum is 1. Anyway, that is not a major point.
- To evaluate the integral: you are right, let's say it depends on the so-called distribution functions (DF), that is, the functions (of the variable t) P(X > t): for n i.i.d. random variables with uniform distribution on [0,1], we have P(X_i > t) = 1 − t for all 0 ≤ t ≤ 1, and 0 for t > 1, so P(Y > t) = (1 − t)^n and we get E[Y] = ∫_0^1 (1 − t)^n dt = 1/(n+1) (I understand what you mean with n+1 portions etc., yes, you can do it that way also).
- Conclusion: the general rule is that the distribution function of the minimum of n independent random variables is the product of their distribution functions (with "distribution function" meaning, as above, the tail probability P(X > t)); a small numerical illustration of this rule follows this reply. This holds without assuming positivity, of course. Once you know the distribution function, you can compute the expectation, moments, absolute moments, etc. If we do not assume that Y is nonnegative, we only have to change the integral formula for the expectation of Y accordingly, that is E[Y] = ∫_0^∞ P(Y > t) dt − ∫_0^∞ P(Y < −t) dt.
- I see now that the first question has already been answered :) --PMajer (talk) 00:33, 4 January 2009 (UTC)
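A minimal Python sketch of that product rule, using independent exponential variables as an example (the rates below are arbitrary): since P(min > t) is the product of the individual tails exp(−λ_i t), the minimum is itself exponential with rate λ_1 + ... + λ_n, so its expectation is 1/(λ_1 + ... + λ_n).

```python
import random

rates = [0.5, 1.0, 2.5]   # arbitrary rates for three independent exponential variables
samples = 200_000

total = 0.0
for _ in range(samples):
    total += min(random.expovariate(lam) for lam in rates)

# P(min > t) = exp(-0.5 t) * exp(-1.0 t) * exp(-2.5 t) = exp(-4.0 t), so E[min] = 1/4.
print("simulated:", round(total / samples, 4))
print("predicted:", 1 / sum(rates))
```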
- Alright, I think I understand the infimum stuff. Who knew there was a situation where the greatest lower bound could be bigger than the least upper bound :-) I confess I don't understand a lot of the rest of the stuff you said (which contributes to my late reply, I decided to put off understanding it until later, heh). I think I'll use the n+1 portions technique for my program; in the meantime, can you recommend a decent free online probability textbook? (I am extremely cheap.) --Tigerthink (talk) 04:37, 5 January 2009 (UTC)
- Maybe somebody here can give a better suggestion about online textbooks than me. This [3] seems a friendly introduction. A nice introductory course is also Sinai's booklet "Probability Theory". In general, the first difficulty is the language of Probability, which is a bit eccentric with respect to the rest of Maths. Anyway, the choice of a book also depends on what your background is and what your aim is (mathematical physics, economics, combinatorics and number theory). As to the stuff I said, it was only two things: first, to write P(Y > t) as the product P(X_1 > t)···P(X_n > t), which is just by the independence; second, the formula for E[Y] in terms of P(Y > t), which is sometimes taken as a definition and which has a clear graphical interpretation... --PMajer (talk) 10:30, 5 January 2009 (UTC)
- OK, thanks again for all your help. --Tigerthink (talk) 19:27, 5 January 2009 (UTC)
Details:
- The probability that a random number, uniformly distributed on [0,1], is less than t, where 0 ≤ t ≤ 1, is t.
- The probability that four such numbers are all less than t is the power t^4.
- The probability that the highest of the four numbers is less than t is also t^4.
- The probability that such a random number is not less than t is 1 − t.
- The probability that the lowest of the four numbers is not less than t is (1 − t)^4.
- The probability that the lowest of the four numbers is less than t is 1 − (1 − t)^4.
- The distribution of the lowest of the four numbers is f(t) dt = d(1 − (1 − t)^4) = 4(1 − t)^3 dt.
- The mean value is ∫_0^1 t·f(t) dt = 4 ∫_0^1 t(1 − t)^3 dt = 1/5 (a numerical check follows below).
Bo Jacoby (talk) 23:19, 5 January 2009 (UTC).
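A quick cross-check of that last integral, done by exact term-by-term integration of the expanded polynomial in Python (a sketch, using only the standard library):

```python
from fractions import Fraction

# Expand the integrand 4 t (1 - t)^3 = 4t - 12t^2 + 12t^3 - 4t^4 and
# integrate each power of t over [0, 1]: the term c * t^k contributes c / (k + 1).
coefficients = {1: 4, 2: -12, 3: 12, 4: -4}   # power of t -> coefficient
mean = sum(Fraction(c, k + 1) for k, c in coefficients.items())
print(mean)   # prints 1/5, in agreement with the calculation above
```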
Gawsh, I thunk I gots the answer without thet thar hi falootin' edjumacated talk. Taking uniform distributions I say one number will be the highest and most likely in the middle of the range i.e. 0.5. Then there will be a second highest number most likely in the middle of the remaining range below 0.5, that's 0.25. In the same way the next number down is most likely 0.125. Then the lowest number is most likely 0.0625 = 1/16. Shucks, twernt nuthin hard ter figure out. Cuddlyable3 (talk) 12:31, 6 January 2009 (UTC)
- One number is most likely in the middle of the range i.e. 0.5. Well, it is equally likely to be any number in the range. BUT the second number need not be smaller than the first number, and so the highest of two numbers is most likely to be 1, and the smaller number is most likely to be 0. BUT the maximum likelihood estimate is not the same thing as the mean value. The mean value of the smaller of the two numbers is 1/3. See Beta distribution. Bo Jacoby (talk) 20:28, 7 January 2009 (UTC).
- Another argument (only for the uniform distribution): n random numbers (uniformly and independently distributed) on [0,1] make n+1 random intervals; the expected length of each interval is the same, because of the translation invariance of the Lebesgue measure. Since their lengths sum to 1, each interval has expected length 1/(n+1); in particular we get 1/(n+1) for the minimum, which is the length of the leftmost interval. (To put it in an even more symmetrical way: we may think of n+1 independent random points on S^1, where the first is just the choice of the origin. These make n+1 arcs of equal expected length, etc.) --PMajer (talk) 21:41, 7 January 2009 (UTC)
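That spacing argument can also be checked by simulation; a short Python sketch (n = 4 and 100,000 trials are arbitrary choices) sorts the points, forms the n+1 gaps, and averages each one:

```python
import random

n, trials = 4, 100_000
gap_sums = [0.0] * (n + 1)

for _ in range(trials):
    points = sorted(random.random() for _ in range(n))
    cuts = [0.0] + points + [1.0]
    for i in range(n + 1):
        gap_sums[i] += cuts[i + 1] - cuts[i]

# Every average gap length should come out close to 1/(n+1) = 0.2;
# the first gap is exactly the minimum of the four numbers.
print([round(s / trials, 4) for s in gap_sums])
```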
Lease money factor
I have always understood that to convert a "money factor" to an APR, you first had to divide the money factor by 12 (thus the annual part) and then multiply that answer by 2400 to come up with an equivalent annual percentage rate. In looking it up on your web site, the answers seem to leave out the division part and simply tell you to just multiply the number by 2400. Which is it? 19:47, 3 January 2009 (UTC) —Preceding unsigned comment added by 63.115.177.81 (talk)
- Are you referring to the article titled money factor? If so, it would help to say so and to link to it. Michael Hardy (talk) 20:32, 3 January 2009 (UTC)
- Mr. Hardy, thanks for answering. I apologize for not attaching the link(s) I was looking at. Two things: #1, I can't figure out the math question you posed (a+b+c+d+e+f+g+h) on your site, and I still can't find an answer to my question from the links I have found. Why would you NOT divide a money factor by 12 to create an "annual" percentage rate and subsequently multiply by 2400?
- It seems that (as an example) .0035 x 2400 = 8.4
- .0035 divided first by 12 =.00029166 and THEN multiply by 2400 = 6.999
- It seems to me that the correct APR in this case would be 6.999%(or basically 7%) instead of 8.4%
- Your input would be appreciated. Thanks —Preceding unsigned comment added by 63.115.177.81 (talk) 20:59, 3 January 2009 (UTC)
- Your last calc's off by a factor of 10, it would be 0.7%, not 7%. StuRat (talk) 21:19, 3 January 2009 (UTC)
- Wow, is money factor a poorly written article. Nowhere does it tell you what it is. It says it's given as a decimal, "for example .0035", but doesn't say what ".0035" tells you. So at the end of the day all I get is that when I see some money thing that's given as a decimal, it's called a "money factor"? Real useful. --76.167.241.238 (talk) 00:15, 4 January 2009 (UTC)
Money factor is a VERY badly written article. It doesn't define its terms at all. I can't tell what it says. "The finance charge you end up paying"? The only way to understand that would be to have information about rental arrangements that is not in the article. Michael Hardy (talk) 00:25, 4 January 2009 (UTC)
- I have somewhat re-written the article (which is currently nominated for deletion) and added more sources. Basically, "money factor" is half of the monthly interest rate, or 1/24 of the equivalent APR (but usually quoted as a decimal, not a percentage, so numerically it is APR percentage/2400). Why half of the monthly interest rate? Well, the amount outstanding on the car loan starts out at a value of C, the initial cost of the car, and ends at a value of F, the value of the car at the end of the lease, so the average outstanding balance is (C + F)/2 and the average monthly interest payment is
r(C + F)/2 = (r/2)(C + F)
- where r is the monthly interest rate. The factor r/2 is called the "money factor" - I guess "money factor 0.0030" sounds more attractive than "APR 7.2%". Gandalf61 (talk) 09:11, 4 January 2009 (UTC)
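To make the conversion concrete, a small Python sketch (the 0.0035 and 0.0030 factors are the figures used above; the car values in the last line are hypothetical):

```python
def money_factor_to_apr_percent(money_factor: float) -> float:
    """APR% = money factor * 2400; the factor already encodes 'per month' and the halving."""
    return money_factor * 2400

def avg_monthly_finance_charge(money_factor: float, initial_cost: float, residual_value: float) -> float:
    """Average monthly interest = (r/2)(C + F) = money_factor * (C + F)."""
    return money_factor * (initial_cost + residual_value)

print(money_factor_to_apr_percent(0.0035))                  # 8.4 (%); no extra division by 12
print(money_factor_to_apr_percent(0.0030))                  # 7.2 (%)
print(avg_monthly_finance_charge(0.0030, 30_000, 18_000))   # hypothetical C and F: 144.0 per month
```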