Wikipedia:Reference desk/Archives/Mathematics/2009 March 5
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
March 5
Length of a curve
Can I get some tips on integrating expressions such as √(a + x²), which come up when finding the length of a curve? 72.200.101.17 (talk) 02:20, 5 March 2009 (UTC)
- These are usually the differentials of things like sin⁻¹x, iirc. If you've got a table of standard integrals, it's likely to be on that. -mattbuck (Talk) 03:12, 5 March 2009 (UTC)
- This is a classic case where the solution is found by thinking and working outside the box. See Trigonometric substitution. (And be sure to note that the a in that article's examples is not quite the same as the a in your problem.) -- Tcncv (talk) 04:31, 5 March 2009 (UTC)
The trigonometric substitution x = (√a) tan θ will transform that one into the integral of secant cubed. Michael Hardy (talk) 00:46, 6 March 2009 (UTC)
- Why use such a substitution when you have hyperbolic trigonometry? Put x = (√a) sinh t (a monotonic function on the whole real line) and use cosh²t − sinh²t = 1 together with dx = (√a) cosh t dt, followed by the double-"angle" formula for cosh²t. — Pt (T) 11:10, 12 March 2009 (UTC)
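A worked sketch of the hyperbolic route, assuming the integrand is √(a + x²) as reconstructed above:

\[
x = \sqrt{a}\,\sinh t, \qquad dx = \sqrt{a}\,\cosh t\,dt, \qquad \sqrt{a + x^2} = \sqrt{a}\,\cosh t,
\]
\[
\int \sqrt{a + x^2}\,dx = a \int \cosh^2 t\,dt = \frac{a}{2}\int (1 + \cosh 2t)\,dt = \frac{a}{2}\left(t + \tfrac{1}{2}\sinh 2t\right) + C,
\]

after which one back-substitutes t = sinh⁻¹(x/√a).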
Continued - Random Generator
This is a continuation of the questions I asked above about MATLAB, C++, and a good random number generator. How about a generator that will give me a decimal between zero and one? I know I can always use any generator and then divide the output by the maximum output possible but I really don't want to be dividing by really large numbers 20,000 times.-Looking for Wisdom and Insight! (talk) 05:54, 5 March 2009 (UTC)
- Use J (programming language). Get 20000 random numbers between 0 and 1 by typing
- ? 20000 $ 0
- Bo Jacoby (talk) 06:23, 5 March 2009 (UTC).
- If you're worried about computation overhead, keep in mind that the computation time spent on calculating the next random integer likely far dwarfs a single float division computation. Also, if you are dividing by a constant power of two, the computation time is negligible (depending on how smart your compiler is): a constant just needs to be subtracted from the exponent.
- Although I didn't comment on your discussion above, I've used the GNU Scientific Library's random number generators before, and found them easy to use and flexible. (But I did not have specific requirements for a random number generator, any one would do for me.) The documentation can be found here. Click "Random Number Generator Algorithms" to get the list of high quality RNGs implemented in the GSL. Eric. 131.215.158.184 (talk) 08:49, 5 March 2009 (UTC)
- (ec): Did you check out the GNU Scientific Library? I assume that what you want is sampling from a uniform distribution in the interval [0..1]. The function gsl_rng_uniform does almost that, it returns numbers in the interval [0..1), i.e. it may return 0, but never 1. Internally, it performs the division that you're reluctant to do. Btw, why are you worried about 20,000 divisions in C++? If you're interested in other languages, the command in R (programming language) is runif(20000), where runif is short for random uniform. --NorwegianBlue talk 08:57, 5 March 2009 (UTC)
- 20000 divisions don't matter nowadays on a computer, but if you're really worried about divides then simply precompute the inverse and multiply by that. Dmcq (talk) 13:09, 5 March 2009 (UTC)
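A minimal Java sketch of the two routes discussed above (java.util.Random is assumed as the generator; nextDouble() already returns a value in [0, 1), and the manual route multiplies by a precomputed reciprocal instead of dividing on every draw):

    import java.util.Random;

    public class UniformDemo {
        public static void main(String[] args) {
            Random rng = new Random();

            // Route 1: the library call already yields a double in [0, 1).
            double u = rng.nextDouble();

            // Route 2: scale a non-negative 31-bit integer by a precomputed
            // reciprocal of 2^31, so the loop contains no division at all.
            final double inv = 1.0 / (1L << 31);
            double[] samples = new double[20000];
            for (int i = 0; i < samples.length; i++) {
                int r = rng.nextInt() >>> 1;   // uniform integer in [0, 2^31)
                samples[i] = r * inv;          // uniform double in [0, 1)
            }
            System.out.println(u + "  " + samples[0]);
        }
    }

Since the reciprocal is an exact power of two, the multiplication only adjusts the exponent, which is the cheap case mentioned above.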
Uniform distribution and independence
Could anyone possibly explain briefly to me how one would go about showing that if X and Y are independent, identically distributed random variables, each uniformly distributed on [0,1], and U = X + Y, V = X/Y, then U and V aren't independent?
I think it's something to do with the fact that if U is large within the given range, the ratio of X:Y is approximately 1, but I'm not sure... help! Thanks :) —Preceding unsigned comment added by 131.111.8.98 (talk) 15:15, 5 March 2009 (UTC)
- Yes, follow your idea. Consider the event U>3/2 and the event V>2, for instance, and their intersection --pma (talk) 16:09, 5 March 2009 (UTC)
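Spelling that hint out, assuming U = X + Y and V = X/Y as above: both events have positive probability on their own, but they cannot occur together, so the product rule for independent variables fails.

\[
P\!\left(U > \tfrac{3}{2}\right) \ge P\!\left(X > \tfrac{3}{4},\, Y > \tfrac{3}{4}\right) = \tfrac{1}{16},
\qquad
P\!\left(V > 2\right) \ge P\!\left(X > \tfrac{1}{2},\, Y < \tfrac{1}{4}\right) = \tfrac{1}{8},
\]
\[
\text{while } X + Y > \tfrac{3}{2} \text{ and } X > 2Y \text{ give } X > \tfrac{3}{2} - Y > \tfrac{3}{2} - \tfrac{X}{2},
\text{ i.e. } X > 1, \text{ which is impossible on } [0,1],
\]
\[
\text{so } P\!\left(U > \tfrac{3}{2},\, V > 2\right) = 0 \ne P\!\left(U > \tfrac{3}{2}\right) P\!\left(V > 2\right) > 0 .
\]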
- (slight topic hijack, hope it's ok) What is a good book to read about this stuff? One that would let me write a sentence like the one you (pma) just wrote. Something that explains what random variables and events are in a practical enough way to show how to do such calculations, but also mathematically rigorous enough (let's say, starting from basic real analysis) to explain what the words really mean. Thanks. 207.241.239.70 (talk) 04:45, 7 March 2009 (UTC)
Differential Equations for Chemical Engineering
I'm currently reading a little into reactor design for a project I am doing (in particular, this collection of lectures). It's going well, except for some problems with differential equations (which I only started investigating 10 minutes ago, so apologies if I'm asking obvious questions). For example, the design equation of a batch reactor is:
I am reliably informed that this is an ordinary differential equation and can be solved by separating the variables. Hence:
So far, I'm happy with that. It then says "integrating gives":
This is where I get stuck. First of all, integrating with respect to what? As I don't understand this, I don't understand why dt becomes t and dy becomes 1 (i.e. so integrates to give ). I have a feeling I've made a complete hash of this, can anyone set me straight? Thanks. --80.229.152.246 (talk) 21:30, 5 March 2009 (UTC)
- That should be
- Separation of variables is a short-cut, but somewhat of an abuse of notation when expressed like this, as it trades on the (intentional) resemblance of Leibniz's notation to a fraction. A more rigorous approach is:
- then you integrate both sides with respect to t to get
- Gandalf61 (talk) 21:58, 5 March 2009 (UTC)
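As a concrete illustration of the same manipulation, assume a first-order rate law in a constant-volume batch reactor, dC/dt = −kC (an assumed example; the lecture's own equation may differ). Dividing by C and integrating both sides with respect to t gives

\[
\int \frac{1}{C}\,\frac{dC}{dt}\,dt = \int (-k)\,dt
\quad\Longrightarrow\quad
\int \frac{dC}{C} = -kt + \text{const}
\quad\Longrightarrow\quad
\ln \frac{C}{C_0} = -kt .
\]

The bare t appears because the right-hand side is the integral of a constant with respect to t; the substitution rule is what justifies collapsing (1/C)(dC/dt) dt into dC/C.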
- Thanks very much. It makes a lot more sense now. --80.229.152.246 (talk) 23:34, 5 March 2009 (UTC)
- If you are familiar with group theory (which has many applications in chemistry, by the way), you will know that manipulations with the quotient operator are also an "abuse of notation" analogous to manipulating the differentials. --PST 04:17, 6 March 2009 (UTC)
Calculation of Pi
Suppose I randomly pick a point inside a 1x1 square, determine whether its distance from a certain vertex is less than 1, and repeat the process n times. I then calculate pi using pi = (number of hits)/(number of trials)*4. How large does n have to be if I want pi to be accurate to k digits?
Out of curiosity, I wrote a Java program to calculate pi in this way. The first 10 million repetitions gave me 3.142, but the next 250 billion only managed to determine 1 extra digit. --Bowlhover (talk) 22:13, 5 March 2009 (UTC)
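A minimal Java sketch of the experiment as described (not the original program; java.util.Random and a hits/trials ratio are assumed):

    import java.util.Random;

    public class MonteCarloPi {
        public static void main(String[] args) {
            final long trials = 10000000L;
            Random rng = new Random();
            long hits = 0;
            for (long i = 0; i < trials; i++) {
                double x = rng.nextDouble();   // random point in the unit square
                double y = rng.nextDouble();
                if (x * x + y * y < 1.0) {     // inside the quarter circle; no sqrt needed
                    hits++;
                }
            }
            // The hit fraction estimates the quarter-circle area pi/4.
            System.out.println(4.0 * hits / trials);
        }
    }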
- With n trials, you get a random variable with mean π and standard deviation sqrt((4π − π²)/n) ≈ 1.6/sqrt(n). To get k reliable digits, you want the s.d. to be well under 10^−k, say under 10^−k/3, so you want over 25·10^(2k) trials. So your 250 billion trials should be good for 4 or 5 digits, as you discovered. Algebraist 22:23, 5 March 2009 (UTC)
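Making that sample-size estimate explicit, with the standard deviation required to be about a third of the last digit's place value:

\[
\frac{\sqrt{4\pi - \pi^2}}{\sqrt{n}} \le \frac{10^{-k}}{3}
\quad\Longleftrightarrow\quad
n \ge 9\,(4\pi - \pi^2)\,10^{2k} \approx 25 \times 10^{2k},
\]

so each additional reliable digit costs roughly a factor of 100 more trials.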
- This reminds me of the story of the famous Buffon's needle. Various people enjoyed the experimental measurement of π by counting intersections of needles thrown on the parquet. Results:
- Wolf (1850), 5000 needles, π ≈ 3.15..
- Smith (1855), 3204 needles, π ≈ 3.15..
- De Morgan (1860), 600 needles, π ≈ 3.13..
- Fox (1864), 1030 needles, π ≈ 3.15..
- Lazzarini (1901), 3408 needles, π ≈ 3.141592..
- Reina (1925), 2520 needles, π ≈ 3.17..
- --pma (talk) 00:10, 6 March 2009 (UTC)
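For reference, the estimator behind those experiments, assuming a needle of length ℓ no longer than the spacing d between the floorboard lines: a dropped needle crosses a line with probability

\[
p = \frac{2\ell}{\pi d},
\qquad\text{so}\qquad
\pi \approx \frac{2\ell}{d}\cdot\frac{\text{number of throws}}{\text{number of crossings}} .
\]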
- What did Lazzarini do differently to the others? That can't just be good luck... --Tango (talk) 00:15, 6 March 2009 (UTC)
- I clicked the link... surprising how informative that can be! --Tango (talk) 00:17, 6 March 2009 (UTC)
- That's some interesting cheating. If Lazzarini had done his experiment the proper way, it would probably have taken him millions of years to get 3.141592. Even my program, which has finished 360 billion trials, is still reporting around 3.14159004, with no indication that the first "0" will become "2" anytime soon.
- Algebraist: how did you get the standard deviation expression sqrt((4π-π2)/n)? I know very little about statistics, so please explain. Thanks! --Bowlhover (talk) 06:07, 6 March 2009 (UTC)
- A single trial gives rise to a Bernoulli random variable with parameter p = π/4. It thus has mean π/4 and variance π/4 − π²/16. Multiplying by 4 gives a r.v. with mean π and variance 4π − π². Averaging out n of these (independently) gives mean π and variance (4π − π²)/n. Standard deviation is the square root of variance. Algebraist 09:03, 6 March 2009 (UTC)
- It will take me a while to learn those concepts, but thanks! --Bowlhover (talk) 08:15, 7 March 2009 (UTC)
- Are you using something faster than Math.random() to generate random numbers, or do you have a really fast computer? After some vague amount of time (15 minutes?) I've only got 2 billion trials... though I already have 3.1415 stably.
- I am vaguely contemplating the difficulty of writing a program that cherry-picks results: given a fixed amount of time (measured in trials), it would calculate every few million trials how close the current approximation to pi is, and calculate whether continuing to run the program or restarting it gives a lower expected value for your final error. Or maybe, à la Lazzarini, it chooses a not-too-ambitious continued fraction approximant for pi and attempts to hit it exactly, aborting when that happens. Eric. 131.215.158.184 (talk) 07:01, 6 March 2009 (UTC)
- I'm using java.util.Random, the same class that Math.random uses, so I don't think that has anything to do with it. My computer's processing speed is probably not relevant either because it is only 2 GHz, probably slower than your computer's. Maybe you're outputting the value of pi every iteration? I used "if (num_trials%10000000==0)" so that printing to screen doesn't limit the program's speed.
- You might be interested to know that I modified the program to only output a calculated value of pi if it is the most accurate one made so far. Here are the results (multiply all the fractions by 4):
Data dump (each line gives the estimate, then the fraction that was multiplied by 4):
4.0                 1/1
3.5                 7/8
3.272727272727273   9/11
3.142857142857143   11/14
3.140534262485482   1352/1722
3.1410330818340104  1353/1723
3.1415313225058004  1354/1724
3.1416202844774275  2540/3234
3.1415743789284645  2624/3341
3.1416072990876143  6284/8001
3.1416015625        6434/8192
3.141587606581002   6540/8327
3.141589737441554   6551/8341
3.1415929203539825  6745/8588
3.1415924255132652  519694/661695
3.1415928668580926  519698/661700
3.1415928268290365  520214/662357
3.1415924928442895  520254/662408
3.1415925871823793  521535/664039
3.1415926140510604  521901/664505
3.1415926673317953  521923/664533
3.14159265069769    2757939/3511517
3.141592653304311   2758832/3512654
3.1415926537518883  2789534/3551745
3.1415926534962155  8230357/10479216
3.1415926535954957  8415806/10715337
3.1415926535873497  8493712/10814530
3.1415926535881242  31499778/40106763
3.141592653590579   41739544/53144438
3.1415926535901835  260989009/332301527
3.141592653589953   261093002/332433935
3.1415926535896586  409395251/521258223
3.1415926535898775  484979918/617495610
3.141592653589793   39509540665/50305109569
- It's neat that the last, 3.141592653589793, is accurate to the last printed digit. --Bowlhover (talk) 08:16, 7 March 2009 (UTC)
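A sketch of that "print only the best estimate so far" modification, along the lines of the MonteCarloPi sketch further up (the comparison against Math.PI is exactly the "cheating" discussed below; names are illustrative):

    import java.util.Random;

    public class BestSoFarPi {
        public static void main(String[] args) {
            Random rng = new Random();
            long hits = 0, trials = 0;
            double bestError = Double.MAX_VALUE;
            while (trials < 100000000L) {
                double x = rng.nextDouble(), y = rng.nextDouble();
                if (x * x + y * y < 1.0) hits++;
                trials++;
                double estimate = 4.0 * hits / trials;
                double error = Math.abs(estimate - Math.PI);
                if (error < bestError) {   // report only improvements on the best so far
                    bestError = error;
                    System.out.println(estimate + "  " + hits + "/" + trials);
                }
            }
        }
    }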
- My computer's new, but it's a low-end laptop. Hmmmm. No, I print out every 2^24 iterations. I don't do a square root, either, which is an easy mistake. Maybe I'm just impatient with not running the program long enough. Good idea with printing only the best approximation so far... it sort of "cheats" because as your current estimate drifts from being too low to too high it must pass very near pi in between. Eric. 131.215.158.184 (talk) 21:27, 7 March 2009 (UTC)
- I suspect that Lazzarini's point was to show how delicate a matter a statistical experiment is... --pma (talk) 09:05, 6 March 2009 (UTC)