Wikipedia:Reference desk/Archives/Mathematics/2010 June 27
Welcome to the Wikipedia Mathematics Reference Desk Archives. The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 27
Speed please
After seeing all those x's and y's and things above, I feel a bit sheepish asking this here, but it will help with an edit I will (may) make to an article. What speed is an aircraft averaging in knots if (1) it covers 10,310 ft in 33 seconds, and (2) if it covers 20,555 ft in 112 seconds? Moriori (talk) 00:03, 27 June 2010 (UTC)
- 1) 185.106303 knots 2) 108.736648 knots.––220.253.216.181 (talk) 01:55, 27 June 2010 (UTC)
- Note that you can do this with the Google calculator built into their search engine: "10,310 ft / 33 s in knots" yields "(10 310 ft) / (33 s) = 185.106303 knots" -- ToET 08:01, 27 June 2010 (UTC)
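For anyone reproducing the arithmetic without Google, the conversion only needs the exact definitions 1 ft = 0.3048 m and 1 knot = 1852 m/h. A minimal sketch (the function name is our own, added for illustration):

```python
# Average speed over a timed distance, converted from ft/s to knots.
# Uses the exact definitions: 1 ft = 0.3048 m, 1 knot = 1852 m/h.
def ft_per_interval_to_knots(feet, seconds):
    metres_per_second = feet * 0.3048 / seconds
    return metres_per_second * 3600 / 1852

print(ft_per_interval_to_knots(10310, 33))   # ≈ 185.106303 knots
print(ft_per_interval_to_knots(20555, 112))  # ≈ 108.736648 knots
```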
Bayesian Inference?
Suppose I flip a biased coin three times, and the result is H, T, H. I infer from this data that the probability of H when I flip the coin is 2/3. Suppose I flip the coin a fourth time, and the result is H. I now infer that the probability of H when flipping the coin is 3/4.
What I would like to know is if I can use Bayesian inference to arrive at my conclusion. Let P(H) be the prior probability that I score H when flipping the coin, in this case 2/3. Let P(H|E) be the posterior probability that I score H when flipping the coin, i.e. the probability that flipping the coin results in H given that the fourth flip came up H, in this case 3/4.
Bayes' theorem states that
P(H|E) = P(E|H) · P(H) / P(E).
What do P(E|H) and P(E) represent, and what are their values in the example I have presented?––220.253.216.181 (talk) 01:50, 27 June 2010 (UTC)
- Consider a sample of n=3 flips out of a population of N=4 flips. The following table of 4 rows (for h = 0, 1, 2, 3 heads in the sample) and 5 columns (for H = 0, 1, 2, 3, 4 heads in the population) describes the odds.

       H=0  H=1  H=2  H=3  H=4
  h=0:  4    1    0    0    0
  h=1:  0    3    2    0    0
  h=2:  0    0    2    3    0
  h=3:  0    0    0    1    4
- As you observed h=2, consider the third row (0, 0, 2, 3, 0). So H=2 or H=3, with odds 2 : 3. (The impossible outcomes have odds zero.) So the probability that the fourth flip will be a head (H=3) is 3/5 = 60%. The odds are computed by the expression C(H,h)·C(N−H,n−h). If you divide by the column sum C(N,n) you get the hypergeometric distributions P(h|H) = C(H,h)·C(N−H,n−h) / C(N,n). The unconditional probabilities are P(H) = 1/(N+1) and P(h) = 1/(n+1). Bayes' rule for conditional probability says that
- P(H|h) = P(h|H) · P(H) / P(h),
- confirming that P(H=3 | h=2) = (3/4) · (1/5) / (1/4) = 3/5.
- Bo Jacoby (talk) 03:56, 27 June 2010 (UTC).
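Bo's population model above can be checked by exact enumeration (an illustrative sketch added here, not part of the original thread): put a uniform distribution on the number of heads H among the N=4 flips, make every arrangement equally likely given H, and condition on the first three flips showing two heads.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

N, n, h = 4, 3, 2
num = den = Fraction(0)
for H in range(N + 1):                       # total heads in the population
    for heads in combinations(range(N), H):  # positions of the heads
        # uniform over H, then uniform over arrangements given H
        w = Fraction(1, (N + 1) * comb(N, H))
        if sum(1 for i in heads if i < n) == h:  # sample shows h=2 heads
            den += w
            if N - 1 in heads:                   # the 4th flip is a head
                num += w
print(num / den)  # 3/5
```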
- The inferences you give are wrong, so there's no sense asking if they can be found with Bayesian inference. The correct conclusions can be found with Bayesian inference.
- The coin has a property p associated with it. p is the probability that the coin will land heads when tossed, and depends on the bias of the coin. p is unknown, and is thus treated as a random variable with a certain distribution. This distribution is your prior. A uniform prior is usually adequate. The density of the distribution is f(p) = 1 for 0 ≤ p ≤ 1.
- After the coin is tossed once and lands H you update your distribution:
- f(p|H) = p f(p) / ∫ q f(q) dq = 2p,
- where the denominator is found by integrating over p from 0 to 1.
- By the time you had 2H, 1T the distribution is f(p) = 12 p^2 (1−p). If you then want to know the probability of heads in the next toss, it's ∫ p · 12 p^2 (1−p) dp = 3/5. If you toss it again and get a third H the distribution is f(p) = 20 p^3 (1−p), and then ∫ p · 20 p^3 (1−p) dp = 2/3.
- The family of distributions which is best for this kind of problem is the beta distribution, because whenever you update the distribution still belongs to the family. The uniform distribution is a special case. The inferences you suggested correspond to a degenerate beta distribution, which essentially says that p is either 0 or 1 with equal probabilities (which is of course not appropriate). -- Meni Rosenfeld (talk) 08:41, 27 June 2010 (UTC)
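The Beta-family bookkeeping Meni describes can be sketched in a few lines (our illustration, assuming the uniform Beta(1, 1) prior): each head increments the first parameter, each tail the second, and the predictive probability of heads is the posterior mean a / (a + b).

```python
# Beta(a, b) posterior updating for coin flips, starting from the
# uniform prior Beta(1, 1); the predictive P(heads) is a / (a + b).
def update(a, b, flip):
    return (a + 1, b) if flip == 'H' else (a, b + 1)

a, b = 1, 1
for flip in "HTH":            # the three observed flips
    a, b = update(a, b, flip)
print(a / (a + b))            # 0.6 -> P(heads) = 3/5 after 2H, 1T
a, b = update(a, b, 'H')      # the fourth flip lands heads
print(a / (a + b))            # P(heads) = 2/3 after 3H, 1T
```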
- What's the justification for using the uniform prior?––220.253.216.181 (talk) 09:20, 27 June 2010 (UTC)
- It's the highest-entropy distribution on the unit interval. This can be taken to mean that it makes the least assumptions above and beyond the fact that 0 ≤ p ≤ 1, so it is appropriate when you really know nothing about p. However, in practice you may have some knowledge about the coin before ever tossing it - for example, you may know that coins of this type usually have p close to 1/2. For simplicity in calculations it is best to encode this knowledge as a beta distribution if at all possible. -- Meni Rosenfeld (talk) 10:59, 27 June 2010 (UTC)
- The beta distribution is the limiting case of these finite-population distributions for N → ∞. But it is unnecessary to consider a large population. To obtain the probability that the next flip of a coin is a head, it is sufficient to consider a population consisting of the sample supplemented by the next flip: N = n+1. The probability that the next flip is a head is simply (h+1)/(n+2). Bo Jacoby (talk) 13:31, 27 June 2010 (UTC).
Domain/Range of this function
What is the domain/range of this function?
$g(x) = \sqrt{16-x^4}$ —Preceding unsigned comment added by 69.230.55.21 (talk) 02:53, 27 June 2010 (UTC)
- The domain and range are something you specify, not something derived from a formula. For instance the domain and range of 2n could be either the integers or the reals or the complex numbers. By the way, range (mathematics) has two possible meanings: either codomain or image (mathematics). The purpose of them is to allow a set of functions to be treated uniformly without going into the details of the individual functions.
- For general practical maths purposes I believe they may mean yet another thing, where a function is considered as a set of pairs (x,y) in which no two pairs have the same x; this is a graph of a function. The domain is then considered to be the set of all x such that there exists a pair with x as the first element, and the range is considered to be the set of all y which occur in any of the pairs. They implicitly assume x is restricted to the reals unless they say otherwise. I'll assume this is what is meant by the question.
- For sqrt there is the additional problem that it does not even form a function in the modern sense if you include both positive and negative square roots. I'm not sure whether the original question meant just the non-negative square root or both.
- If only the non-negative square root is meant then the "range" is the non-negative real numbers, otherwise it is all the real numbers. The domain is where the argument to the square root is non-negative, so 16-x^4 ≥ 0, so x^4 ≤ 16, so -4 ≤ x^2 ≤ 4. For reals x^2 is always greater than or equal to zero, so 0 ≤ x^2 ≤ 4, so -2 ≤ x ≤ 2. So your domain is the reals from -2 to 2 inclusive. Dmcq (talk) 10:07, 27 June 2010 (UTC)
- And for the struck-through piece, the range probably is the possible non-negative values of the function, which are 0 to 4 inclusive. Other meanings of range could include all the reals, or just the non-negative reals, or -4 to 4, but 0 to 4 is probably what's wanted. Dmcq (talk) 11:35, 27 June 2010 (UTC)
- And of course you could always allow complex answers.... -mattbuck (Talk) 10:38, 27 June 2010 (UTC)
- According to Square root, √t (or sqrt(t)) always means the nonnegative root for nonnegative t. -- Meni Rosenfeld (talk) 11:22, 27 June 2010 (UTC)
- Mr. Humanzee, your query is slightly cryptic as you formulated it; no surprise if you weren't satisfied with the answers you got. However, I think you mean, first of all, that x is a real number, and that the "domain" is the larger set of real numbers where the expression is also well-defined as a real number, sometimes named the "natural domain". This is not implicit in the question as you put it (see the answers above), but it is the most likely interpretation, given the high-school flavour of your question. That said, you need 16 - x^4 to be non-negative, that is
- 16 - x^4 ≥ 0,
- which is equivalent to -2 ≤ x ≤ 2 (is that clear to you?), and this interval is the domain.
- The range of a function is the set of all values taken by it. In your case, x^4 takes all values between 0 and 16 as x varies in our "natural domain", so that 16 - x^4 also varies between 0 and 16. Taking the square root, you get all values between 0 and 4, and in conclusion the range is therefore the interval [0,4]. As a side remark: not only should you try to put your question into more precise terms, but it would be nice if you showed some attempt to solve the problem. Otherwise, people may think you are just trying to get your homework done while you enjoy the panorama. --pma 17:53, 27 June 2010 (UTC)
- Natural domain, thanks - I was thinking vaguely there should be a term for that concept. There's no article on it but my guess is that it's notable. Dmcq (talk) 20:23, 27 June 2010 (UTC)
- Yes; I realized later that you had already answered the OP's question in all detail; anyway, repeating things is not bad. --pma 22:05, 27 June 2010 (UTC)
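A quick numerical sanity check of the domain and range found above (illustrative only): sampling g(x) = sqrt(16 - x^4) across [-2, 2] should attain values from 0 up to 4.

```python
import math

# Sample g(x) = sqrt(16 - x^4) on the natural domain [-2, 2].
xs = [-2 + 4 * i / 100000 for i in range(100001)]
gs = [math.sqrt(16 - x ** 4) for x in xs]
print(min(gs), max(gs))  # 0.0 4.0 -> consistent with the range [0, 4]
```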
The probability that a hypothesis is true before any evidence is presented
Bayesian inference provides a method for updating the probability that a hypothesis is true when presented with some evidence, but it requires a prior probability that the hypothesis is true before the evidence is presented. This prior probability could have been formulated using the same method with different evidence and an even earlier prior probability. Presumably there must be an original probability that a hypothesis is true before any evidence is presented at all, so what is it? ––220.253.216.181 (talk) 09:37, 27 June 2010 (UTC)
- Just make a guess and make sure it has a wide margin of error and allows for you being totally and entirely wrong. Then stick to it. Don't do the business of going back and changing your prior to say something is even less likely if you find it actually happens! Dmcq (talk) 11:04, 27 June 2010 (UTC)
- "Just making a guess" doesn't always work. If your prior was ill-chosen, the posterior may remain bad even after a lot of evidence. -- Meni Rosenfeld (talk) 11:10, 27 June 2010 (UTC)
- The thing I was thinking of about changing priors was, for instance, the assumption that Saddam Hussein had weapons of mass destruction. After not finding any, the assumption was changed to say that he definitely had them and was even more cunning and evil than previously supposed in hiding them. Dmcq (talk) 11:20, 27 June 2010 (UTC)
- Sure, switching to a different prior after collecting data is wrong. My point was that from the start you should spend some time thinking about what prior accurately reflects your current state of knowledge. -- Meni Rosenfeld (talk) 11:28, 27 June 2010 (UTC)
- As I should have guessed, we have an article on it: Prior probability. Dmcq (talk) 11:09, 27 June 2010 (UTC)
- Possible approaches are the principle of maximum entropy and the Jeffreys prior. Occam's razor can be applied to assign a hypothesis less prior probability the more complex it is. -- Meni Rosenfeld (talk) 11:10, 27 June 2010 (UTC)
- If there are just two possibilities, either the hypothesis is true or the hypothesis is false, and you have no way of distinguishing between these two, then prior odds are 1:1 and the probability is 1/2. Bo Jacoby (talk) 13:59, 27 June 2010 (UTC).
- Hmmm ? My hypothesis is that if I flip a coin 10 times I will get 5 heads and 5 tails. This is either true or false. There are just two possibilities. I have never flipped this coin before and I have no knowledge of whether it is fair or biased. Does that mean that the prior odds of 5 heads from 10 flips are 1:1 ? And what then are the prior odds of 6 heads from 10 flips ? Or 4 heads from 10 flips ? Are they all 1:1 ? Gandalf61 (talk) 15:51, 27 June 2010 (UTC)
- I disagree with Bo. The natural choice of prior probability when you have absolutely no evidence is a uniform distribution of the underlying random variable. Not a uniform distribution of the truth/falsity of the hypothesis. So, in your example, we assume the coin is fair and calculate the probability of the hypothesis accordingly. I'm not quite sure what to do if the random variable is distributed over the real numbers, say, where there is no uniform distribution... --Tango (talk) 16:11, 27 June 2010 (UTC)
- I think Bo was speaking literally about a case where there is no underlying random variable, just a hypothesis for which we cannot distinguish its truth from its falsity. Imagine someone you trust giving you an object and telling you that it is either a thingamajig or a doojigger, but not both. You have no idea what a thingamajig or a doojigger are. What is the probability that the object is a thingamajig? It's 1/2.
- More generally, the discrete uniform distribution is the maximum-entropy distribution over a finite set of items.
- In Gandalf's example the options can be distinguished - one is much more specific than the other. Likewise for "I can either win the lottery or not. Hence, equal odds."
- As for unbounded random variables - an improper uniform prior can be used, as can an "almost uniform" distribution like Cauchy. -- Meni Rosenfeld (talk) 17:40, 27 June 2010 (UTC)
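The maximum-entropy claim is easy to see numerically (a small illustration added here): among distributions on a fixed finite set, the uniform one has the largest Shannon entropy.

```python
from math import log2

def entropy(ps):
    # Shannon entropy in bits; terms with p = 0 contribute nothing
    return sum(p * log2(1 / p) for p in ps if p > 0)

print(entropy([0.25] * 4))            # 2.0 -- the maximum for 4 outcomes
print(entropy([0.7, 0.1, 0.1, 0.1]))  # about 1.36, strictly less
print(entropy([1.0, 0.0, 0.0, 0.0]))  # 0.0 -- no uncertainty at all
```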
- But if you were given an object and told it was either a thingamajig, a doojigger or a thingamabob, and given a hypothesis "it is a thingamajig", then the natural prior probability of that hypothesis is not 1/2, it's 1/3. A hypothesis is not just an arbitrary sequence of words. Those words have meanings. When we say we have no prior knowledge we don't mean that we don't even understand the language the hypothesis is written in (if that were the case, then yes, our best effort would be to guess 50/50). I'm not sure what you mean by a case without an underlying random variable - if there isn't a random variable, then why are we talking about probabilities? In your example, the random variable is the identity of the object with the sample space {"thingamajig","doojigger"}. --Tango (talk) 18:11, 27 June 2010 (UTC)
- Again, the key part is "no way of distinguishing". In your example, one alternative is that the object is a specific type among 3 equally probable types, and the other is that the object is any of 2 types. There is no symmetry.
- I can contrive more examples where the active ingredient is not ignorance of the language used. I concede that Bo's original observation might only be usable in contrived examples.
- Ignore the part about no random variable. -- Meni Rosenfeld (talk) 19:45, 27 June 2010 (UTC)
- I think the lottery example points to the issue of assigning prior probabilities. How does one assign the prior probability of winning an otherwise unspecified lottery? In this situation you aren't told *which* lottery was played (you have no clue how many numbers were picked), or even where it was played (you can't guess which lottery might have been played). You don't even have a price/payout ratio to reason from. However, saying that winning and not winning are equally probable wouldn't be accurate, as most lotteries don't work that way (although there are some (non-governmental) lotteries which do have a 1:1 win:loss ratio). -- 174.24.195.56 (talk) 18:25, 27 June 2010 (UTC)
- One thing you can do is assign the system in question to a general reference class, and use data on that. "Ok, this is a lottery, and lotteries usually have winning probabilities of such-and-such, so I'll use a prior based on that." The result will depend on the reference class you choose, so your mileage may vary. -- Meni Rosenfeld (talk) 19:45, 27 June 2010 (UTC)
- (edit conflict) If you have an underlying random variable, then you do know something. The assumption was that you had no prior knowledge. Gandalf's coin has an unknown probability p for showing head, and the problem is to find a reasonable prior distribution f(p). Then compute the prior probability that your hypothesis is true: P(5 heads in 10 flips) = ∫ C(10,5) p^5 (1−p)^5 f(p) dp. If you have absolutely no knowledge about the coin, except that it may produce heads or tails when flipped, then f(p) = 1. If you can look at the coin and it looks symmetrical, then choose f(p) according to that. Bo Jacoby (talk) 17:56, 27 June 2010 (UTC).
- All you know is what the hypothesis says. It's pointless to speculate about the probability of an unknown hypothesis. --Tango (talk) 18:11, 27 June 2010 (UTC)
- Don't complicate simple cases with generality. When flipping a coin with two faces that are only superficially distinguishable, the prior odds heads:tails = 1:1. Knowledge about the performance history of the coin may change these odds: if 8 flips gave 8 tails then the odds for the 9th flip are heads:tails = 1:9. Bo Jacoby (talk) 21:11, 27 June 2010 (UTC).
- This is a general question. I was disagreeing with your statement that there being an underlying random variable counts as prior knowledge. Prior knowledge means some kind of evidence. The statement of the hypothesis, which is all that is required to determine what the random variable is, does not count. --Tango (talk) 21:21, 27 June 2010 (UTC)
- Yes this is a general question, and a difficult one, but many specific cases of the question are easy. What is the disagreement about? Are we getting different answers to the same questions, or is it just about wording? All you know is what the hypothesis says, you say; but what if you don't know what the hypothesis says, and all you know is that there is a hypothesis which may be either true or false? Bo Jacoby (talk) 04:08, 28 June 2010 (UTC).
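Bo's recipe with the uniform prior f(p) = 1 can be carried out exactly for Gandalf's hypothesis (a worked illustration added here): the prior probability of exactly k heads in n flips comes out to 1/(n+1) for every k, so "5 heads in 10" gets prior probability 1/11, not 1/2.

```python
from fractions import Fraction
from math import comb, factorial

# Integral of C(n,k) p^k (1-p)^(n-k) over [0,1], done exactly:
# the Beta integral gives k! (n-k)! / (n+1)!.
def prior_prob_k_heads(n, k):
    return comb(n, k) * Fraction(factorial(k) * factorial(n - k),
                                 factorial(n + 1))

print(prior_prob_k_heads(10, 5))  # 1/11
print(prior_prob_k_heads(10, 4))  # 1/11 -- every head count is equally likely
```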
This is an interesting article. Count Iblis (talk) 00:26, 28 June 2010 (UTC)
Surface area of a slice of a sphere
In a recent question posted to the Math Reference Desk, one answerer mentioned the result (attributed to Archimedes) that the surface area of a slice of a sphere formed by intersecting two parallel planes with the sphere depends only on the distance between the planes but not their locations. I thought that was a very interesting result and wondered how Archimedes might have derived it. Regardless of whether it is Archimedes' method, is there a proof for the result that uses only techniques understandable by elementary school students? --173.49.17.92 (talk) 16:03, 27 June 2010 (UTC)
- I expect Archimedes derived the result by The Method of Mechanical Theorems and proved it by the method of exhaustion. Algebraist 16:07, 27 June 2010 (UTC)
- The area is like the volume between two spheres as shown in [1]. You can see that the result for spheres is exactly the same as the result that the area of a ring only depends on the length of the tangent to the inner circle. Dmcq (talk) 16:21, 27 June 2010 (UTC)
- Glue a stamp to the surface of the earth, and project it onto a circular cylinder touching the earth around the equator (the projection lines being parallel to the equatorial plane and radial from the axis). The projection is shorter than the stamp in the north-south direction, but equally many times longer than the stamp in the east-west direction, due to some similar triangles. So the area of the stamp equals the area of its projection. And so the area of the sphere equals its projection upon the cylinder, which is the circumference times the diameter. Bo Jacoby (talk) 20:25, 27 June 2010 (UTC).
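The projection argument can also be checked numerically (our sketch, added for illustration): writing the zone of a unit sphere between heights a and b as a surface of revolution, the band radius sqrt(1 - z²) and the slant factor 1/sqrt(1 - z²) cancel exactly, so the area is 2π(b - a) wherever the slab sits.

```python
from math import pi

# Area of the zone of a unit sphere between heights z = a and z = b,
# approximated by thin horizontal bands of the surface of revolution.
def zone_area(a, b, steps=100000):
    total, dz = 0.0, (b - a) / steps
    for i in range(steps):
        z = a + (i + 0.5) * dz
        r = (1 - z * z) ** 0.5            # radius of the circle at height z
        slant = 1 / (1 - z * z) ** 0.5    # band width is dz * slant
        total += 2 * pi * r * slant * dz  # r * slant == 1: Archimedes' point
    return total

print(zone_area(-0.3, 0.2))  # ≈ pi, i.e. 2*pi*0.5
print(zone_area(0.4, 0.9))   # ≈ pi again: same height, same area
```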
An estimate to the closed form expression of a function
[edit]Let A be the set of all positive even integers.
I have the following function f:A-->N, where N is the set of all natural numbers. The first few values taken by f are:
1,2,3,4,5,6,7,8,9,10,11,11,12,13,14,15,16,17,18
i.e. f(2)=1, f(4)=2, f(6)=3 and so on. What I am interested in is an estimate of what a closed form expression might look like for f. It's known that f(n) <= n/2, and the values indicate a very small shift from n/2, so small that it isn't visible in the beginning. Can anyone give me any pointers? Thanks--Shahab (talk) 17:39, 27 June 2010 (UTC)
- How about f(n) = n/2 + [n = 2] – 2[n > 22], where [·] is an Iverson bracket? Qwfp (talk) 17:54, 27 June 2010 (UTC)
- Is the Iverson bracket usually allowed in closed form expressions? It's not one of the usual elementary functions. (Of course, you can choose whatever functions you like to consider elementary for a certain problem, but since the OP didn't specify we should assume we're using the usual functions (plus floor and ceiling, perhaps, since we're working with integers).) --Tango (talk) 18:21, 27 June 2010 (UTC)
- By the way, f(n)<=n/2 appears to be false for n=2. Qwfp (talk) 17:56, 27 June 2010 (UTC)
- yes that was a mistake. Thanks-Shahab (talk) 18:04, 27 June 2010 (UTC)
- Corrected. How should I modify it now? Actually the problem is part of a much bigger problem, a similar result gives an estimate dealing with the ceiling function. Thanks-Shahab (talk) 18:08, 27 June 2010 (UTC)
- f(n) = n/2 - floor(n/24) works (that's the floor function - I don't think you'll find anything that doesn't require something like floor, except for an 18th degree polynomial, which probably isn't what you want). --Tango (talk) 18:21, 27 June 2010 (UTC)
- Thank you. Would you mind telling me how you deduced it? -Shahab (talk) 18:33, 27 June 2010 (UTC)
- The function is n/2, minus a small error term. That error term is 0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1 - a function with a step in it like that can usually be written using the floor function (or ceiling function), so I just worked out what function would cross from being 0.something to 1.something at the appropriate point, and n/24 does so. --Tango (talk) 18:37, 27 June 2010 (UTC)
- You haven't really given enough terms, though. The difference between your function and n/2 only changes once. You can't really predict where else it will change from just one data point. I've guessed that it will change by 1 every 12 terms, but that's a complete guess. --Tango (talk) 18:39, 27 June 2010 (UTC)
- Thanks for the help and the explanation Tango-Shahab (talk) 19:16, 27 June 2010 (UTC)
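For what it's worth, the guess above (n/2 minus floor(n/24), i.e. an error term that steps from 0 to 1 at n = 24) can be checked mechanically against all 19 listed values:

```python
# The 19 values f(2), f(4), ..., f(38) from the question.
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 11, 12, 13, 14, 15, 16, 17, 18]

def f(n):
    return n // 2 - n // 24   # n/2 - floor(n/24), for even n

print(all(f(n) == v for n, v in zip(range(2, 40, 2), values)))  # True
```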
- A somewhat simpler guess is that the error jumps only once. Dear Shahab, have you got a definition for this function, or some more terms? --pma 19:25, 27 June 2010 (UTC)
- Hi Pmajer. I hope you are well. I am glad you are around. I'll try to describe the whole problem to you, in fact I will be glad if you (or anybody else here) can give me any general advice regarding the problem.
- Consider the equation v+x+y-z = b (b is an even natural number). It is known that there exists a least natural number r(b) such that if {1, 2, ..., r(b)} is partitioned into 2 classes arbitrarily, at least one class contains a solution to the given equation. My final goal is to find a formula for r(b). To this end, I have written a computer program (using brute force) to estimate various values of r(b) for different b, and I use the bound r(b) <= b/2, as v=x=y=z=b/2 is a trivial solution. The problem is that the program starts taking an awful lot of time to estimate r(b) as b increases, because the number of possible partitions grows as a power of 2.
- Is there any general advice on how to approach this problem, any techniques etc you (or anyone) can suggest. I will really appreciate it.--Shahab (talk) 19:13, 28 June 2010 (UTC)
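For the record, here is a sketch of the brute-force computation described (our own illustrative code, not Shahab's program): r(b) is the least r such that every 2-coloring of {1, ..., r} gives some class containing v, x, y, z with v + x + y - z = b. Checking all 2^r colorings is exactly the exponential blow-up complained about above.

```python
from itertools import product

# Does some class contain v, x, y, z (repeats allowed) with v+x+y-z = b?
def has_mono_solution(cls, b):
    s = set(cls)
    return any(v + x + y - b in s for v in cls for x in cls for y in cls)

# Least r such that EVERY 2-coloring of {1,...,r} has a monochromatic
# solution; exponential in r, so only usable for small b.
def r_of_b(b, max_r=20):
    for r in range(1, max_r + 1):
        ok = True
        for colors in product((0, 1), repeat=r):
            classes = ([i + 1 for i in range(r) if colors[i] == 0],
                       [i + 1 for i in range(r) if colors[i] == 1])
            if not any(has_mono_solution(c, b) for c in classes):
                ok = False   # this coloring avoids all solutions
                break
        if ok:
            return r

print([r_of_b(b) for b in (2, 4, 6, 8)])  # [1, 2, 3, 4]
```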
Is this a well-known search algorithm, and, if so, what is it called?
I have some questions about a method I have thought of for finding shortest routes between 2 locations on a map:
1) Does it give the right answer?
2) Is it already well-known (which I expect it will be as it is so simple)? and if so does it have a name?
3) As this method is basically an analogue computer, does it have a digital version, ie one that can be described by a list of steps as in the Graph search algorithms articles?
The method:
a) Physically make a copy of the map with string and knots, where the knots represent locations on the map and the lengths of the bits of string represent the distances.
b) Take hold of the two knots representing the 2 relevant locations and pull them as far apart as possible without breaking anything.
c) The pieces of string that are now taut represent the shortest route.
I have looked at the Graph search algorithms, and it does not seem to be any of those.
FrankSier (talk) 23:31, 27 June 2010 (UTC)
- I've no idea about the name of graph search algorithms. However your physical method is limited by the accuracy and size of knot tying and placement, and by the flexibility of the cotton, i.e. it works for crude graphs where different paths are not very similar in length, but won't distinguish them well. There are mathematically precise algorithms. -- SGBailey (talk) 06:09, 28 June 2010 (UTC)
- It's been called the ball and string model in some papers, and also used as an illustration of how some problems can be solved easily physically. I forget where I saw it used in an elementary demonstration; I don't think it was called the ball and string model there - a pity, as I like those sorts of models. Dmcq (talk) 09:43, 28 June 2010 (UTC)
- Google Scholar (and Google web for that matter) search for "ball and string model" finds this academic paper: "New dynamic SPT algorithm based on a ball-and-string model", P Narvaez, KY Siu, HY Tzeng. IEEE/ACM Transactions on Networking 2001. doi:10.1109/90.974525. Neither the model nor the phrase is referenced in the paper so it appears they thought it up, although others may have come up with it too independently of course. Qwfp (talk) 09:57, 28 June 2010 (UTC)
- You might be interested in reading about Dijkstra's algorithm. Sorry, it was already in the list that you linked. Dbfirs 12:56, 28 June 2010 (UTC)
- I had another google and it looks like George Minty referred to using strings knotted together to find the shortest path in a 1957 paper when describing the problem. Dmcq (talk) 13:17, 28 June 2010 (UTC)
- Oh, so he did: Minty, George J. (1957). "A Comment on the Shortest-Route Problem". Operations Research. 5 (5): 724. doi:10.1287/opre.5.5.724. JSTOR 167474. It's not really a paper though, just a very short comment. Nevertheless, it's been cited 39 times according to Google Scholar: [2] Qwfp (talk) 13:45, 28 June 2010 (UTC)
- I think there should be a measure of citation density given by citations/pages. That would concentrate minds on cutting papers down though it might make some of them even more unintelligible :) Dmcq (talk) 14:41, 28 June 2010 (UTC)
- AK Dewdney had a great popular-science-level discussion on analog computers, I think it was in The Armchair Universe, but it may have been Turing Omnibus. He mentions your algorithm and a number of others, such as spaghetti sort. (He has a clever algorithm for finding the two most distant points in a tree, too.) He also has a good discussion of the computational complexity and limitations of such algorithms. Eric. 66.30.118.93 (talk) 02:44, 29 June 2010 (UTC)
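On question 3: the usual "digital version" of the taut-string computer is Dijkstra's algorithm, already linked above. A minimal sketch (the graph encoding and names are our own): instead of pulling two knots apart, it grows the set of knots whose taut-string distance from the start is known, always finalizing the nearest unfinalized knot next.

```python
import heapq

def dijkstra(graph, start, goal):
    # graph: dict node -> list of (neighbour, edge_length) pairs
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float('inf')):
            continue                     # stale queue entry
        for nbr, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return None                          # goal unreachable

# A toy map: the A-B-D strings total length 3, the direct A-D string is 5.
toy = {'A': [('B', 1), ('D', 5)], 'B': [('D', 2)], 'D': []}
print(dijkstra(toy, 'A', 'D'))  # 3
```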