Wikipedia:Reference desk/Archives/Mathematics/2012 June 3
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 3
Phase trajectories
How do you prove that a phase trajectory is closed?
For a concrete example, suppose the energy for some system, where ε is some constant. I want to prove that all phase trajectories are closed, but I don't know how. I can show that for any (positive) value of E, there's a value of x such that dx/dt is zero. But does this *prove* that the phase trajectories are closed? If it does, I don't see how. 65.92.7.168 (talk) 02:32, 3 June 2012 (UTC)
- A phase trajectory isn't necessarily closed. The pendulum energy is E = (dθ/dt)²/2 − cos θ. Solutions for E > 1 are unbounded and thus not closed. Solutions for −1 < E < 1 are closed because the level set is a bounded, one-dimensional manifold. The solutions for E = −1 are stable equilibria. The solutions for E = 1 include unstable equilibria and unclosed bounded trajectories. Bo Jacoby (talk) 11:34, 3 June 2012 (UTC).
- What I meant was, I would like to know how to prove that a phase trajectory is closed if it really is closed. For instance, in the example I gave above I can see that all the phase trajectories are closed just by graphing them. But I'd like to know how to prove it. 65.92.7.168 (talk) 15:46, 3 June 2012 (UTC)
- You can make the same argument for your example. There are two variables, x and dx/dt, and one equation, E(x, dx/dt) = constant, which limits the possible points in R² to a variety with real dimension 1 or less. It's also clear that the set is bounded, so it must be a closed curve, a point, or the empty set (see the sketch at the end of this thread). Rckrone (talk) 20:36, 5 June 2012 (UTC)
- What do you mean by "the set is bounded"? 65.92.7.168 (talk) 23:46, 5 June 2012 (UTC)
- See our article Bounded set. Bo Jacoby (talk) 21:57, 7 June 2012 (UTC).
- Thanks. 65.92.7.168 (talk) 04:03, 8 June 2012 (UTC)
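As an illustration of the boundedness argument in this thread, here is a rough numerical sketch in Python (NumPy assumed available; the function name, the scan window and the choice of test energies are mine) for the pendulum energy quoted in the first reply, written as E = v²/2 + U(θ) with U(θ) = −cos θ. If there are forbidden points (U > E) on both sides of the starting angle, the motion is trapped between two turning points, and the bounded one-dimensional piece of the level set it traces out is a closed curve (for regular energy values). This is a sanity check of that picture, not a proof.

```python
import numpy as np

def orbit_is_trapped(E, U=lambda th: -np.cos(th), x0=0.0, span=200.0, n=400001):
    """Return True if, at energy E, there are forbidden points (U > E) on both
    sides of x0 within the scanned window, so the motion through x0 is trapped
    between two turning points and traces out a bounded closed curve."""
    x = np.linspace(x0 - span, x0 + span, n)
    forbidden = U(x) > E            # where v**2 = 2*(E - U(x)) would be negative
    i0 = n // 2                     # index of x0 (n is odd, so x[i0] == x0)
    if forbidden[i0]:
        return False                # x0 itself is unreachable at this energy
    return bool(forbidden[:i0].any() and forbidden[i0 + 1:].any())

# Pendulum, U(theta) = -cos(theta): closed orbits for -1 < E < 1,
# unbounded (rotating) trajectories for E > 1.
for E in (-0.5, 0.5, 1.5):
    print(E, orbit_is_trapped(E))   # True, True, False
```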
Fair scoring
This question was suggested by tennis scoring. Suppose, in a certain game, if player A plays first (e.g. serves in tennis) then player A wins with probability p and player B wins with probability 1 − p. Conversely, if player B plays first then player B wins with probability p and player A wins with probability 1 − p (i.e. the players are evenly matched). A match consists of a series of games. In the first a games, player A plays first. Then player B plays first in the next b games, then player A plays first in the next b games, and so on, with players alternating playing first for b games in a row. The match finishes when one player has at least c wins and is d wins ahead of the other player. For example, in a tennis set without a tie-break, a = 1, b = 1, c = 6, d = 2. In a tennis tie-break, a = 1, b = 2, c = 7, d = 2. The scoring system is fair if both players have a 50-50 chance of winning the match irrespective of the value of p. I assume the system must be fair for both the above tennis examples (though even this is not completely blindingly obvious to me), but what other fair values are possible for a, b, c and d? What are the general conditions? 86.181.203.150 (talk) 02:58, 3 June 2012 (UTC)
- Intuitively almost no such system can be fair by your definition. You are requiring that the probability of either player winning must be made a constant function of p for some choice of a small number of parameters, and further that these be natural numbers. Fairness would be possible if you were allowed to choose a specific p to achieve this (e.g. p=0.5), or to have an additional source of randomness (e.g. flipping a fair coin to choose who starts). For almost all values of p, one of the players will have a non-zero advantage for any given set of values for the parameters. In tennis, the system is chosen to make it "approximately fair", primarily through the increased values of c and d: if d≤a or d≤b−a, it is clear that one player will be certain to win unfairly when p=0 or p=1, while c helps to even things a bit by ensuring that more than a few random variables are summed. — Quondum☏ 06:47, 3 June 2012 (UTC)
- I am not convinced that your intuition is correct. I think you will find that the tennis systems are exactly fair, for any value of p, according to my definition.* I would certainly value a second (or third) opinion on that. 86.148.154.177 (talk) 11:56, 3 June 2012 (UTC) *We can ignore p = 0 and p = 1, where the match would go on forever.
- After a closer look, I must concede your point. A match is fair when terminating is restricted to when both players have started an equal number of games. Various combinations of a, b and d ensure this (e.g. a=1, b=1 or 2, d even; c is immaterial). — Quondum☏ 13:47, 3 June 2012 (UTC)
- "a=1, b=1 or 2, d even; c is immaterial" looks good! Do you think there are any other fair possibilities? 86.148.154.177 (talk) 22:00, 3 June 2012 (UTC)
- Just to be clear, d being even does not guarantee that the number of games played is even (for example, a tennis set can end 6-1 or 6-3) and obviously if the number of games played is odd then one player must have started more than the other. However, if d is even and the number of games played is odd, then the winning margin must be odd and hence greater than d, and so playing one extra game couldn't affect the result. So I agree that Quondum's examples are fair. 60.234.242.206 (talk) 00:33, 4 June 2012 (UTC)
- (edit conflict with previous message) I do not understand "A match is fair when terminating is restricted to when both players have started an equal number of games." Take the case a = 1, b = 1, c = 6, d = 2. If player A wins the first 6 games then player A wins. If player B wins the first game, and player A wins the next 6 games then player A wins. However your statement is interpreted, whether referring to number of starts prior to or including the terminating game, I don't see how it can be true in both those cases. Possibly I am misunderstanding something. 86.148.154.177 (talk) 00:40, 4 June 2012 (UTC)
- I seem to be guilty of extrapolating from simple cases. If termination is restricted to when an equal number of games (including the terminating game) are started by each, the game is fair (though I do not imply that this is a requirement). My assertion about which values ensure this is not true (including about c), as pointed out. I'd have to make a more detailed analysis to determine fairness in the case of c>2, the other values being kept as for tennis. I'll give what I find later. — Quondum☏ 08:16, 4 June 2012 (UTC)
- I previously knocked up a small program that ran simulations, but it took ages and still didn't deliver accurate enough answers to be conclusive. I have now (or so I thought) figured out a recursive way to quickly calculate the probabilities to good accuracy (13 or 14 decimal places). The "a=1, b=1 or 2, d even; c anything" cases come out to 0.5 as expected, but I am getting some odd-looking results with a = 2, b = 4, c = any, d = 9, where, for all values of p that I've tried, the probability of player A winning comes out very close to 0.5 but not quite (e.g. 0.500003372970541 for p = 0.7). I assumed at first that this was loss of precision or numerical artefact, yet all the other checks I have done seem to suggest that the answer should be correct to 13 or 14 d.p. It is a bit of a puzzle at the moment as to whether this case should be exactly 0.5 or not. Possibly I have messed up in some way that I can't spot at the moment. Anyway, any insight into what the theoretical answer for that case should be would be interesting, as well as the general question. Working it out analytically is beyond my capability, I think. 86.181.201.159 (talk) 14:12, 4 June 2012 (UTC)
- (ec) I was utterly gobsmacked by my results. For simplicity, define a variable x ranging from −1 to +1, with p=(1+x)/2 (an even function of x for a score's probability is unbiased as the reversed score will have an identical probability).
- For a=1, b=1, c=3, d=2, probabilities are 3-0: (1+x−x²−x³)/8, 3-1: (3−2x+2x³−3x⁴)/16, A wins from 2-2: (3+2x²+3x⁴)/16, hence total A: 1/2 (fair)
- Note that probabilities for B are as above with x changed in sign (and scores reversed), so 3-0 and 0-3 do not have the same probability (assuming p>1/2, A will win 3-0 more often than B will win 0-3), but this is exactly balanced by B winning 1-3 more often than A wins 3-1.
- For a=1, b=1, c=4, d=2, we get similar results for p>1/2, with more 4-1 wins by A than 1-4 wins by B, again exactly balanced by B's 2-4 wins against A's 4-2 wins.
- I have to admit that I find this behaviour highly counterintuitive (so much for my intuition). I wouldn't know how to generalize this short of doing the math for each set of parameter values, which I would write a program to do as it is pretty tedious.
- An analytic result may be possible if it turns out that "adjacent" scores exhibit this balancing property. — Quondum☏ 14:27, 4 June 2012 (UTC)
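For reference, the three probabilities quoted above for a = 1, b = 1, c = 3, d = 2 can be checked symbolically. A minimal sketch, assuming SymPy is available (the variable names are mine):

```python
from sympy import symbols, Rational, simplify

x = symbols('x')

p_3_0      = (1 + x - x**2 - x**3) / 8         # A wins 3-0
p_3_1      = (3 - 2*x + 2*x**3 - 3*x**4) / 16  # A wins 3-1
p_from_2_2 = (3 + 2*x**2 + 3*x**4) / 16        # A wins after reaching 2-2

total = simplify(p_3_0 + p_3_1 + p_from_2_2)
print(total)                       # 1/2, independent of x (i.e. of p)
assert total == Rational(1, 2)
```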
- I've looked at simple cases with odd d analytically: a=b=1, d=1, c=1 or 2 are unfair, and for d=3, c≤3 it seems pretty certain that these are unfair too – I expect your sims will corroborate this. I suspect that your use of large d merely brings the "unfairness" of unfair cases down. Do you have an application that makes a fair amount of effort worthwhile, or is this for general interest? — Quondum☏ 16:54, 4 June 2012 (UTC)
- I think you are correct. It seems intuitive that larger and larger d will get fairer and fairer, and I think I just hit on a case where it was unusually close to 0.5 for not-massively-large d. Therefore, I am now inclined to trust that 0.500003372970541 number and others similar as being correct and not a numerical artefact for 0.5. I have now tested all combinations of a, b, c, d <= 10 numerically, and I have found no other fair cases, other than "a=1, b=1 or 2, d even; c anything" -- that's assuming my program is working correctly, which I would not bet my house on. This makes me hypothesise that there may be no other fair cases. To answer your question, this is for personal interest only. However, it is interesting to speculate about whether the designers of the tennis scoring system knew mathematically that their method was exactly fair, or whether they just thought "oh, probably it will more or less even itself out if we regularly switch the server". Thanks for your interest... 86.181.201.159 (talk) 23:08, 4 June 2012 (UTC)
- It has been interesting – rather an eye-opener for me. Incidentally, when I was referring to writing a program, I would have had the program calculate the coefficients of the polynomials of the analytical equations for each case rather than a probabilistic simulation; this way the program would definitively and rapidly classify millions of cases as fair or unfair. I personally doubt that the tennis match designers used an analytic approach to the statistics to select scoring systems. You may be interested in this quote: In the inaugural Wimbledon Championship in 1877, the first player to win 6 games won the set. The first player to win 3 sets won the match. It quickly became apparent, however, that the server of the 11th game of any set had an unfair advantage over his opponent and thus the idea of an advantage set, in which a player must win by at least a two-game margin, was introduced in the 1878 Championships. — Quondum☏ 05:27, 5 June 2012 (UTC)
- Thanks, that's an interesting piece of history. 86.183.0.126 (talk) 22:46, 5 June 2012 (UTC)
- By the way, the revised way I did it in the end was not using simulation but a recursive method in which I look at pr(g,w), the probability that player A has w wins after g games played. I start with pr(0,0) = 1, then I repeatedly calculate pr(g+1,*) from pr(g,*). I tot up the probabilities of each player winning along the way, and quit when the sum of those two probabilities differs from 1 by some very small amount. This is fast enough to assess thousands of cases in a feasible time, but millions would defeat it. 86.183.0.126 (talk) 22:55, 5 June 2012 (UTC)
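For concreteness, here is a minimal sketch of that recursion in Python. The serving pattern and the termination rule follow the problem statement at the top of this thread; the function name, the stopping tolerance and the cap on the number of games are my own choices.

```python
def match_win_probability(a, b, c, d, p, tol=1e-15, max_games=100_000):
    """Recursion described above: pr maps w -> probability that the match is
    still undecided and player A has w wins so far."""
    pr = {0: 1.0}
    p_a_wins = p_b_wins = 0.0

    def a_plays_first(g):                 # does A play first in game g (1-indexed)?
        if g <= a:
            return True
        block = (g - a - 1) // b          # blocks of b games, starting with B
        return block % 2 == 1

    g = 0
    while sum(pr.values()) > tol and g < max_games:
        g += 1
        q = p if a_plays_first(g) else 1.0 - p   # P(A wins game g)
        nxt = {}
        for w, prob in pr.items():
            for w2, pr2 in ((w + 1, prob * q), (w, prob * (1.0 - q))):
                losses = g - w2
                if w2 >= c and w2 - losses >= d:        # A has won the match
                    p_a_wins += pr2
                elif losses >= c and losses - w2 >= d:  # B has won the match
                    p_b_wins += pr2
                else:
                    nxt[w2] = nxt.get(w2, 0.0) + pr2
        pr = nxt
    return p_a_wins / (p_a_wins + p_b_wins)   # normalise away the unresolved tail

print(match_win_probability(1, 1, 6, 2, 0.7))    # ordinary set: 0.5 up to rounding
print(match_win_probability(2, 4, 10, 9, 0.7))   # the near-0.5 case above (c arbitrary)
```

For the fair cases discussed above (a=1, b=1 or 2, d even, any c) this should return 0.5 to within the tolerance for any p strictly between 0 and 1.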
Can one zero be larger than another?
As there are larger and smaller infinities, can there be said to be larger and smaller zeroes? - is a set of no cats smaller than a set of no elephants? If I subtract the set of no women from the set of all humans, do I get half-zero as the result...? Adambrowne666 (talk) 06:17, 3 June 2012 (UTC)
- The concept of "zero" is derived from the concept of "addition". If the zero is supposed to be a two-sided identity for a certain binary operation your call "addition", then the zero is unique. It does not matter what "addition" is, it may be a set union in your example. It is the property of identity (a.k.a. neutral element) which is crucial. Incnis Mrsi (talk) 08:54, 3 June 2012 (UTC)
- This triggers an interesting thought. The description of a binary operation with a two-sided identity element fits a central idempotent a of a ring R in relation to the corner ring aRa. Thus, one can almost construct a counterexample with zeros of different sizes: a (partially) ordered set of "zeros" given the "addition" operation with a series of sets in a subset-superset relationship chain (I know this "counterexample" fails in that it is not all the same set). — Quondum☏ 10:36, 3 June 2012 (UTC)
- Infinitesimal deals with a number of mathematical theories that support having quantities that are smaller than any finite quantity. And they're not without real applications; Surreal numbers, for instance, have been used in analysing games. Dmcq (talk) 09:03, 3 June 2012 (UTC)
- Still the short and correct and elementary answer is: No, one zero cannot be larger than another. 0=0. (Even in nonstandard analysis the nonzero infinitesimals are nonzero.) Bo Jacoby (talk) 11:40, 3 June 2012 (UTC).
- Something else that might interest you is how one limit can approach zero more quickly than another. Let's say you have two limits, 1/x and 2/x. As x goes to infinity, both approach zero. However, the second number is always twice as much as the first, for any given x. StuRat (talk) 23:33, 3 June 2012 (UTC)
Thanks, all, for the interesting responses - I understand some of them just barely - I gather Incnis's response is the clincher - the thing is, I would have read Bo's response and thought, in my stupid way, 'well, that's just dogma; I'm a magnificent iconoclast who sees beyond such stuff' - so even though Bo is trying to be helpful, it leaves me room to be arrogant - whereas Incnis's reply shows me where Bo's 'dogma' comes from - the two work in concert very well. I hope it's not too frustrating to you guys to have someone unschooled in your language post questions here - of course, my question was slightly sciencefictiony and facetious - just that zero and infinity seem to share quite a few properties, and I thought maybe variation in size might be another one in that list. Adambrowne666 (talk) 23:40, 4 June 2012 (UTC)
- @Adambrowne666 If you hadn't included your line of thought leading to your question, we might well have thought it was silly, but you supported it admirably. Lots of ideas in mathematics come about that way. If you've ever seen pairs of dual notions then "turning a question upside down" is sort of what happens. Rschwieb (talk) 11:05, 5 June 2012 (UTC)
- Thanks, Rschwieb, that's nice of you to say so. Adambrowne666 (talk) 02:26, 8 June 2012 (UTC)
- You might like absolute infinity with its idea of an infinity greater than any transfinite infinity, a bit like zero being absolute zero instead of being an infinitesimal though we've a better handle on that. Dmcq (talk) 11:59, 5 June 2012 (UTC)
- I do, yes, the infinity of infinities, thanks, will check it out. Adambrowne666 (talk) 02:26, 8 June 2012 (UTC)
- I don't really see these things as parallel at all. Infinitesimals are an extension to the real numbers, and what we might think of as the infinite quality of the real number zero is mostly embodied in the fact that the reciprocal function blows up at zero. On the Riemann sphere, this is expressed as saying that 1/0 is (complex, unsigned) infinity.
- However that infinity has nothing to do with transfinite ordinals or cardinals. Absolute infinity is more like a bound on the transfinite ordinals than it is like a point on the Riemann sphere. So really I don't think the analogy works. --Trovatore (talk) 17:17, 8 June 2012 (UTC)
Error due to "flat earth" approximation?
Maybe I should have posted this under Science, but since it's really just applied math, I'll post it here.
If using a "flat earth" approximation when surveying land, about how big are the errors caused by this approximation? What I mean is this: I have a survey map of some land. It uses the metes and bounds system. If I interpret the "metes" as vectors, I notice that the vectors don't quite add up to zero. There is an error of a few inches. Maybe it's a data entry error on my part, but anyway, I'm just wondering what kind of tolerance there should be for this sort of thing. 75.37.236.219 (talk) 11:28, 3 June 2012 (UTC)
- Errors due to curvature of the earth are likely to be swamped by the non-planar nature of even the "flattest" piece of ground, unless you are dealing with a salt flat, for example. Even then, the piece would have to be several miles in size for the earth's underlying curvature (for a radius of c. 4000 miles) to be significant. The closure error on your survey will be due to measurement errors. A search on "surveying closure error" will give some approaches to reconciling things. 109.148.243.127 (talk) 15:22, 3 June 2012 (UTC)
- What is "significant"? One inch? Six inches? Ten feet? Please clarify. 75.37.236.219 (talk) 16:37, 3 June 2012 (UTC)
- Assuming a perfect sphere for the earth, two points 1 mile apart along the curved surface are only 0.999999997 miles apart in a straight line, the difference in the two lengths being about 2 ten-thousandths of an inch. A bit more detail: if two points on the surface of a sphere of radius r subtend an angle θ at the centre, then the curved distance is r·θ and the straight-line distance is 2r·sin(θ/2). The piece of land would have to be many miles in size for the difference in distance to be as much as 1 inch. 109.148.243.127 (talk) 18:05, 3 June 2012 (UTC)
- At 1 km from the point where a plane is tangent to the Earth's surface, the height difference between the plane and the surface is about 6371(1 − cos(1/6371)) km, or about 3 inches. -Modocc (talk) 20:39, 3 June 2012 (UTC)
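Both figures are easy to reproduce; here is a quick sketch in Python, assuming a perfectly spherical Earth and the radii quoted above (c. 4000 miles, 6371 km):

```python
import math

# Chord vs. arc: two points 1 mile apart along the surface of a sphere.
R_miles = 4000.0
theta = 1.0 / R_miles                        # angle subtended by a 1-mile arc
chord = 2 * R_miles * math.sin(theta / 2)    # straight-line distance in miles
print((1.0 - chord) * 63360)                 # shortfall in inches: ~0.000165

# Drop below a tangent plane, 1 km from the point of tangency.
R_km = 6371.0
drop_km = R_km * (1 - math.cos(1.0 / R_km))
print(drop_km * 1e5 / 2.54)                  # in inches: ~3.09
```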
- One comment to add is that, while the deviation per mile or kilometer is small, depending on the type of surveying you do, this could be cumulative. So, if you're only off a tenth of an inch for one property, it's no problem, but if you use the ending boundary of each as the starting point for the next, and measure, say, 100 properties in a row, each with a 1/10th of an inch deviation in the same direction, now you're off 10 inches, and this begins to be a problem. To avoid this, you should try to go off a fixed reference each time, rather than measuring from iffy boundary lines. StuRat (talk) 23:29, 3 June 2012 (UTC)
How do I calculate date of birth knowing how old someone was on certain dates?
I am doing a bit of genealogy research. There is census data that shows the age of a particular person when the data was collected. To give an example: Census 1. takes place in 1795, the person is 14 years old; 2. takes place in August 1811, he is 29; 3. on 10 February 1816 he is 33; 4. on 1 August 1826 he's 43; 5. on 1 March 1834, the guy is 51. Simply subtracting age from year will give varied results (from 1781 to 1783 in this case). I am not that good at maths, I would probably use some online age calculator, but it occurred to me that using all data points the estimation would be more accurate. And also there are some other considerations - some of the censuses are dated with a particular date, some are not, some people have their ages listed in fractions (say a person is 9 3/4, i.e. 9 years and 9 months old) and all the dates are old style. ~~Xil (talk) 22:46, 3 June 2012 (UTC)
- Certainly the more precise figures are preferable. Just listing an age, in years, during a given year, provides almost 2 years of wiggle room. For example, if somebody is listed as 1 in 2012, they might have been almost 2 on Jan 1, 2012, meaning they were born, say, on Jan 2, 2010. Or, they might turn 1 right on December 31, 2012, meaning they were born December 31, 2011. Also, some people will round up, so list their 7 month old as age 1, making things even worse. And this doesn't even consider those who are intentionally lying about ages. StuRat (talk) 23:21, 3 June 2012 (UTC)
- The data provide boundaries for your estimate.
- On some date in 1795, NN's age was 14; this could mean he turned 14 on the day of the census, or 15 the next day, or anything between. The census could be on any day of the year. So his birthdate could be 1780-Jan-02, or 1781-Dec-31, or any day between.
- Age 29 in 1811 Aug similarly gives a range 1781-Aug-02 to 1782-Aug-31.
- Age 33 on 1816-Feb-10 gives 1782-Feb-11 to 1783-Feb-10.
- Age 43 on 1826-Aug-01 gives 1782-Aug-02 to 1783-Aug-01.
- Age 51 on 1834-Mar-01 gives 1782-Mar-02 to 1783-Mar-01.
- Unless one of us has blundered, we have a problem: he can't be born after 1781-Dec-31 (else he couldn't turn 14 before the end of 1795) or before 1782-Aug-02 (else he'd be 44 on 1826-Aug-01). But I hope you see the principle. —Tamfang (talk) 23:36, 3 June 2012 (UTC)
- Not all 5 data points can be true, because Point 1 is consistent only with Point 2. But there is a small sliver of time where Points 2 to 5 are all true. These 4 data points are all consistent with the birth occurring between 2-31 August 1782. This would mean that in 1795, depending on exactly when, he was 12 or 13, not 14. -- ♬ Jack of Oz ♬ [your turn] 23:52, 3 June 2012 (UTC)
- Can't this discrepancy be explained by the fact that the dates are old style? The censuses are consistent with each other in giving the age at the previous census. However the 1795 census data has the date missing; it is possible the actual census itself took place on a somewhat different date. Anyways - I am already aware that by simply subtracting, the estimate can be off by two years. What I am going for is that each data point limits the possibilities, to the point where we can say that, if we exclude the first data point, he was born in August 1782. And I need to do this for several persons, so figuring out a way for the computer to do it is what I need, for which I need a formula, and I am not really that good at maths ~~Xil (talk) 02:05, 4 June 2012 (UTC)
- I double checked, the ages are right; the 1795 census lists the age at the previous census (which apparently took place in 1782) as well - then he was one. At any rate I need to reduce the estimate to within a year, because I have a ton of records with similarly named people on a spreadsheet that I want to sort by birth year to identify different individuals. One person having different birthdays creates a mess. I could of course take a guess, but some records actually have the real birth date, so I might not notice that I missed. ~~Xil (talk) 03:05, 4 June 2012 (UTC)
- Where was old style (Julian) dating still in use after 1752? —Tamfang (talk) 03:45, 4 June 2012 (UTC)
- Sweden (it finally managed to convert in February 1753 after crazy goings on for 52 years), Russia, Greece, Japan, Korea, China and probably parts of the Ottoman Empire. -- ♬ Jack of Oz ♬ [your turn] 04:10, 4 June 2012 (UTC)
- In this case it is from the "revision lists" of the Russian Empire. ~~Xil (talk) 05:00, 4 June 2012 (UTC)
- Sweden (it finally managed to convert in February 1753 after crazy goings on for 52 years), Russia, Greece, Japan, Korea, China and probably parts of the Ottoman Empire. -- ♬ Jack of Oz ♬ [your turn] 04:10, 4 June 2012 (UTC)
- Here's your formula. Given an integer age and a range of possible recording dates, if we assume that the age was rounded down to N years on the recording date, EARLYDATE is N+1 years before the day after the first possible recording date, and LATEDATE is N years before the last possible recording date. Each of your age records will give you its own EARLYDATE and LATEDATE. The actual birthdate (if all assumptions are valid) lies between the latest EARLYDATE and the earliest LATEDATE, inclusive.
- With fractional ages, you'll need to decide during what part of the year a child's age is expressed as 9½ ... —Tamfang (talk) 05:40, 4 June 2012 (UTC)
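In case it helps with automating this, here is a minimal sketch of that rule in Python (the function names and the handling of 29 February are my own choices, and the Julian dates are simply taken at face value). Each record supplies its own EARLYDATE and LATEDATE, and the birth date must lie between the latest EARLYDATE and the earliest LATEDATE, inclusive.

```python
from datetime import date, timedelta

def years_before(d, years):
    """The same calendar day, `years` earlier (29 Feb falls back to 28 Feb)."""
    try:
        return d.replace(year=d.year - years)
    except ValueError:
        return d.replace(year=d.year - years, day=28)

def birth_window(records):
    """records: (age N, earliest possible recording date, latest possible one)."""
    early_dates, late_dates = [], []
    for age, first_possible, last_possible in records:
        # EARLYDATE: N+1 years before the day after the first possible date
        early_dates.append(years_before(first_possible + timedelta(days=1), age + 1))
        # LATEDATE: N years before the last possible recording date
        late_dates.append(years_before(last_possible, age))
    return max(early_dates), min(late_dates)

# The five census records discussed above:
records = [
    (14, date(1795, 1, 1), date(1795, 12, 31)),   # age 14, some date in 1795
    (29, date(1811, 8, 1), date(1811, 8, 31)),    # age 29, August 1811
    (33, date(1816, 2, 10), date(1816, 2, 10)),   # age 33 on 10 Feb 1816
    (43, date(1826, 8, 1), date(1826, 8, 1)),     # age 43 on 1 Aug 1826
    (51, date(1834, 3, 1), date(1834, 3, 1)),     # age 51 on 1 Mar 1834
]
print(birth_window(records))       # 1782-08-02 to 1781-12-31: an empty window,
                                   # flagging the inconsistent 1795 age
print(birth_window(records[1:]))   # 1782-08-02 to 1782-08-31, as found above
```

An empty window (latest EARLYDATE after earliest LATEDATE) means the records cannot all be right, which is exactly what happened with the 1795 age above.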
- I wouldn't mind having a more exact date where possible; if you go only by years you could end up with the wrong result if it lands somewhere near the new year. I am just saying that it doesn't need to be extremely accurate, and the 1795 data point, which has the exact date missing, might still be taken into account. And anyway I don't see how this is something to feed to my computer, instead of doing the logic myself. So it would take a formula to calculate a date x years (and maybe also x months) prior, and, if I get the difference between the two calendars right, it should throw in an extra day at the turn of the century. And then something to automatically estimate the latest early date and earliest late date ~~Xil (talk) 07:54, 4 June 2012 (UTC)
- I suspect that the problem isn't the data, but that applying the N or N+1 approach like this is wrong. We assume that the ranges run from N to N+1 years before a certain date. However, to plot the ranges we actually need to use first all the N dates and then all the N+1 dates and see where the ranges intersect. Thus if the person in question was N years old on the date of each census he was born from 1780-Jan-02 to 1782-Aug-02, or if he was N+1, from 1781-Dec-31 to 1783-Aug-01, which means he was born from 1781-Dec-31 to 1782-Aug-02, i.e. the earliest late date may fall before the latest early date and it still defines the date range when the person was born. Does that sound right? ~~Xil (talk) 10:23, 7 June 2012 (UTC)