Wikipedia:Reference desk/Archives/Mathematics/2008 January 18
Welcome to the Wikipedia Mathematics Reference Desk Archives. The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 18
Getting the function back from a table
This probably sounds weird, but are there any Wikipedia articles on the topic of "reconstructing" the function used in a table of values? For example, if I had a table:
x | 8 | 10 | 12 |
---|---|---|---|
y | 71 | 97 | 123 |
Just in case you didn't know, it means that f(8)=71, f(10)=97, and f(12)=123. I'm trying to find out what the function (f) is. I can then use a certain method to find that the function is f(x) = 13x - 33. Is there any more information out there on this topic (the "reconstruction")?
--wj32 t/c 02:12, 18 January 2008 (UTC)
- It is, in general, impossible, because there are an infinite number of possible functions that satisfy those criteria. However, if you have further information, generally on the class of function that it comes from, then it would be possible to perform a regression to determine the function. Confusing Manifestation(Say hi!) 02:43, 18 January 2008 (UTC)
- Ah, what I mean is that the function must be a polynomial (forgot to say that) and that I know what degree the polynomial is. I just don't know the coefficients. Regression is kind of what I'm looking for, except that I'm looking for the exact function. I'll just give you some background info: I know that the function is quadratic, so to solve that previous problem I can use algebra:
- 64a + 8b + c = 71
- 100a + 10b + c = 97
- 144a + 12b + c = 123
And solve for a, b, and c. Then (obviously) I put those values back into ax^2 + bx + c to get the original function. If I generalize the process a bit I can get formulas for a, b, and c:

a = (f(n+2z) - 2f(n+z) + f(n)) / (2z^2)

b = (f(n+z) - f(n)) / z - a(2n + z)

c = f(n) - an^2 - bn

In these formulas n means the "start" of the data. If I was solving the previous example n would be 8. z means the step between each input of the function. So in that case z would be 2. Obviously, we know f(n) (aka f(8)), f(n+z) (aka f(10)), and f(n+2z) (aka f(12)). So, we can work out the values of a, b, and c.
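For illustration, here is a minimal Python sketch of those formulas, checked against the table above (the function name is made up for this example, not anything the posters used):

```python
# Closed-form coefficients of the quadratic a*x^2 + b*x + c through three
# equally spaced points (n, f0), (n + z, f1), (n + 2z, f2).
def quadratic_from_table(n, z, f0, f1, f2):
    a = (f2 - 2 * f1 + f0) / (2 * z ** 2)   # second difference over 2z^2
    b = (f1 - f0) / z - a * (2 * n + z)
    c = f0 - a * n ** 2 - b * n
    return a, b, c

# The table in the question: f(8) = 71, f(10) = 97, f(12) = 123.
print(quadratic_from_table(8, 2, 71, 97, 123))  # -> (0.0, 13.0, -33.0), i.e. f(x) = 13x - 33
```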
I'm a bit of a newb beyond the basic math we learn at school. So, now I've got three questions:
- Has this stuff been researched before?
- Is there a general rule for these formulas that can extend to polynomials of the third or higher degrees?
- Do you even understand what I'm saying (mmm)?
--wj32 t/c 03:05, 18 January 2008 (UTC)
- Yay I found it (although it's a bit complicated...)! Polynomial interpolation --wj32 t/c 03:11, 18 January 2008 (UTC)
You might find System of linear equations more helpful. With the three equations you get from three data points, you can solve to find the three coefficients of your polynomial, as you have done. Interpolation is sort of related, but I think what you really want to know is how to find the polynomial itself. Yes, this sort of thing is very well researched and applies to polynomials of any degree, and all sorts of other problems. As the number of equations gets large, we have more "automated" ways of solving a system of equations, such as Gaussian elimination. - Rainwarrior (talk) 05:33, 18 January 2008 (UTC)
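As an illustration of the system-of-equations approach, a short sketch using NumPy (the choice of library is just an example; any solver based on Gaussian elimination would do):

```python
import numpy as np

# The three data points from the question.
x = np.array([8.0, 10.0, 12.0])
y = np.array([71.0, 97.0, 123.0])

# Coefficient matrix of the system: rows are [x^2, x, 1] (a Vandermonde matrix).
A = np.vander(x)

# Solve A @ [a, b, c] = y; NumPy uses an LU-factorisation routine internally.
print(np.linalg.solve(A, y))   # -> [ 0. 13. -33.], i.e. f(x) = 13x - 33
```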
- Thanks, I was just frustrated I couldn't find any info on what I was looking for... --wj32 t/c 06:28, 18 January 2008 (UTC)
- Guys, you've got it all wrong. What the OP is looking for is Lagrange polynomial. It is significantly easier than solving a system of equations. -- Meni Rosenfeld (talk) 12:32, 18 January 2008 (UTC)
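For reference, a minimal sketch of evaluating the Lagrange interpolating polynomial directly, with no system-solving; the helper function here is just for this example:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial of degree <= len(xs) - 1 through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis factor for point i
        total += term
    return total

# Evaluate at x = 9 using the table from the question; 13*9 - 33 = 84.
print(lagrange_eval([8, 10, 12], [71, 97, 123], 9))   # -> 84.0
```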
- Ages ago, I, too, was interested in these kinds of formulae. Of course, I had no knowledge of the Lagrange polynomial at the time (and probably little knowledge about Gaussian elimination). I tried to find the formulae using a brute-force "ad-hoc" method, and succeeded for the case of a quadratic polynomial. It looks like you have only derived the formulae for the case where the points are placed equidistantly; I happened to do it for the general case. Then I tried my luck with the cubic case. The calculations involved were enormous; I had to staple several pieces of paper in a long row just to be able to write a single line of the calculation. Eventually I gave up. Only years later did I find that all this work was unnecessary. -- Meni Rosenfeld (talk) 12:46, 18 January 2008 (UTC)
- Wow, thanks for the link! BTW how did you derive a formula for general cases? That's amazing (or at least for me)! --wj32 t/c 01:04, 19 January 2008 (UTC)
- I don't remember exactly. There's a slight chance I have kept my notes from back then, if I ever find them I'll enlighten you. I suppose it was more or less equivalent to solving the system of linear equations, but probably in a roundabout way. -- Meni Rosenfeld (talk) 12:08, 19 January 2008 (UTC)
- The reason I mentioned regression is that I am *fairly* sure it will return exactly the right answer: it aims to minimise the error between the fitted curve and the given points, and as long as you get the degree right, that error is zero exactly when you retrieve the original equation, so it should be equivalent to the other methods mentioned. Confusing Manifestation(Say hi!) 02:16, 19 January 2008 (UTC)
- No doubt, but it's not "the right tool for the job". Not to mention that Regression is a dab, and Nonlinear regression doesn't seem to contain the relevant practical information. -- Meni Rosenfeld (talk) 12:08, 19 January 2008 (UTC)
- Fair enough. I've obviously been in the statistical world for too long. Although technically polynomial fitting is still a type of linear regression (where the "independent" variables are 1, x, x^2, ..., x^n). Confusing Manifestation(Say hi!) 22:44, 20 January 2008 (UTC)
- You're right, I didn't think about it this way. -- Meni Rosenfeld (talk) 09:35, 21 January 2008 (UTC)
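A quick sketch of that regression point: a least-squares fit of a degree-2 polynomial (here via numpy.polyfit, which builds the Vandermonde design matrix of 1, x, x^2 internally) recovers the coefficients exactly, up to rounding, when the data lie exactly on a polynomial of that degree:

```python
import numpy as np

x = np.array([8.0, 10.0, 12.0])
y = np.array([71.0, 97.0, 123.0])

# Least-squares fit of a degree-2 polynomial; with exact data this reproduces
# the interpolating polynomial (coefficients in decreasing powers of x).
print(np.polyfit(x, y, deg=2))   # -> approximately [ 0. 13. -33.]
```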
probability.. helppp..!!
First of all, this isn't my homework question! So please help me solve this question about Fuji film; it is for my own personal understanding. Please try to help me answer all the questions. I have been trying to solve it for three days but am not getting anywhere, so I'm typing out the whole case here.
In the early 1990s, Fuji Photo Film, USA, joined forces with four of its rivals to create the Advanced Photo System (APS), which was hailed as the first major development in the film industry since 35-millimeter technology was introduced. In February 1996, the new 24-millimeter system, promising clearer and sharper pictures, was launched. By the end of the year, a lack of communication and a limited supply of products had made retailers angry and consumers baffled. Advertising was almost nonexistent. Because the product was developed by five industry rivals, the companies had enacted a secrecy agreement in which no one outside of company management, including each company's sales force, would know details about the product until each company introduced its APS products on the same day. When the product was actually introduced, it came with little communication to retailers about the product, virtually no training of sales representatives on the product (so that they could demonstrate and explain its features), and a great underestimation of demand for the product. Fortunately, Fuji pressed on by taking an "honesty is the best policy" stance, explaining to retailers and other customers what had happened, and asking for patience. In addition, Fuji increased its research to better ascertain market positioning and size. By 1997, Fuji had geared up production to meet the demand and was increasing customer promotion. APS products were on the road to success. By 1998, APS cameras owned 20% of the point-and-shoot camera market.
1. As stated, by 1998 APS cameras owned 20% of the point and shoot camera market. Now it is the year 2003 and the market share might be nearer to 40%. Suppose 30 cusomers from the point and shoot camera market are randomly selected. If the market share is really .40 , what is the expected number of point and shoot camera customers who purchase an APS camera? What is the probability that six or fewer purchases and APS camera? Suppose you actually got six or fewer APS customers in the sample of 30. Based on the probability just calculated, is this enogh evidence to convince you that the market share is 40% Why or why not?
2. Suppose customer complaints on the 24 millimeter film are poison distributed at an avegare rate of 2.4 complaints/100,000 rols sold. Suppose further that Fuji is having trouble with shipments being late and one batch of 100,000 rolls yields seven complaints from customers. Assuming that it is unacceptable to management for the average rate of complaints to increase, is this enough evidence to convince management that the average rate of complaints has increased, or can it be written off as a random occurence that happens quite frequently? Produce the Poisson distribution for this question and discuss its inplication for this problem.
3. One study of 52 product launches found that those undertaken with revenue growth as tha main objective are more likely to fail than those undertaken to increase customer satisfaction or to create a new market such as the APS system. Suppose of the 52 products launched, 34 were launched with revenue growth as the main objective and the rest were launched to increase cusotmer satisfaction or to create a new market. Now suppose only 10 of these products were successful ( the rest failed) and seven were products that were launched to increase customer satisfaction or to create a new market. What is the probability of this result occuring by chance? What does this probability tell you about the basic premise regarding the importance of the main objective? —Preceding unsigned comment added by 220.225.79.210 (talk) 14:00, 18 January 2008 (UTC)
- For not being a homework problem, this reads an awful lot like one (especially because I was easily able to search and find it asked in the past elsewhere). That being said, you say you have attempted this for three days... what have you gotten so far? --Kinu t/c 22:16, 18 January 2008 (UTC)
- Curiously enough, that posting of 10 months ago contains exactly the same spelling errors: "cusomers", "enogh", "poison", "avegare", "rols", "occurence", "inplication". "tha", "cusotmer", "occuring". The probability of this result occuring by chance is less than that of a million monkeys producing the correct answer by chance. You would expect an instructor distributing this not to make so many errors and to have corrected them by now. --Lambiam 00:25, 19 January 2008 (UTC)
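If anyone wants to check their own working, here is a minimal sketch of how the three calculations might be set up; the binomial, Poisson, and hypergeometric models below are one reading of the question, not a worked answer:

```python
from scipy import stats

# 1. Number of APS buyers among 30 customers, X ~ Binomial(n=30, p=0.40).
print(30 * 0.40)                        # expected number of APS buyers = 12
print(stats.binom.cdf(6, 30, 0.40))     # P(X <= 6)

# 2. Complaints per 100,000 rolls, X ~ Poisson(mean 2.4); evidence of an increase?
print(stats.poisson.sf(6, 2.4))         # P(X >= 7) = 1 - P(X <= 6)

# 3. 52 launches, 18 aimed at satisfaction/new market, 10 successes observed;
#    chance that exactly 7 of the 10 successes come from those 18 launches.
print(stats.hypergeom.pmf(7, 52, 18, 10))
```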
conversion from kilobytes and megabytes to gigabytes
I need to know how many gigabytes is 153.44 megabytes, and how many gigabytes is 4812 kilobytes? Thanks!
71.145.168.69 (talk) 17:24, 18 January 2008 (UTC)
- That all depends on where the numbers come from. Strictly, a gigabyte is 1000 megabytes, so 153.44 megabytes = 0.15344 gigabytes, and 4812 kilobytes = 4.812 megabytes = 0.004812 gigabytes. This will probably be correct for the capacity of hard discs. However, computer memory has traditionally been measured, not in the SI units mentioned above, but in mebibytes, where one mebibyte = 1024 kibibytes, and in gibibytes, where one gibibyte = 1024 mebibytes. In those circumstances, you might have to divide by 1024 each time, giving 153.44 mebibytes = 0.1498437 gibibytes and 4812 kibibytes = 0.004589 gibibytes.
- Complications continue, because it is just possible that your 153.44 megabytes are really 153.44 mebibytes, which is 153.44 x 1024 x 1024 bytes, and converting this gives 0.1608935 gigabytes. Because usage varies, even between manufacturers, the only sure way is to try it and see! Sorry the answer is not simple. dbfirs 17:47, 18 January 2008 (UTC)
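To make the two conventions above concrete, a small Python sketch (the helper names are just for this example):

```python
def to_gb(n_bytes):
    """Decimal (SI) gigabytes: 1 GB = 10^9 bytes."""
    return n_bytes / 10**9

def to_gib(n_bytes):
    """Binary gibibytes: 1 GiB = 2^30 bytes."""
    return n_bytes / 2**30

print(to_gb(153.44 * 10**6))    # 153.44 MB  -> 0.15344 GB
print(to_gib(153.44 * 2**20))   # 153.44 MiB -> ~0.1498 GiB
print(to_gb(4812 * 10**3))      # 4812 kB    -> 0.004812 GB
print(to_gib(4812 * 2**10))     # 4812 KiB   -> ~0.004589 GiB
```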
- In 99% of cases of common usage, the word "kilobyte" will refer to 1024 bytes (and so on), so it is safe to assume that for whatever purpose the OP is after, this is the relevant interpretation. Regarding the actual questions, Google is your friend. -- Meni Rosenfeld (talk) 18:13, 18 January 2008 (UTC)
- For RAM, yes, but for capacities of hard disk drives as quoted on data sheets by hard disk manufacturers usually not. There 1 GB is 1 billion bytes, not all of which may be user-accessible to boot. --Lambiam 00:07, 19 January 2008 (UTC)
- But it's been a long time since hard drive capacities were measured in kilobytes. I think it's true that a kilobyte, specifically, is virtually guaranteed to be 1024 bytes.
- Here are some rules of thumb:
- For RAM (DIMMs that plug into your motherboard), mega and giga always mean 2^20 and 2^30. RAM is always sold in power-of-two sizes (or occasionally small multiples thereof), so the binary units are a lot more convenient.
- For hard drive capacities reported by the manufacturer, mega and giga always mean 10^6 and 10^9. For hard drive capacities reported by software, they almost always mean 2^20 and 2^30. I think the manufacturers have the right idea here and the software should be considered broken; binary units are inconvenient and useless for hard drives since they aren't sold in power-of-two sizes. People sometimes describe the smaller number reported by the software as the "formatted capacity", but that's wrong; it's the result of using different units. There is some filesystem overhead, but the "total size" reported by the software is typically the full drive size as reported by the manufacturer.
- Flash drives (CF, SD, USB keys, etc.) follow the same rules as hard drives. Bizarrely, flash memory devices usually use power-of-two multipliers (256 megabytes, 8 gigabytes), even though that makes no sense in front of a decimal unit. A gigabyte of RAM is exactly twice as much as 512 megabytes of RAM, but a 1-gigabyte flash card is only 1000/512 ≈ 1.95 times as large as a 512-megabyte flash card. This does strike me as deliberately deceptive marketing.
- -- BenRG (talk) 10:14, 19 January 2008 (UTC)
- I actually never knew that (probably because I find specifications of disk capacities less interesting than sizes reported by software). -- Meni Rosenfeld (talk) 12:16, 19 January 2008 (UTC)
- I see someone knows enough about computers to point out that, to the computer, a megabyte is 1048576 = 1024^2 bytes and a gigabyte is 1073741824 = 1024^3 bytes. When a hard drive manufacturer or other electronic data storage device maker advertises, say, 250GB, they almost always mean 250000000000 bytes = 2.50x10^11 bytes, often slightly more; my hard drive is a 250GB drive and it has 250048479232 bytes of storage space (possibly not including the File Allocation Table, which tells the computer where the data for a file is). I would also agree that it is deliberately deceptive marketing to call a drive which has 232GB (as the computer sees it) a 250GB drive. A math-wiki (talk) 07:45, 24 January 2008 (UTC)
- I don't see how one can argue that a computer "sees" in gibibytes. The only disk size units you'd normally deal with in software are bits, bytes, sectors, cylinders, or clusters. There's nothing which always comes in integral multiples of gibibytes, so if you did measure something in gibibytes you'd need to include fractional bits, at which point you're really storing a byte count (or whatever) and calling it something else. Metric prefixes are for human beings who aren't comfortable dealing with lots of zeros. If there is a use for prefixes like mebi- and gibi- it's in human-human or human-computer interactions, and I can't see a reason in the world to use them there except maybe for sizes that tend to be exact powers of two (like RAM and hash tables and circular buffers). I can remember many cases in which I had to waste time multiplying or dividing a size by 1024 or 1048576 in order to figure out how it compared to another size. I can't remember any case in which the use of the binary prefixes saved me time or was helpful. I can't even imagine any such situation. -- BenRG (talk) 15:36, 24 January 2008 (UTC)