Wikipedia:Reference desk/Archives/Mathematics/2013 April 29
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 29
Calculating logs
I have a question that says:
- You have an amplifier that takes 50 mW in and produces 26 W out. What is the gain in dB?
So, I calculated 10log(26/.050) and got 27.16.
To double check, I put the same into Google and Wolfram Alpha. Google agrees with me while WA gave me 62.54. Which is right? And why is the other wrong? Dismas|(talk) 04:50, 29 April 2013 (UTC)
- For your calculation, Google's 27.16 is correct. Logs can be calculated in different bases. Decibels are calculated using the log in base 10; you can also use natural logarithms, which are in base e, or indeed any other base. Natural logs are a little more useful in higher mathematics, which is why Alpha has them as the default. To put the correct formula into Alpha, use 10*log10(26/.050).--Salix (talk): 07:15, 29 April 2013 (UTC)
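To make the point above concrete, here is a minimal Python check of the decibel calculation (the variable names are just illustrative):

```python
import math

p_in = 0.050    # input power in watts (50 mW)
p_out = 26.0    # output power in watts

# Power gain in decibels uses the base-10 logarithm.
gain_db = 10 * math.log10(p_out / p_in)
print(round(gain_db, 2))   # 27.16
```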
- Which is right? Both! :-) Computers always do what you ask them to... NOT what you want them to. :-) LOG, for Wolfram Alpha, is the natural logarithm, NOT the decimal one. Since Log(a, b) = Log(c, b) / Log(c, a), it follows that for a = 10 and c = e we have Lg(b) = Ln(b) / Ln(10). And since Ln(10) ≈ 2.3, we have indeed that 62.54 / 2.3 ≈ 27.16. — 79.113.217.172 (talk) 10:31, 29 April 2013 (UTC)
Thank you, Salix. That made sense. IP, your answer just confused me more. Dismas|(talk) 23:31, 30 April 2013 (UTC)
- You calculated (using Wolfram Alpha) 10 ln X = 62.54, instead of 10 log₁₀ X = 27.16. The ratio between the two is, by the change-of-base formula, logₑ 10 = ln 10 ≈ 2.3. So multiplying 27.16 by 2.3 gives 62.54. — 79.113.215.147 (talk) 21:52, 1 May 2013 (UTC)
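The relationship described above can be checked directly in code (a quick illustrative sketch, not part of the original thread):

```python
import math

ratio = 26 / 0.050                 # = 520
natural = 10 * math.log(ratio)     # what Wolfram Alpha computed: 62.54...
base10 = 10 * math.log10(ratio)    # the decibel value: 27.16...

# Change of base: log10(x) = ln(x) / ln(10), and ln(10) is roughly 2.3026.
print(round(natural, 2), round(base10, 2))    # 62.54 27.16
print(round(natural / math.log(10), 2))       # 27.16 -- dividing by ln(10) recovers the dB value
```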
"Yes, it continues to run forever, I'm 67% certain"
A general [deterministic] algorithm to solve the halting problem for all possible program-input pairs cannot exist.
Is it possible to make a general algorithm which produces the right result in most cases? Say, you feed it any program-input pair and it gives you the right answer in 51% of cases.
An analogy: something like the Miller–Rabin probabilistic primality test, but for programs. --Roman1969 (talk) 12:51, 29 April 2013 (UTC)
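For readers unfamiliar with the analogy, a minimal sketch of a Miller–Rabin-style test is shown below; the relevant feature is that a "composite" verdict is always right, while each random round can wrongly accept a composite with probability at most 1/4, so repeating rounds makes the answer correct far more than 51% of the time. (Function and parameter names here are illustrative, not taken from any source in the thread.)

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: a False answer is always correct; a True answer
    is wrong with probability at most 4**-rounds for a composite n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # handle small factors directly
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # 'a' witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))   # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))   # False: divisible by 3
```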
- See this CS Theory Stack Exchange question. « Aaron Rotenberg « Talk « 13:19, 29 April 2013 (UTC)
- That's a different question. The Stack Exchange question is asking about solving the halting problem on a probabilistic Turing machine. What I believe Roman is asking about is using a deterministic Turing machine to solve the problem, but not requiring the results to be 100% accurate. Though the link is a good one, as it shows how similar problems are solved, it doesn't completely answer the question.
- My guess is that the question will be hard to answer one way or the other, as the mapping of "halting" onto program space is not well understood, and is dependent on the encoding scheme. I also don't know if you can break "halting" down into a (potentially infinite) series of conditions with decreasing probabilistic contribution, as has been done with the Miller–Rabin test, allowing you to test within some epsilon of a set probability.
- Although things might be uselessly trivial: if you can somehow show that 51+% of possible program space is halting (or not halting), you can get your 51+% accuracy by always emitting "halts" (or vice versa), regardless of the program-input pair. -- 71.35.109.118 (talk) 16:10, 29 April 2013 (UTC)
One problem is enumerating the possible program space. How can you give a statement like "51% of possible program space" a precise meaning? If you could do that, you could come up with a very simplistic solution: let the program run for a million iterations; if it halts, fine; if it does not halt within that time, say that the program never halts. This will be right some of the time and wrong some of the time. The problem here is that I don't think you could calculate the probabilities, which is equivalent to asking what percentage of programs halt but take more than a million iterations. A similar question is: what percentage of programs halt?--Salix (talk): 20:28, 29 April 2013 (UTC)
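A minimal sketch of the "run it for a bounded number of steps and guess" heuristic described above, written for a toy model of programs (the step-function interface and all names are assumptions made for illustration, not anything from the thread):

```python
def guess_halts(step, state, budget=1_000_000):
    """Heuristic halting guess: simulate up to `budget` steps.

    `step(state)` returns the next state, or None once the program halts.
    If the budget runs out we *guess* that the program never halts --
    which is exactly where this heuristic can be wrong.
    """
    for _ in range(budget):
        state = step(state)
        if state is None:
            return True    # definitely halts
    return False           # guess: probably does not halt

# A "program" that counts down from its input and halts at zero.
countdown = lambda n: None if n == 0 else n - 1
print(guess_halts(countdown, 10))                   # True (correct)

# A "program" that loops forever; here the guess happens to be right.
print(guess_halts(lambda n: n, 0, budget=1000))     # False (a guess)
```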
- That percentage is given by Chaitin's constant; the first few bits can be evaluated for some machines, see here. Count Iblis (talk) 21:48, 29 April 2013 (UTC)
- Interesting, the uncomputability of the constant might present a problem! But seeing as one estimate of the halting probability is 0.00787499699..., just saying the program does not halt will be right more than 99% of the time.--Salix (talk): 22:17, 29 April 2013 (UTC)
- The halting probability is based on a prefix-free encoding, which effectively weights each case by 2^−n, where n is the number of bits in its encoding. You could give the right answer in only 7 cases and still be right more than 99% of the time, if those seven cases' encodings happened to be 0, 10, 110, ..., 1111110 (for example). So I think this isn't a useful notion of "in most cases" for this problem. -- BenRG 18:47, 30 April 2013 (UTC)
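A quick numerical check of the seven-case example above (purely illustrative):

```python
# Under the prefix-free measure each n-bit encoding carries weight 2**-n,
# so the seven codes 0, 10, 110, ..., 1111110 together carry:
weights = [2 ** -n for n in range(1, 8)]
print(weights[:3])     # [0.5, 0.25, 0.125]
print(sum(weights))    # 0.9921875 -> answering just these 7 cases correctly
                       #               already covers more than 99% of the measure.
```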
- For what it's worth, the halting problems for programs that are less than Graham's number of bits long, or that take less than Graham's number of steps to halt, or that use less than Graham's number of bits of RAM, are all computable. That's an infinitesimal fraction of all algorithms, but it might cover what you were thinking of when you said "most cases". However, diagonalization implies that the algorithms that solve these problems will be very long, slow, or memory-hungry, respectively. -- BenRG 18:47, 30 April 2013 (UTC)