Wikipedia:Reference desk/Archives/Mathematics/2023 April 1
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 1
Solving sums of powers
Consider the equation . The solution, of course, is . What I am trying to understand is how to approach the general case: given $a^x + b^x = c$, for any known $a$, $b$ and $c$, what is the value of $x$? I just can't wrap my head around a reasonable approach. I tried taking the natural log on both sides, but that didn't seem to help. Had it been a single term on the left-hand side of the equation, I could have extracted the exponent and solved it without much fuss. Any suggestions/pointers on how to proceed? Earl of Arundel (talk) 02:17, 1 April 2023 (UTC)
- In the following, I assume that $a$, $b$ and $c$ are all positive. In only a few cases can equations with a sum of powers with the unknown in the exponents be solved easily, by trial and error or analytically. If $b = a$, the solution is given by $x = \log_a(c/2)$. If $c = a + b$, we find $x = 1$. But there is no analytical way to solve this in general, so we have to resort to a numerical approach. I think Newton's method will do well, using $f(x) = a^x + b^x - c$
- and repeatedly computing $x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)} = x_n - \dfrac{a^{x_n} + b^{x_n} - c}{a^{x_n}\ln a + b^{x_n}\ln b}$
- until a desired level of convergence is reached. For the initial estimate we can use, assuming wlog that $a \le b$,
- $x_0 = \log_b c$ if $b > 1$,
- $x_0 = \log_a c$ otherwise.
- Convergence is not guaranteed, though. When $a$ and $b$ are at opposite sides of $1$, the equation may not have a solution or may have two solutions. The computation may also fail by dividing by $0$, as when $f'(x_n) = 0$. If there is an integer solution, it will be found. --Lambiam 07:32, 1 April 2023 (UTC)
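A minimal Python sketch of the Newton iteration outlined in the reply above; the function name and the trial values $a=2$, $b=3$, $c=13$ (for which $x=2$) are illustrative assumptions, not taken from the discussion:

```python
import math

def solve_sum_of_powers(a, b, c, x0, tol=1e-12, max_iter=100):
    """Solve a**x + b**x = c for x by Newton's method (sketch)."""
    x = x0
    for _ in range(max_iter):
        f = a**x + b**x - c                           # f(x)  = a^x + b^x - c
        fp = a**x * math.log(a) + b**x * math.log(b)  # f'(x) = a^x ln a + b^x ln b
        if fp == 0:
            raise ZeroDivisionError("f'(x_n) = 0; Newton step undefined")
        x_next = x - f / fp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x  # may not have converged to the requested tolerance

# Illustrative instance: 2^x + 3^x = 13 has the exact solution x = 2.
print(solve_sum_of_powers(2, 3, 13, x0=math.log(13, 3)))
```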
- Convergence to high precision may also fail due to the lack of precision in floating-point arithmetic. For an equation with an exact integer solution, direct numerical computation of the powers can give $c$ on the dot, but evaluating $f$ at that solution with floating-point exponentiation does not result in zero but produces a small nonzero value. --Lambiam 08:04, 1 April 2023 (UTC)
- In general there are several ways for Newton's method to fail; see the section "Failure analysis" in that article. I think the main point, though, is that the equation is not solvable in closed form and one must resort to numerical methods to get an approximate value. Of course, nowadays you can plug an equation into Wolfram Alpha and get a numerical answer. --RDBury (talk) 17:00, 1 April 2023 (UTC)
- The issue above is not among the failure modes mentioned in that section, which are mathematical in nature. Mathematically, the left-hand side evaluated at the exact solution is exactly equal to $c$. But the actual computed value is not; we get a small nonzero discrepancy.
- --Lambiam 20:04, 1 April 2023 (UTC)
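A small Python check of the floating-point effect described above, on an illustrative instance (the values are assumptions, not the ones from the thread): the powers of small integers evaluate exactly, but routing the same computation through $e^{x \ln a}$ can leave a tiny nonzero residual.

```python
import math

a, b, c, x = 2.0, 3.0, 13.0, 2.0  # hypothetical instance with exact solution x = 2

direct = a**x + b**x - c          # 2^2 and 3^2 are exact in floating point: 0.0
via_exp = math.exp(x * math.log(a)) + math.exp(x * math.log(b)) - c

print(direct)    # 0.0
print(via_exp)   # may be a tiny nonzero value (on the order of 1e-15) due to rounding
```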
- Thanks, Lambiam, I had no idea this was such a difficult problem to solve for the general case! I wonder if there is some way to guarantee convergence (for most cases anyway) by dividing through by C? Playing with that idea I even found an interesting approximation for x:
- And since as x approaches infinity, we get a rough estimate:
- In the case of , we find that
- Ignoring extreme corner cases, setting our initial value $x_0$ using this estimated root should yield numerical convergence in most cases, no?
Earl of Arundel (talk) 18:58, 1 April 2023 (UTC)
- Whoops, that isn't right! Earl of Arundel (talk) 19:14, 1 April 2023 (UTC)
- You can get faster to your fifth line without dividing by $c$:
- It appears plausible that using $x_0 = \log_b c$ gives faster convergence, or, when swapping $a$ and $b$, using $x_0 = \log_a c$. I have not investigated this, though. If $a$ and $b$ are similar in size and not at opposite sides of $1$, we can approximate $a^x + b^x$ by $2\left(\sqrt{ab}\right)^x$ and use $x_0 = \dfrac{\ln c - \ln 2}{\tfrac12(\ln a + \ln b)}.$
- For the case of , we then get --Lambiam 19:38, 1 April 2023 (UTC)
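A short Python sketch of the geometric-mean starting value proposed above, again on a hypothetical instance (the numbers are not from the thread):

```python
import math

def initial_estimate(a, b, c):
    """Starting value from approximating a^x + b^x by 2*(sqrt(a*b))^x."""
    # Solving 2*(sqrt(a*b))^x = c gives x = (ln c - ln 2) / ((ln a + ln b) / 2).
    return (math.log(c) - math.log(2)) / (0.5 * (math.log(a) + math.log(b)))

# Hypothetical instance: 2^x + 3^x = 13, exact solution x = 2.
print(initial_estimate(2, 3, 13))  # roughly 2.09, already close to the root
```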
- Nice! That does seem to work pretty well. Where does the $\ln 2$ term come from though? It seems to randomly pop up in a lot of equations, come to think of it. Why is that, I wonder? Earl of Arundel (talk) 20:58, 1 April 2023 (UTC)
- Instead of solving $a^x + b^x = c$, we set off by solving $2\left(\sqrt{ab}\right)^x = c$, whose solution is $x = \dfrac{\ln c - \ln 2}{\tfrac12(\ln a + \ln b)}.$
- --Lambiam 21:21, 1 April 2023 (UTC)
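Spelled out (a routine rearrangement, added here for clarity), solving the approximating equation shows where the $\ln 2$ comes from:

\[
2\left(\sqrt{ab}\right)^x = c
\;\Longrightarrow\;
\ln 2 + \tfrac{x}{2}\left(\ln a + \ln b\right) = \ln c
\;\Longrightarrow\;
x = \frac{\ln c - \ln 2}{\tfrac{1}{2}\left(\ln a + \ln b\right)}.
\]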
- Ah, right. Subtracting $\ln 2$ has the same effect as dividing by $2$ in that context. Thanks again! Earl of Arundel (talk) 21:39, 1 April 2023 (UTC)