Wikipedia:Reference desk/Archives/Mathematics/2016 May 12
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
May 12
System of Linear Equations
Is the following statement true? It seems wrong to me, but I can't find any counterexample...
Statement: Let $k \in \mathbb{N}$, and let there be a system of linear equations such that each equation has exactly $k$ terms, where $x_1, \ldots, x_n$ are the variables of these equations. That is, we have a system of linear equations, each of the form $z_1 + z_2 + \cdots + z_k = 1$, where each term $z_j$ is either some variable $x_i$ or $1 - x_i$. For example, the system may consist of the equation $x_1 + (1 - x_2) + x_3 = 1$ (here, $k = 3$).
This system of equations has some solution over $\mathbb{R}$ iff it has a 0-1 solution (i.e., it has some solution over $\{0,1\}$). 213.8.204.30 (talk) 06:43, 12 May 2016 (UTC)
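For small instances, the statement can be tested mechanically: a linear system $Ax = b$ is consistent over $\mathbb{R}$ iff its least-squares residual is zero, and 0-1 solutions can be brute-forced. A minimal Python sketch (the encoding below is hypothetical, not from the thread):

```python
import itertools
import numpy as np

# Hypothetical encoding: each equation is a list of (variable_index, negated)
# pairs, where the term is x_i if negated is False and (1 - x_i) if negated is
# True; every equation's terms must sum to 1.

def to_matrix(equations, n):
    """Rewrite the term-sum form as a standard linear system A x = b."""
    A = np.zeros((len(equations), n))
    b = np.ones(len(equations))
    for row, eq in enumerate(equations):
        for i, negated in eq:
            if negated:
                A[row, i] -= 1.0  # (1 - x_i) contributes -x_i ...
                b[row] -= 1.0     # ... and moves its constant 1 to the right side
            else:
                A[row, i] += 1.0
    return A, b

def has_real_solution(A, b, tol=1e-9):
    """A x = b is consistent over the reals iff its least-squares residual vanishes."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ x - b) < tol

def has_01_solution(A, b, n):
    """Brute force over {0,1}^n; only practical for small n."""
    return any(np.allclose(A @ np.array(v, dtype=float), b)
               for v in itertools.product((0, 1), repeat=n))

# The single-equation example from the question: x1 + (1 - x2) + x3 = 1.
A, b = to_matrix([[(0, False), (1, True), (2, False)]], 3)
print(has_real_solution(A, b), has_01_solution(A, b, 3))  # True True
```

Applied to the counterexamples in the replies below, the same two checks should report a real solution but no 0-1 solution.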
- I bet it's true for k=2. For k=3, here is a counter-example:
- a+b+c=1
- (1-a)+d+(1-d)=1
- (1-b)+d+(1-d)=1.
- If you don't like having the same variable twice in a clause, use
- a+b+c=1
- (1-a)+d+(1-e)=1
- (1-b)+d+(1-e)=1
- d+f+g=1
- e+f+g=1
- --JBL (talk) 04:02, 13 May 2016 (UTC)
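An added note spelling out why the first counterexample works (the reasoning is implicit in the equations above):

```latex
\begin{align*}
(1-a) + d + (1-d) = 1 &\;\Longrightarrow\; 2 - a = 1 \;\Longrightarrow\; a = 1,\\
(1-b) + d + (1-d) = 1 &\;\Longrightarrow\; 2 - b = 1 \;\Longrightarrow\; b = 1,\\
a + b + c = 1 &\;\Longrightarrow\; c = 1 - a - b = -1.
\end{align*}
```

So $(a, b, c, d) = (1, 1, -1, t)$ solves the system over $\mathbb{R}$ for every $t$, but any 0-1 assignment is forced to $a = b = 1$ and then $c = -1 \notin \{0,1\}$, so no 0-1 solution exists. The five-equation variant works the same way, since $d + f + g = 1$ and $e + f + g = 1$ force $d = e$.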
- It's a great counterexample! Thank you!
- What about the following statement: for $k \geq 3$, if the system of equations has some solution over $\mathbb{R}$, and it doesn't have any 0-1 solution, then there exists some variable $x_i$ such that $x_i = -1$ in all the solutions over $\mathbb{R}$ of the system of equations? (that is, if a solution is an assignment to the variables, then in every solution, it holds that $x_i$ is assigned the value $-1$) 213.8.204.65 (talk) 05:07, 13 May 2016 (UTC)
- That is certainly false, since I could switch c for 1-c everywhere. If you replace "-1" by "-1 or 2" then I don't have any intuition about it, unfortunately. JBL (talk) 12:59, 13 May 2016 (UTC)
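An added note to make the symmetry explicit: in the first counterexample the equations still force $a = b = 1$ after substituting $c = 1 - c'$, so the first equation becomes

```latex
a + b + (1 - c') = 1 \;\Longrightarrow\; c' = a + b = 2 \quad\text{in every real solution},
```

i.e. the forced value flips from $-1$ for $c$ to $2$ for $c'$, which is why the conjecture could at best hope to pin a variable to "$-1$ or $2$".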
- Thank you! 213.8.204.65 (talk) 06:43, 14 May 2016 (UTC)
- Ok, here's another example that maybe will make you give up hope ;):
- a+b+w=1
- (1-a)+b+x=1
- a+(1-b)+y=1
- (1-a)+(1-b)+z=1
- JBL (talk) 18:52, 14 May 2016 (UTC)
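An added note on why this example also rules out the "$-1$ or $2$" version: solving each equation for its third variable gives the general real solution

```latex
w = 1 - a - b, \qquad x = a - b, \qquad y = b - a, \qquad z = a + b - 1,
```

with $a$ and $b$ free. Taking $a = b = \tfrac{1}{2}$ gives a real solution with $w = x = y = z = 0$; for 0-1 values of $a$ and $b$ exactly one of $w, x, y, z$ is forced to $-1$, so there is no 0-1 solution, yet which variable is forced depends on $(a, b)$, and no single variable takes the value $-1$ (or $2$) in every real solution.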
Numerical recipes for fixed point convergence
I have a smooth function $f\colon \mathbb{R}^N \to \mathbb{R}^N$ with at least one fixed point $x^*$ such that $f(x^*) = x^*$. It also has the desirable property that if you take $x_{n+1} = f(x_n)$ then $x_n \to x^*$ as $n \to \infty$ for most starting guesses. In other words, it is an attractive fixed point.
My problem is that the function is very computationally expensive to evaluate, requiring about 5 minutes per iteration (using parallel processing on a 16-core workstation). Currently I need to follow the sequence 40 or 50 steps to get a value for $x^*$ within the tolerance that I need. Having this process take a few hours would be "okay" except that I'd like to repeat the process many more times for other variants of $f$. Hence, I'd like to find a way to accelerate the process of finding $x^*$ while hopefully only calling $f$ 10 or 15 times. I tried using a canned root-finding system (on $f(x) - x$) but that converged even more slowly than just following the sequence.
Does anyone have suggestions for numerical recipes to accelerate the process of finding such a fixed point? Though $f$ is smooth, it is also safe to say that I don't have any useful prior knowledge of its derivatives, so gradient-type methods will have to rely on whatever information can be gained from evaluating $f$. Dragons flight (talk) 15:01, 12 May 2016 (UTC)
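For reference, the baseline being described is plain fixed-point (Picard) iteration; a minimal sketch, where `f` stands for the expensive function and `x0` for the starting guess (names chosen here, not from the post):

```python
import numpy as np

def fixed_point(f, x0, tol=1e-6, max_iter=50):
    """Plain Picard iteration x_{n+1} = f(x_n), stopping when successive
    iterates differ by less than tol; each pass costs one call to f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = f(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x  # best available estimate if the tolerance was not reached
```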
- Can you get anything by looking at first differences (or second differences or third...)? Robinh (talk) 19:48, 12 May 2016 (UTC)
- As the function is slow to compute, approximate it by some other function which is fast to compute. Bo Jacoby (talk) 08:40, 13 May 2016 (UTC).
- Have you tried something from the article about series acceleration? 213.8.204.65 (talk) 09:26, 13 May 2016 (UTC)
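One concrete series-acceleration device that needs nothing beyond the iterates themselves is Aitken's $\Delta^2$ process applied componentwise to the vector sequence; a rough sketch (a generic technique, not something proposed in the thread, and only sensible where the second differences are not tiny):

```python
import numpy as np

def aitken_step(x0, x1, x2, eps=1e-12):
    """Componentwise Aitken delta-squared extrapolation from three consecutive
    iterates x0, x1, x2 of x_{n+1} = f(x_n)."""
    d1 = x1 - x0
    d2 = x2 - 2.0 * x1 + x0
    acc = x2.copy()                 # fall back to the newest iterate ...
    safe = np.abs(d2) > eps         # ... where the denominator is negligible
    acc[safe] = x0[safe] - d1[safe] ** 2 / d2[safe]
    return acc
```

Used Steffensen-style, one would compute two new iterates, extrapolate, and restart the iteration from the extrapolated point, so each acceleration cycle costs two evaluations of $f$.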
- I've since tried implementing Broyden's method. However, I'm not sure if I have done it correctly as it doesn't seem to be improving the convergence right now. Dragons flight (talk) 12:42, 13 May 2016 (UTC)
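For comparison, here is a minimal "good Broyden" sketch applied to $g(x) = f(x) - x$, seeded with $B = -I$ so that the first step reproduces an ordinary Picard step (this is just one standard formulation, not necessarily what was implemented; a dense $N \times N$ matrix is still workable for $N \approx 3{,}000$):

```python
import numpy as np

def broyden_fixed_point(f, x0, tol=1e-6, max_iter=15):
    """'Good Broyden' quasi-Newton iteration on g(x) = f(x) - x.
    B approximates the Jacobian of g; one call to f per loop pass
    (plus one up front)."""
    x = np.asarray(x0, dtype=float)
    B = -np.eye(x.size)               # with B = -I the first step is x -> f(x)
    g = f(x) - x
    for _ in range(max_iter):
        dx = -np.linalg.solve(B, g)   # quasi-Newton step
        x_new = x + dx
        g_new = f(x_new) - x_new
        if np.linalg.norm(g_new) < tol:
            return x_new
        dg = g_new - g
        # Rank-one "good Broyden" update: B += (dg - B dx) dx^T / (dx . dx)
        B += np.outer(dg - B @ dx, dx) / (dx @ dx)
        x, g = x_new, g_new
    return x
```

A sanity check on a small synthetic problem (where $f$ is cheap) should show whether the update is behaving before spending 5-minute evaluations on it.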
- I think it depends on how big your N is. If you're talking about $\mathbb{R}^5$ then it certainly should be possible to accelerate convergence since after 10 iterations you should be able to estimate the Jacobian pretty accurately. If it's $\mathbb{R}^{50}$ then even if f were linear you wouldn't know the Jacobian until you're already done, but it might be possible to get some useful information about it before then. If $\mathbb{R}^{500}$ then the Jacobian is basically unknown and I think acceleration techniques pretty much rely on it. It also depends on how rapidly the Jacobian changes; otherwise no estimate is going to be useful. Another question is how fast the iterates converge now; if convergence is already quadratic then you're already doing the best you can with linear methods. One additional thought though, since you're doing this with multiple f's, if it's possible to gauge how similar they are then you might try sequencing them so that similar f's are together, then use the ending point of each f as the starting point for the next one. --RDBury (talk) 14:33, 13 May 2016 (UTC)
- N is roughly 3,000. I haven't systematically tried to characterize the speed of convergence, but it doesn't feel very fast. I often observe values move 1 unit at step n, and then 0.8 to 0.9 units in the same direction at step n+1, which would suggest a fairly slow rate of convergence. As you suggest, I am hopeful that the endpoint of similar f's can be useful for acceleration, but I haven't tried that yet. Dragons flight (talk) 15:18, 13 May 2016 (UTC)
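RDBury's sequencing idea fits in a few lines; a sketch assuming `fs` is the list of variant functions ordered so that neighbours are similar and `solve` is any fixed-point solver (for example one of the sketches above):

```python
def solve_family(fs, x0, solve):
    """Solve a sequence of related fixed-point problems, warm-starting each
    one from the fixed point of the previous function."""
    results = []
    x = x0
    for f in fs:
        x = solve(f, x)   # previous fixed point becomes the next starting guess
        results.append(x)
    return results
```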
Infinite product
Let . Does have a closed form? What is ? 24.255.17.182 (talk) 23:22, 12 May 2016 (UTC)
- Some limits, Dragons flight (talk) 13:39, 13 May 2016 (UTC)