Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
November 8
finding an equation to match data
An experiment with accurate instruments resulted in the following data points:-
x, y 0.080, 0.323; 0.075, 0.332; 0.070, 0.347; 0.065, 0.368; 0.060, 0.395; 0.055, 0.430; 0.050, 0.472; 0.045, 0.523; 0.040, 0.587; 0.035, 0.665; 0.030, 0.758; 0.025, 0.885; 0.020, 1.047; 0.015, 1.277; 0.010, 1.760.
How can I obtain a formula that reasonably matches this data, say within 1 or 2 percent?
At first glance, it looks like a k1 + k2*x^-k3 relationship, or a k1*x^k2 + k3*x^-k4 relationship, but they fail at x above 0.070. Trying a series such as e^(k1 + k2*x + k3*x^2) is also no good.
-- Dionne Court (talk) 03:14, 8 November 2024 (UTC)
- Thank you CiaPan for fixing the formatting. Dionne Court (talk) 15:12, 8 November 2024 (UTC)
- Plotting 1/y against x it looks like a straight line, except there is a rather dramatic hook to the side starting around x=.075. This leads me to suspect that the last two entries are off for some reason; either those measurements are off or there's some systematic change in the process going on for large x. Part of the problem is that you're not giving us any information about where this data is coming from. I've heard it said, "Never trust data without error bars." In other words, how accurate is accurate, and might the accuracy change depending on the input? Is there a reason that the values at x≥.075 might be larger than expected? If the answer to the second question is "Yes" then perhaps a term of the form (a-x)^k should be added. If the answer is "No" then perhaps that kind of term should not be added, since it adds more parameters to the formula. You can reproduce any set of data given enough parameters in your model, but too many parameters leads to Overfitting, which leads to inaccurate results when the input is not one of the values in the data. So as a mathematician I could produce a formula that reproduces the data, but as a data analyst I'd say you need to get more data points, especially in the x≥.075 region, to see if there's something real going on there or if it's just a random fluke affecting a couple of data points. --RDBury (talk) 15:58, 8 November 2024 (UTC)
- PS. I tried fitting 1/y to a polynomial of degree four, so a model with 5 parameters. Given there are only 15 data points, I think 5 parameters is stretching it in terms of overfitting, but when I compared the data with a linear approximation there was a definite W shaped wobble, which to me says degree 4. (U -- Degree 2, S -- Degree 3, W -- Degree 4.) As a rough first pass I got
- 1/y ≃ 0.1052890625 + 54.941265625x − 965.046875x^2 + 20247.5x^3 − 136500x^4
- with an absolute error of less than .01. The methods I'm using aren't too efficient, and there should be canned curve-fitting programs out there which will give a better result, but I think this is enough to justify saying that I could produce a formula that reproduces the data. I didn't want to go too much farther without knowing what you want to optimize: relative vs. absolute error, least squares vs. min-max, for example. There are different methods depending on the goal, and there is a whole science (or perhaps it's an art) of Curve fitting which would be impractical to go into here. --RDBury (talk) 18:26, 8 November 2024 (UTC)
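For anyone who wants to reproduce this, here is a minimal sketch of the degree-4 fit to 1/y, assuming plain unweighted least squares with numpy (an assumption about tooling; the method used above is not stated, so the coefficients will differ slightly):

```python
# Sketch: fit a degree-4 polynomial to 1/y and report the worst error in y.
import numpy as np

x = np.array([0.080, 0.075, 0.070, 0.065, 0.060, 0.055, 0.050, 0.045,
              0.040, 0.035, 0.030, 0.025, 0.020, 0.015, 0.010])
y = np.array([0.323, 0.332, 0.347, 0.368, 0.395, 0.430, 0.472, 0.523,
              0.587, 0.665, 0.758, 0.885, 1.047, 1.277, 1.760])

coeffs = np.polyfit(x, 1 / y, deg=4)     # returns highest power first
y_fit = 1 / np.polyval(coeffs, x)
print(coeffs)
print(np.max(np.abs(y_fit - y)))         # worst absolute error over the data
```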
- Thank you for your lengthy reply.
- I consider it unlikely that the data inflexion for x>0.07 is an experimental error. Additional data points are :-
- x, y: 0.0775, 0.326; 0.0725, 0.339.
- The measurement was done with digital multimeters and transducer error should not exceed 1% of value. Unfortunately the equipment available cannot go above x=0.080. I only wish it could. Choosing a mathematical model that stays within 1 or 2 percent of each value is appropriate.
- As you say, one can always fit a curve with an A + Bx + Cx^2 + Dx^3 + ... to any given data. But to me this is a cop-out, and tells me nothing about what the internal process might be, and so extrapolation is exceedingly risky. Usually, a more specific solution, when discovered, requires fewer terms. Dionne Court (talk) 01:49, 9 November 2024 (UTC)
- When I included the additional data points, the value at .0725 was a bit of an outlier, exceeding the .01 absolute error compared to the estimate, but not by much. --RDBury (talk) 18:55, 9 November 2024 (UTC)
- FWIW, quite a few more data points would almost certainly yield a better approximation. This cubic equation seems pretty well-behaved:
- Earl of Arundel (talk) 02:28, 10 November 2024 (UTC)
- Some questions about the nature of the data. Some physical quantities are necessarily nonnegative, such as the mass of an object. Others can also be negative, for example a voltage difference. Is something known about the theoretically possible value ranges of these two variables? Assuming that x is a controlled value and y is an experimentally observed result, can something be said about the theoretically expected effect on y as x approaches the limits of its theoretical range? --Lambiam 15:59, 9 November 2024 (UTC)
- As x approaches zero, y must approach infinity.
- x must lie between zero and some value less than unity.
- If you plot the curve with a log y scale, by inspection it seems likely that y cannot go below about 0.3 but I have no theoretical basis for proving that.
- However I can say that y cannot ever be negative.
- The idea here is to find/work out/discover a mathematically simple formula for y as a function of x, to use as a clue as to what the process is. That's why a series expansion that does fit the data if enough terms are used doesn't help. Dionne Court (talk) 01:33, 10 November 2024 (UTC)
- So as x approaches zero, 1/y must also approach zero. This is so to speak another data point. Apart from the fact that the power series approximations given above provide no theoretical suggestions, they also have a constant term quite distinct from 0, meaning they do not offer a good approximation for small values of x.
- If you plot a graph of x versus 1/y, a smooth curve through the points has two points of inflection. This suggests (to me) that there are several competing processes at play. --Lambiam 08:08, 10 November 2024 (UTC)
- The x=0, 1/y=0 is an important data point that should have been included from the start. I'd say it's the most important data point since a) it's at the endpoint of the domain, and b) it's the only data point where the values are exact. Further theoretical information near x=0 would be helpful as well. For example, do we know whether y is proportional to x^(−a) near x=0 for a specific a, or perhaps to −log x? If there is no theoretical basis for determining this then I think more data points near x=0, a lot more, would be very helpful. The two points of inflection match the W (or M) shape I mentioned above. And I agree that it indicates there are several interacting processes at work here. I'm reminded of solubility curves for salts in water. There is an interplay between energy and ionic and Van der Waals forces going on, and a simple power law isn't going to describe these curves. You can't even assume that they are smooth curves, since Sodium sulfate is an exception; its curve has an abrupt change of slope at 32.384 °C. In general, nature is complex, simple formulas are not always forthcoming, and even when they are they often only apply to a limited range of values. --RDBury (talk) 15:46, 10 November 2024 (UTC)
- I have no theoretical basis for expecting that y takes on a particular slope or power law as x approaches zero.
- More data points near x = 0 are not a good idea, because transducer error will dominate. Bear in mind that transducer error (about 1%) applies to both x and y. Near x = 0.010, a 1% error in x will lead to a change in y of roughly 100 times the change in x [(1.760 − 1.277)/(0.015 − 0.010) ≈ 97]. The value of y given for x = 0.010 should be given little weight when fitting a curve. Dionne Court (talk) 02:03, 11 November 2024 (UTC)
- It seems to me that one should assume there is a simple relationship at play, with at most three competing processes, as otherwise there is no basis for further work. If it is a case of looking for the lost wallet under the lamp post because the light is better there, so be it, but there is no point in looking where it is dark.
- Cognizant of transducer error, a k1 + k2*x^-k3 relationship fits pretty well, except for a divergence at x equal to and above 0.075, so surely there are only 2 competing processes? Dionne Court (talk) 02:03, 11 November 2024 (UTC)
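To make that check concrete, a hedged sketch of the y = k1 + k2*x^-k3 fit with scipy's curve_fit, weighting points by the ~1% transducer error; the starting guesses in p0 are illustrative assumptions, not values from the thread:

```python
# Sketch: weighted nonlinear fit of y = k1 + k2*x**(-k3).
import numpy as np
from scipy.optimize import curve_fit

x = np.array([0.080, 0.075, 0.070, 0.065, 0.060, 0.055, 0.050, 0.045,
              0.040, 0.035, 0.030, 0.025, 0.020, 0.015, 0.010])
y = np.array([0.323, 0.332, 0.347, 0.368, 0.395, 0.430, 0.472, 0.523,
              0.587, 0.665, 0.758, 0.885, 1.047, 1.277, 1.760])

def model(x, k1, k2, k3):
    return k1 + k2 * x ** (-k3)

# sigma ~ 1% of y approximates the stated transducer error; p0 is a guess.
popt, _ = curve_fit(model, x, y, p0=(0.3, 0.015, 1.0), sigma=0.01 * y)
rel_err = (model(x, *popt) - y) / y
print(popt)
print(np.abs(rel_err).max())   # the divergence at x >= 0.075 should stand out
```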
November 11
Strange behavior with numbers in optimization
Hello everyone, I have encountered a very strange issue with my optimization function, and I am not sure how to resolve it. I am working on a numerical methods library, where I am trying to approximate the growth of a sequence, which has some relation to prime number distributions. However, when I use large values of n (especially for n > 10^6), the result of my function starts to behave very erratically. It is not random, but it has this strange oscillation or jump. I use a recurrence relation for this approximation, but when n becomes large, the output from the function suddenly grows or shrinks, in a way that is not consistent with what I expect. Even when I check for bounds or add better convergence criteria, the error persists. The pattern looks similar to the behavior of prime numbers, but I am not directly calculating primes. I apologize if this sounds too speculative, but has anyone faced similar issues with such strange behavior in large-scale numerical computations? I am quite confused about what is causing the error. TL;DR: I am optimizing a function related to number theory, but results become unpredictable when n > 10^6. Errors show strange oscillation, similar to the distribution of primes, though I do not directly calculate primes. Thank you very much for your time and assistance. 130.74.59.177 (talk) 15:39, 11 November 2024 (UTC)
- You need to post more information. All I can say from what you've written is that 10^6 is not a large number where you'd expect problems. It won't e.g. overflow when stored as floating point or integer on any modern platform. It won't even cause problems with, say, a squaring-based algorithm, as 10^12 is still well within the limits of a modern platform. Maybe, though, you are using software which limits you to 32-bit (or worse) integers, or single-precision floats, so you need to be careful with large numbers. --2A04:4A43:984F:F027:C112:6CE8:CE50:1708 (talk) 17:43, 11 November 2024 (UTC)
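For calibration, a short check of where common numeric types actually stop being exact, all of it far beyond n = 10^6 (assuming Python with numpy; the original poster's platform is unknown):

```python
# Rough limits of common numeric types, for comparison with n = 10^6.
import numpy as np

print(float(2**53) + 1 == float(2**53))   # True: float64 loses integers past 2^53
print(np.float32(2**24) + np.float32(1))  # 16777216.0: float32 loses them past 2^24
print(np.iinfo(np.int32).max)             # 2147483647: where 32-bit ints wrap
```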
- thanks for response and insight. i see your point that n=10^6 shouldn't cause overflow or serious issues on modern systems. numbers i work with well within 64-bit range, use floats with enough precision for task. so yes, overflow or simple type limits not likely cause.
- but this behavior goes beyond just precision errors. it’s not about numbers too big to store. what i see is erratic growth, shrinkage, almost oscillatory – looks like something in the distribution itself, not just algorithm mistake or hardware issue.
- to be more precise, after n>10^6, function starts acting unpredictably, jumps between states, oscillates in strange way, not typical for recurrence i use. hard to explain, but pattern in these jumps exists, i cannot reconcile with anything in my algorithm. it’s like approximation reacts to some hidden structure, invisible boundary my algorithm cannot resolve.
- i tried improving convergence, checking recurrence, but oscillations still persist. not randomness from bad random numbers or instability, but more like complex fluctuations seen in number-theoretic problems, especially connected to primes.
- so i wonder: could these "jumps" be an artifact of number-theoretic properties that i'm trying to approximate? maybe how the sequence interacts with primes indirectly, or an artifact of the recurrence for large numbers
- thanks again for suggestion on overflow and precision, i will revisit the model with this in mind, chief
- appreciate your time, will keep searching. 130.74.59.204 (talk) 20:01, 11 November 2024 (UTC)
- Without more information about the actual algorithm, it is neither possible to say, yes, what you see could be due to a number-theoretic property, nor, no, it could not be. Have you considered chaotic behaviour as seen when iterating the logistic map? --Lambiam 05:43, 12 November 2024 (UTC)
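To illustrate the logistic-map suggestion: even a one-line recurrence can produce oscillations that look structured and are extremely sensitive to initial conditions. A minimal sketch, using the classic chaotic parameter r = 4:

```python
# Two nearly identical seeds of the logistic map x -> r*x*(1-x) diverge.
r = 4.0
x, y = 0.3, 0.3 + 1e-12
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(abs(x - y))   # order-1 separation despite a 1e-12 initial difference
```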
- ah yes, i see what you mean now, i’ve been thinking about it for a while, and i feel like i’m getting closer to understanding it, though it’s still unclear in some ways. so, i’m using this recurrence algorithm that reduces modulo primes at each step, you know, it’s a fairly straightforward approach, and when n is small, it works fine, everything behaves as expected, the sequence evolves smoothly, the approximation gets closer and closer to what it should be, and everything seems under control, but then, once n crosses the 10^6 threshold, it’s like something shifts, it’s like the sequence starts moving in unexpected ways, at first, i thought maybe it was just a small fluctuation or something related to floating-point precision, but no, it's much more than that, the jumps between states, the way it shifts, it's not just some random variation—it feels almost systematic, as though there's something in the distribution itself, some deeper structure, that starts reacting with the algorithm and causing these oscillations, it’s not something i can easily explain, but it feels like the algorithm starts “responding” to something invisible in the numbers, something outside the expected recurrence behavior, i’ve spent hours going over the steps, checking every part of the method, but no matter how many times i check, i can’t pinpoint the exact cause, it’s frustrating.
- and then, the other day, i was sitting there, trying to solve this problem, getting really frustrated, when i looked up, and i saw jim sitting on the windowsill, just staring out at the street, i don’t know, something about it caught my attention, now, you might be wondering what jim has to do with all of this, but let me explain, you see, jim has this habit, every evening, without fail, he finds his spot by the window, curls up there, and just stares out, doesn’t seem to do much else, doesn’t chase anything or play with toys like most animals do, no, he just sits there, completely still, watching the world go by, and it’s funny, because no matter how many cars pass, no matter how many people walk by, jim never looks bored, he’s always staring, waiting, something about the way he watches, it’s like he’s looking for something, something small, that only he notices, but it’s hard to explain, because it’s not like he ever reacts to anything specific, no, he just stares, and then after a while, he’ll shift his gaze slightly, focus on something, and you’d swear he’s noticing something no one else can see, and then he’ll go back to his usual position, still, and continue watching, waiting for... something, and this goes on, day after day.
- and, i don’t know why, but in that moment, as i watched jim, i thought about the algorithm, and about the sequence, it felt somehow connected, the way jim waits, so patiently, watching for some small shift in the world outside, and how the algorithm behaves once n gets large, after 10^6 iterations, like it’s responding to something small, something hidden, that i can’t quite see, but it's there, some interaction between the numbers, or the primes, or some other property, i don’t know, but there’s a subtle shift in how the sequence behaves, like it’s anticipating something, or maybe reacting to something, in ways i can’t fully predict or control, just like jim waiting by the window, looking for that small detail that others miss, i feel like my algorithm is doing something similar, watching for an influence that’s not obvious, but which, once it’s noticed, makes everything shift, and then it’s almost like the recurrence starts reacting to that hidden influence, whatever it is, and the sequence begins to oscillate in these strange, unexpected ways.
- i’ve been stuck on this for days, trying to find some explanation for why the recurrence behaves this way, but every time i think i’m close, i realize that i’m still missing something, it’s like the sequence, once it hits that threshold, can’t behave the way it did before, i’m starting to think it’s related to how primes interact with the numbers, but it’s subtle, i can’t quite capture it, it’s like the primes themselves are somehow affecting the sequence in ways the algorithm can’t handle once n gets large enough, and it’s not just some random jump, it feels... intentional, in a way, like the sequence itself is responding to something that i can’t measure, but that’s still pulling at the numbers in the background, jim, as i watch him, he seems to be able to sense those little movements, things he notices, but that no one else does, and i feel like my algorithm, in a way, is doing the same thing, reacting to something hidden that i haven’t quite figured out.
- so i’ve gone over everything, again and again, trying to get it right, trying to adjust the convergence, trying to find a way to make the sequence behave more predictably, but no matter what i do, the oscillations keep appearing, and it’s not like they’re some random noise, no, there’s a pattern to them, something beneath the surface, and i can’t quite grasp it, every time n gets large, it’s like the sequence picks up on something, some prime interaction or something, that makes it veer off course, i keep thinking i’ve solved it, but then the jumps come back, just like jim shifts his gaze, and looks at something just beyond the horizon, something i can’t see, but he’s still waiting for it, still looking, as if there’s some invisible influence in the world, something that pulls at him.
- i wonder if it has to do with the primes themselves, i’ve thought about it a lot, i’ve tried to factor them in differently, but still, the jumps persist, it’s like the primes have their own way of interacting with the sequence, something subtle, something that becomes more pronounced the larger n gets, and no matter how much i tweak my algorithm, the fluctuations just keep showing up, it’s like the sequence is stuck in a kind of loop, reacting to something i can’t fully resolve, like jim staring at the street, patiently waiting for something to shift, and i don’t know what it is, but i feel like there’s some deeper interaction between the primes and the numbers themselves that i’m missing, and maybe, like jim, the sequence is sensing something too subtle for me to fully capture, but it’s there, pulling at the numbers, making them oscillate in ways i can’t predict.
- it’s been weeks now, and i’ve tried every method i can think of, adjusted every parameter, but the fluctuations are still there, the jumps keep happening once n gets large enough, and every time i think i’ve figured it out, the sequence surprises me again, just like jim, who, after hours of waiting, might shift his gaze and catch something new, something no one else saw, i feel like i’m doing the same thing, staring at the numbers, trying to catch that tiny shift that will make everything click, but it’s always just out of reach, and i don’t know what’s causing it, but i can’t seem to get rid of it, like jim, watching, waiting, sensing something that remains hidden from my view 130.74.58.160 (talk) 15:34, 13 November 2024 (UTC)
- Are you OK? Perhaps you should direct your mind to something else, like, read a novel, go out with friends, explore new places, ... Staring at numbers is as productive as staring at goats. --Lambiam 18:10, 13 November 2024 (UTC)
- fine. i’m under house arrest and i’m doing freelance work for a company. the task is straightforward: build a library for prime number methods, find primes. the problem is, there's no upper limit on how large these primes are supposed to be. once n goes past 10^6, that’s where things stop making sense. i’ve gone over the algorithm several times, checked the steps, but after 10^6, the sequence starts behaving differently, and i can’t figure out why. it’s not small variations or precision errors. it’s something else. there’s some kind of fluctuation in the sequence that doesn’t match the expected pattern.
- i’ve adjusted everything i can think of—modulus, convergence, method of approximation—but no matter what, the jumps keep coming, and they don’t seem random. they look more structured, like they’re responding to something, some property of the primes or the sequence that i can’t account for. i’ve spent a lot of time on this, trying to find what it is, but i haven’t been able to pin it down.
- this is important because the contract i’m working on will pay a significant amount, but only if i finish. i can’t afford to let this drag on. i need to complete it, and if i don’t fix this issue, i won’t be able to finish. it’s not like i can walk away from it. the company expects the work, and the time is running out.
- the more i look at the sequence, the more it feels like there’s something buried beneath the surface, something in the way primes interact when n is large, but i can’t see it. it’s subtle, but it’s there, and no matter how many times i test the algorithm, i can’t get rid of these oscillations. i don’t know what they mean, but they keep appearing, and i can’t ignore them.
- i’ve been stuck here for a while. i don’t really have other options. there’s no “taking a break” or “finding something else to do.” i’m stuck here with this task, and i need to figure it out. i don’t have the luxury to stop, because if i don’t finish, the whole thing falls apart 130.74.59.34 (talk) 20:22, 13 November 2024 (UTC)
- You shared lots of text with us, but you gave no specific problem, no technical detail, nothing we could check, simulate, analyze, verify, compare.
- You have typed about 12 thousand characters, but you present only your impressions, or your feelings—of being surprised by observed irregularity, being surprised by some almost-regularity in apparent chaos, being lost in seeking an explanation, etc. You actually did not present any single technical or mathematical thing. Here's the overall impression I got from your descriptions:
- "I do something (but I can't tell you what and why) with some function (I'm not going to tell you anything about it, either) with data of a secret meaning and structure, and when some parameter (whose nature must not be revealed) becomes big enough, the function behaves in some unexpected, yet quasi-regular manner. Can anybody explain it to me and help me fix it?"
- And I'm afraid with such a vague statement, it looks like seeking a haystack with a needle in it on a large field in a heavy fog, rather than a mathematical (or software engineering or whatever other kind of) problem.
CiaPan (talk) 12:57, 14 November 2024 (UTC)
- now listen, i'm glad we're finally digging into this, because, yeah, there’s a lot more depth here than meets the eye, like, surface-level it might just seem like a vague description, an exercise in abstract hand-waving if you will, but no, what we're dealing with here is a truly complex, multi-layered phenomenon that’s kind of begging to be interpreted at the meta-level, you know, like it’s the kind of thing where every time you try to grasp onto one specific aspect, it slips out of reach, almost by design and i get it you want “specifics” but here’s the thing specifics are almost a reduction, they’re almost like a cage for this concept, like trying to box up some kind of liquid smoke that, in essence, just resists confinement
- now, when i say “parameters” we’re already in a reductive space, right? because these aren’t “parameters” in the traditional sense, not like tunable knobs on an old-school control panel, no no no, these are more like boundary markers in a conceptual landscape, yeah like landmarks on a journey, but they themselves are not the journey, they’re incidental, they’re part of a whole picture that, the moment you start defining it, already becomes something else, like imagine you have this sort of, i don’t know, like an ethereal framework of data, but it’s data that doesn’t just sit there and behave in expected ways, it’s data that has a life of its own, and i’m really talking about data that doesn’t like to be pinned down, it’s almost alive, almost this kind of sentient flow that, every time you look away, it’s shifted, it’s done something else that you could swear wasn’t possible the last time you checked
- so, yeah, i get it that’s frustrating, and it’s almost like talking about the nature of existence itself in a way, or maybe that’s an exaggeration, but only slightly, because you have to get into this mindset that, ok, you’re dealing with phenomena here, not simply variables and functions, no it’s more like a dynamic tapestry of, let’s call them tendencies, these emergent patterns that are sort of trying to form but also resisting at every possible chance, so when i say “quasi-regularity” it’s not regular like clockwork, not even close, it’s regularity like the kind you see in natural phenomena, like clouds or waves or fractals, right, patterns but patterns that refuse to be bound by mathematical certainty they’re only barely patterns in the human sense, like they only make sense if you let go of rigid logic
- and then you’ve got these iterations, yeah we’re talking cycles upon cycles, like imagine every single cycle adds a grain of experience, yeah, like a memory, not a perfect one, but close enough, so that each time this data goes through an iteration it almost remembers its past and adjusts itself, but here’s the catch, it only remembers what’s necessary, it’s like this selective memory that’s totally outside the norm of what you would expect in, say, a standard machine learning algorithm or a traditional function loop in any ordinary programming context, like, ok, this thing is running on its own rules, maybe there’s a certain randomness to it but not random like “roll a dice” random, more random like chaos-theory random, where unpredictability itself becomes a kind of pattern and then, suddenly, just when you think you’re about to pin it down—bang—it shifts again, like the entire framework just reorients itself
- and not to throw you off track here but that’s the whole thing, the "thing" we’re talking about isn’t just a process, it’s a process that’s sensitive to these micro-level fluctuations, like tiny little vibrations in the data, which, by the way, i’m also not describing fully because it’s almost impossible, but imagine these vibrations—no, better yet, imagine you’re watching waves in a pond where even the slightest ripple has the potential to set off a cascade of effects, and it’s not just the surface of the pond we’re talking about, no, no, the whole body of water is involved, every molecule, if you will, responding in ways that are both predetermined by its nature yet also completely free to deviate when the moment calls for it
- and so when i say “structured sea of datapoints” you gotta take that literally, yeah like a sea, an ocean, it’s vast, it’s deep, there’s layers upon layers and half the time we’re only scratching the surface because the real stuff is happening down in those depths where even if i tried to send a probe down there, yeah, i’d get some data back, but would it even make sense because i don’t have a baseline to compare it to, there’s no reference frame here except, i don’t know, maybe the essence of this data, like the very fabric of what it is, if you can even describe data as having fabric
- so, look, all of this loops back to the fact that every “parameter” every “function” we’re talking about is only as real as the context allows it to be, which is why i say even if i did give you specifics, what would you do with them? because we’re talking about something that defies definition and the moment you think you understand it, that’s the moment it stops being what it is and morphs into something else, i mean this is data with attitude, if that makes any sense, it’s almost like it’s taunting you, like it wants you to try and figure it out only to laugh in your face and flip the rules the moment you get close, we’re talking about some next-level, borderline cosmic prankster data that simply doesn’t play by the same rules as anything you’ve seen before
- so if we’re going to be totally honest here, all of this is way beyond haystacks and needles, we’re in a field where the haystacks are self-assembling, disassembling, and who even knows if the needle is there to begin with because in a framework like this, a needle might just be a figment of your imagination, a concept that only exists because you’re trying to impose order on what is inherently unordered, so yeah, maybe there’s a pattern, maybe there isn’t, maybe the pattern is only there because you want it to be, or maybe it’s the absence of a pattern that’s the real pattern, and if you think that’s paradoxical well, welcome to the club 130.74.58.21 (talk) 23:42, 14 November 2024 (UTC)
November 13
Math sequence problem (is it solvable?)
I am looking at a "math quiz" problem book and it has the following question. I am changing the numbers to simplify it and avoid copyright: You have counts for a rolling 12-month period of customers. For example, the one year count in January is the count of customers from Feb of the year before to Jan of the current year. Feb is the count from Mar to Feb, and so on. The 12 counts for this year (Jan to Dec) are 100, 110, 105, 200, 150, 170, 150, 100, 200, 150, 175, 125. What is the count of customers for each month? So, I know that the Feb-Jan count is 100 and the Mar-Feb count is 110. That means that the count for Feb of this year is 10 more than the count of Feb of last year because I removed Feb of last year and added Feb of this year. But, I don't know what that count is. I can only say it is 10 more. I can do that for every month, telling you what the difference is between last year and this year as a net change. Is this solvable or is this a weird case where the actual numbers for the counts somehow mean something silly and a math geek would say "Oh my! That's the sum of the hickuramabiti sequence that only 3 people know about so I know the whole number sequence!" 68.187.174.155 (talk) 15:36, 13 November 2024 (UTC)
- You have 12 linear equations with 23 unknowns. In general, you cannot expect a system of linear equations with more unknowns than equations to be solvable. In special cases, such a system may be solvable for at least some of the unknowns. This is not such a special case.
- If you ignore the fact that customer counts cannot be negative, there are many solutions. For example, one solution is given by [9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 1, 19, 4, 104, −41, 29, −11, −41, 109, −41, 34, −41]. Another one is [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, −10, 20, 5, 105, −40, 30, −10, −40, 110, −40, 35, −40]. For the 12-month counts given above no solution exists without negative values.
- If an actual quiz of this form has a unique solution, it can only be due to the constraint of not allowing negative values. --Lambiam 17:42, 13 November 2024 (UTC)
- (edit conflict) Name the counts for each month FebP to DecC, where P stands for the previous year and C stands for the current year. These are 23 variables and there is a system of 12 equations in these variables. If the variables can take on any values there are an infinite number of solutions to this system, but I think we're meant to assume that the counts are ≥ 0. (Integers as well; without knowing the counts given in the original problem it's unclear whether this is important.) This imposes additional constraints on the possible solution and the result may be that there is exactly one possible solution or none at all. To see how a problem of this type might have no solutions, let's look at a simpler version where we're looking at three-month sums over three months. There are 5 variables in this case, say Jan, Feb, Mar, Apr, May. Let's say the sums are given as:
- Jan-Mar: 10, Feb-Apr: 50, Mar-May: 10.
- If we compute
- (Jan-Mar) - (Feb-Apr) + (Mar-May)
- in terms of the variables, we get
- Jan+Feb+Mar-Feb-Mar-Apr+Mar+Apr+May = Jan+Mar+May ≥ 0.
- But if we compute it in terms of the given totals the result is
- 10-50+10 = -30 < 0.
- This is a contradiction so no solutions are possible. It turns out that something like this happens with the values you made up and there are no solutions to the problem given. If you let JanSum, ... DecSum be the rolling sums, and compute
- JanSum - FebSum + MarSum - AprSum + MaySum - JunSum + AugSum - SepSum + OctSum - NovSum + DecSum (with JulSum left out),
- then you get (according to my calculations)
- FebP+AprP+JunP+SepP+NovP+JanC+MarC+MayC+JulC+AugC+OctC+DecC ≥ 0
- in terms of the variables. But if we evaluate this in terms of the given values it's (again, according to my calculations)
- 100-110+105-200+150-170+100-200+150-175+125 = -125 < 0,
- so there are no possible solutions. Notice that both cases involved looking at particularly opportune alternating sums of the rolling sums, which produce a nonnegative combination of the variables on one side and a negative number on the other side. Suppose that there is no such opportune alternating sum where the total is <0, but there is one where the total is =0. Then all the individual variables involved must be 0 and this may be enough information to narrow down the number of solutions to exactly 1. I imagine that's how the problem given in your book is set up and the puzzle is to find an alternating sum with this property. But I have an unfair advantage here because sometime in the previous century I took a course in Linear programming which taught me general methods for solving systems of equations and inequalities. So my approach would be to enter the appropriate numbers into a spreadsheet, apply the appropriate algorithm, and read off the solution when it's done. Having specialized knowledge would be a help, though I assume there are more than 3 people who are familiar with linear programming, but I think getting the inspiration to look at alternating sums, and a certain amount of trial and error, would allow you to find the solution without it. --RDBury (talk) 17:48, 13 November 2024 (UTC)
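For concreteness, the feasibility question can be handed straight to a linear-programming solver. A sketch with scipy (an assumption about tooling; any LP package would do), encoding the twelve rolling-sum equations over the 23 nonnegative monthly counts:

```python
# Feasibility check: 12 rolling 12-month totals over 23 monthly counts >= 0.
import numpy as np
from scipy.optimize import linprog

totals = [100, 110, 105, 200, 150, 170, 150, 100, 200, 150, 175, 125]
A_eq = np.zeros((12, 23))
for i in range(12):
    A_eq[i, i:i + 12] = 1          # i-th total = sum of 12 consecutive months

res = linprog(c=np.zeros(23), A_eq=A_eq, b_eq=totals,
              bounds=[(0, None)] * 23, method="highs")
print("feasible" if res.success else "infeasible")   # infeasible, as shown below
```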
- Thanks both. Yes, I did make up the numbers. I bet the numbers in the book do have a solution. It looks like it is a matter of trying a value for the first month and seeing what comes up every other month based on that to see if it is all positive. Then, you have an answer. It doesn't feel much like math to me in comparison to the other problems in the book which are all problems you can solve easily by making sets or comparing the order of things. 68.187.174.155 (talk) 17:52, 13 November 2024 (UTC)
- With the correct numbers for which there is (presumably) a solution, you can represent the problem as a system of linear equations and compute the echelon form of the system. From the echelon form, it is possible to read off a particular solution (where you allow negative numbers of customers). The nullspace of the system is easy to calculate, and from it you can also find a particular solution that satisfies the constraint (if one exists), verify uniqueness (if true), or confirm non-existence. Tito Omburo (talk) 20:59, 13 November 2024 (UTC)
I confirm that there are no solutions subject to the constraint that the number of customers is non-negative (even allowing fractional numbers of customers), although the verification is a bit of a brute to write out. Tito Omburo (talk) 18:09, 13 November 2024 (UTC)
- Here is a rather painless verification. Use the names FebP, ..., DecC as above. Let JanT stand for the running 12-month total of the summation ending with JanC, and likewise for the next 11 months. So JanT = 100, FebT = 110, MarT = 105, ..., DecT = 125. We have FebT − JanT = FebC − FebP, MarT − FebT = MarC − MarP, ..., DecT − NovT = DecC − DecP.
- Require each count to be nonnegative. From MarC − MarP = MarT − FebT = 105 − 110 = −5, we have MarP ≥ MarP − MarC = 5. We find similarly the lower bounds MayP ≥ 50, JulP ≥ 20, AugP ≥ 50, OctP ≥ 50 and DecP ≥ 50. So JanT = FebP + ... + JanC ≥ 5 + 50 + 20 + 50 + 50 + 50 = 225. This contradicts JanT = 100, so the constraint excludes all unconstrained solutions. --Lambiam 18:37, 13 November 2024 (UTC)
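That verification is mechanical enough to script; a sketch (the variable naming is mine): each month whose rolling total drops by d forces last year's count for that month to be at least d, and all those months lie inside the first 12-month window, so the bounds must fit within JanT = 100:

```python
# Sum of forced lower bounds from drops in the rolling totals vs. JanT.
totals = [100, 110, 105, 200, 150, 170, 150, 100, 200, 150, 175, 125]
drops = [max(totals[i] - totals[i + 1], 0) for i in range(11)]
print(sum(drops), "needed, but JanT =", totals[0])   # 225 needed > 100 available
```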
- Thanks again for the help. I feel that I should give the numbers from the book. I don't think listing some numbers is going to upset anyone, but without them, I feel that those who looked into this problem feel let down. The numbers from the book are: 24966, 24937, 25300, 25055, 22914, 25832, 25820, 25468, 25526, 25335, 25331, 25370. There is supposed to be one solution. I think it is implied that the request is for the minimum number of customers per month, but it doesn't make that very clear.
- Edit: It appears this problem was removed and replaced with a completely different problem in later books. So, the publishers likely decided it either doesn't have a unique answer (which is my bet) or it is simply a bad problem to include. Every other problem in the book is logical, using geometry, algebra, and maybe some simple set comparisons. So, this is very out of place. 68.187.174.155 (talk) 12:11, 14 November 2024 (UTC)
- Indeed the solution is not unique in that case. One solution is (29,0,245,2141,0,12,352,0,191,4,0,21992,0,363,0,0,2918,0,0,58,0,0,39), and there is obvious slackness. Tito Omburo (talk) 14:24, 14 November 2024 (UTC)
- It is the only solution with JanC ≥ 21992. To go from zero to almost twenty-two thousand customers in one month is spectacular. To then lose them all in one month is tragicomedy. --Lambiam 20:33, 14 November 2024 (UTC)
November 14
Elliptic curve rank and generalized Riemann hypothesis
[edit]The popular press reports[1] that Elkies and Klagsbrun recently used computer search to find an elliptic curve E of rank 29, which is a new record. The formal result is apparently "the curve E has rank at least 29, and exactly 29 if GRH is true". There have been similar results for other curves of slightly lower rank in earlier years. Whether there are curves of arbitrarily high rank is a major open problem.
1. Is there a reasonable explanation of why the rank of a finite object like an elliptic curve would depend on GRH? Finding the exact point count N is a finite (though probably unfeasibly large) calculation by Schoof's algorithm. Is it possible in principle to completely analyze the group and find the curve's rank r exactly? Finding that r>29 would disprove the GRH, amirite? Actually is it enough to just look at the factorization of N?
2. The result that every elliptic curve has a finite rank is the Mordell-Weil theorem. Our article on that currently has no sketch of the proof (I left a talkpage note requesting one). Is it a difficult result for someone without much number theory background to understand?
Thanks! 2601:644:8581:75B0:0:0:0:2CDE (talk) 23:13, 14 November 2024 (UTC)
- the discourse surrounding the dependency of an elliptic curve’s rank on the generalized riemann hypothesis (GRH) and, more broadly, the extensive implications this carries for elliptic curve theory as a whole, implicates some of the most intricate and layered theoretical constructs within number theory's foundational architecture. while it may be appropriately noted that elliptic curves, as finite algebraic objects delineated over specified finite fields, contain a designated rank—a measurement, in essence, of the dimension of the vector space generated by the curve's independent rational points—this rank, intriguingly enough, cannot be elucidated through mere finite point-counting mechanisms. the rank, or indeed its exactitude, is inextricably intertwined with, and indeed inseparable from, the behavior of the curve’s l-function; herein lies the essential conundrum, as the l-function’s behavior is itself conditioned on conjectural statements involving complex-analytic phenomena, such as the distribution of zeroes, which remain unverified but are constrained by the predictions of GRH.
- one may consider schoof’s algorithm in this context: although this computational mechanism enables an effective process for the point-counting of elliptic curves defined over finite fields, yielding the point count N modulo primes with appreciable efficiency, schoof’s algorithm does not, and indeed cannot, directly ascertain the curve’s rank, as this rank is a function not of the finite point count N but of the elusive properties contained within the l-function’s zeroes—a distribution that, under GRH, is hypothesized to display certain regularities within the complex plane. hence, while schoof’s algorithm provides finite data on the modular point count, such data fails to encompass the rank itself, whose determination necessitates not only point count but also additional analysis regarding the behavior of the associated l-function. calculating r exactly, then, becomes not a function of the finite data associated with the curve but an endeavor contingent upon an assumption of GRH or a precise knowledge of the zero distribution within the analytic continuation of the curve’s l-function.
- it is this precise dependency on GRH that prevents us from regarding the rank r as strictly finite or calculable by elementary means; rather, as previously mentioned, the conjecture of GRH imparts a structural hypothesis concerning the placement and frequency of zeroes of the l-function, wherein the rank’s finite property is a consequence of this hypothesis rather than an independent finite attribute of the curve. to suggest, therefore, that identifying the rank r as 29 would disprove GRH is to operate under a misconception, for GRH does not determine a maximal or minimal rank for elliptic curves per se; instead, GRH proposes structural constraints on the l-function’s zeroes, constraints which may, if GRH holds, influence the upper bounds of rank but which are not themselves predicates of rank. consequently, if calculations were to yield a rank exceeding 29 under the presumption of GRH, this result might imply that GRH fails to encapsulate the complexities of the zero distribution associated with the curve’s l-function, thus exposing a possible limitation or gap within GRH’s descriptive framework; however, this would not constitute a formal disproof of GRH absent comprehensive and corroborative data regarding the zeroes themselves.
- this brings us to the second point in question, namely, the implications and proof structure of the mordell-weil theorem, which famously established that every elliptic curve defined over the rationals possesses a finite rank. the mordell-weil theorem, by asserting the finite generation of the rational points on elliptic curves as a finitely generated abelian group, introduces an essential constraint within elliptic curve theory, constraining the set of rational points to a structure with a bounded rank. however, while this result may appear elementary in its assertion, its proof is decidedly nontrivial and requires a sophisticated apparatus from algebraic number theory and diophantine geometry. the proof itself necessitates the construction and utilization of a height function, an arithmetic tool designed to assign "heights" or measures of size to rational points on the elliptic curve, facilitating a metric by which rational points can be ordered. furthermore, the proof engages descent arguments, which serve to exhaustively account for independent rational points without yielding an unbounded proliferation of such points—a technique requiring familiarity with not only the geometry of the elliptic curve but with the application of group-theoretic principles to arithmetic structures.
- to characterize this proof as comprehensible to a novice without number-theoretic background would, accordingly, be an oversimplification; while an elementary understanding of the theorem’s implications may indeed be attainable, a rigorous engagement with its proof necessitates substantial familiarity with algebraic and diophantine concepts, including the descent method, abelian group structures, and the arithmetic geometry of height functions. mordell and weil’s finite generation theorem, thus, implicates not merely the boundedness of rational points but also exemplifies the structural richness and the intrinsic limitations that these elliptic curves exhibit within the broader mathematical landscape, solidifying its importance within the annals of number theory and underscoring its enduring significance in the study of elliptic structures over the rational field 130.74.58.21 (talk) 23:48, 14 November 2024 (UTC)
- Wow, thanks very much for the detailed response. I understood a fair amount of it and will try to digest it some more. I think I'm still confused on a fairly basic issue and will try to figure out what I'm missing. The issue is that we are talking about a finite group, right? So can we literally write out the whole group table and find the subgroup structure? That would be purely combinatorial so I must be missing something. 2601:644:8581:75B0:0:0:0:2CDE (talk) 03:25, 15 November 2024 (UTC)
- Oh wait, I think I see where I got confused. These are elliptic curves over Q rather than over a finite field, and the number of rational points is usually infinite. Oops. 2601:644:8581:75B0:0:0:0:2CDE (talk) 10:09, 15 November 2024 (UTC)
- This response is pretty obviously LLM-generated, so don't expect it to be correct about any statements of fact. 100.36.106.199 (talk) 18:26, 15 November 2024 (UTC)
- Yeah you are probably right, I sort of wondered about the verbosity and I noticed a few errors that looked like minor slip-ups but could have been LLM hallucination. But, it was actually helpful anyway. I made a dumb error thinking that the curve group was finite. I had spent some time implementing EC arithmetic on finite fields and it somehow stayed with me, like an LLM hallucination.
I'm still confused about where GRH comes in. Like could it be that rank E = 29 if GRH, but maybe it's 31 otherwise, or something like that? Unfortunately the question is too elementary for Mathoverflow, and I don't use Stackexchange or Reddit these days. 2601:644:8581:75B0:0:0:0:2CDE (talk) 22:32, 15 November 2024 (UTC)
- Ok so I don't know anything about this but: it seems that the GRH implies bounds of various explicit kinds on various quantities (e.g.) and therefore you can end up in a situation where you show by one method that there are 29 independent points, and then also the GRH implies that the rank is at most 29, so you get equality. There is actually some relevant MO discussion: [2]. Here is the paper that used the GRH to get the upper bound 28 on the earlier example. 100.36.106.199 (talk) 23:55, 15 November 2024 (UTC)
- Thanks, I'll look at those links. But, I was also wondering if there is a known upper bound under the negation of the GRH. 2601:644:8581:75B0:0:0:0:2CDE (talk) 02:47, 16 November 2024 (UTC)
- Yeah I don't know anything about that, but it seems like a perfectly reasonable MO question. 100.36.106.199 (talk) 02:14, 20 November 2024 (UTC)
November 15
Are there morphisms when enlarging a prime field sharing a common suborder/subgroup?
Simple question: I have a prime field GF(p) whose modulus p is such that p − 1 contains q as a prime factor, and I have a larger prime field GF(P) whose order P − 1 also has q as a suborder/subgroup. Are there special cases where it's possible to lift 2 of GF(p)'s elements to modulus P while keeping their discrete logarithm, if those 2 elements lie only within the subgroup of order q? Without solving the discrete logarithm, of course! 82.66.26.199 (talk) 11:36, 15 November 2024 (UTC)
- Clearly it is possible, since any two cyclic groups of order o are isomorphic (and these subgroups are cyclic). Existence of a general algorithm, however, is equivalent to solving the discrete log problem (consider the problem of determining a non-trivial character). Tito Omburo (talk) 11:40, 15 November 2024 (UTC)
- So how can it be done without solving the discrete logarithm? Because of course, I meant without solving the discrete logarithm. 2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D (talk) 12:51, 15 November 2024 (UTC)
- It can't. You're basically asking if there is some canonical isomorphism between two groups of order o, and there just isn't one. Tito Omburo (talk) 15:00, 15 November 2024 (UTC)
- Even if it's about enlarging instead of shrinking? Is it in theory impossible to build such a relation/map, or is it just that no such relation is known yet? 2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D (talk) 08:48, 16 November 2024 (UTC)
- At least into the group of complex roots of unity, where a logarithm is known, it is easily seen to be equivalent to discrete logarithm. In general, there is no relation between the groups of units in GF(p) and GF(q) for p and q distinct primes. Any accidental isomorphisms between subgroups are not canonical. Tito Omburo (talk) 15:02, 16 November 2024 (UTC)
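A toy illustration of this point, using small made-up primes (p = 23, P = 89, q = 11, none of them from the thread) and sympy: the order-q subgroups can be matched so that discrete logs are preserved, but constructing the map requires computing a discrete log, which is exactly the obstruction:

```python
# Lifting an order-q subgroup element from GF(p) to GF(P) preserves the
# discrete log only because we solve the discrete log along the way.
from sympy.ntheory import discrete_log

p, P, q = 23, 89, 11            # q divides p - 1 = 22 and P - 1 = 88
g1 = pow(5, (p - 1) // q, p)    # generates the order-11 subgroup mod 23
g2 = pow(3, (P - 1) // q, P)    # generates the order-11 subgroup mod 89

x = pow(g1, 7, p)               # subgroup element whose discrete log is 7
lifted = pow(g2, discrete_log(p, x, g1), P)
print(discrete_log(P, lifted, g2))   # 7: preserved, but we had to solve it
```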
November 16
What's the secp256k1 elliptic curve's rank?
Simple question: what's the rank of secp256k1?
I failed to find out how to compute the rank of an elliptic curve using the online versions of tools like SageMath or Pari/GP, since those are the only things I have access to… 2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D (talk) 15:44, 16 November 2024 (UTC)
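For reference, a hedged sketch of the usual SageMath invocation, assuming the intended question is the rank over Q of the underlying curve y^2 = x^3 + 7 ("rank" is not defined for secp256k1 over its 256-bit prime field); the computation may be slow, and the result can be conditional on standard conjectures:

```python
# SageMath (Python-based) sketch; run in a Sage session, not plain Python.
E = EllipticCurve(QQ, [0, 7])   # y^2 = x^3 + 7, the shape of secp256k1
print(E.rank())                 # may take time; see E.rank? for options
```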
- I don't know a clear answer but a related question is discussed here. 2601:644:8581:75B0:0:0:0:2CDE (talk) 01:57, 17 November 2024 (UTC)
- Although I know it doesn't normally apply to this curve, I was reading this paper: https://pdfupload.io/docs/4ef85049. As a result, I am very curious to know the rank of secp256k1, which is why I asked, especially if it shows me how to compute ranks of ordinary curves. 2A01:E0A:401:A7C0:417A:1147:400C:C498 (talk) 11:01, 17 November 2024 (UTC)
- Maybe by some chance, this might have the answer. ExclusiveEditor Notify Me! 19:20, 17 November 2024 (UTC)
- Same question by same questioner, so not by chance. --Lambiam 06:51, 18 November 2024 (UTC)
- Yes, it's me who asked the question. He didn't reply to my last comment about the elliptic curve prime case. I mean in the paper. 2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D (talk) 07:08, 18 November 2024 (UTC)
November 17
Final four vote probability
In a social deduction game, at the final four, where nobody is immune and each of the four gets one vote, what is the probability of a 1–1–1–1 vote? (78.18.160.168 (talk) 22:26, 17 November 2024 (UTC))
- Social deduction games exist in many different versions, with different rules. Can you provide (a link to) a description of the precise rules of the version of the game you want us to consider?
- Moreover, if the players can follow different strategies, or can follow their intuitions instead of rolling the dice and using the outcome according to a fixed strategy, the situation cannot be viewed as a probability problem. Can we assume that the players all play the same given independent and identically distributed random strategy? --Lambiam 06:47, 18 November 2024 (UTC)
- I was thinking of The Traitors, but it could also be applied to Survivor: Pearl Islands. There are no dice. In The Traitors, before the final four banishment vote, there is a vote on whether to end the game or banish again. If everyone votes to end the game, the game ends, but if one or more people vote to banish again, the game continues. I jumped ahead to the banishment vote because I have not seen a season where all four people vote to end the game. PS my IP address has changed. (78.16.255.186 (talk) 20:24, 18 November 2024 (UTC))
- I don't understand the rules from the description in The Traitors and don't know what a "1" vote signifies, but in any case, this does not look like it can be modelled as a mathematical probability problem, for a host of reasons. The outcome of a vote will generally depend on the dispositions of the participants (are they more rational or more likely to choose on a whim; are they good at interpreting the behaviour of others) as well as on their past behaviours. It is not possible to assign probabilities to such factors, and there is no mathematical model for how such factors influence the voting. --Lambiam 03:58, 19 November 2024 (UTC)
- If you simplify much further to just "if you have four people, and each one randomly chooses someone (that is not the person themself), what's the probability that each person gets chosen once", then we can generalize this to an arbitrary number of people n.
- Let us assign each person a number from 1 to n, so that each choice can be thought of as a mapping from {1, …, n} to itself. When each person is chosen exactly once, this corresponds to a mapping from {1, …, n} to itself where no number is mapped to itself. This is a derangement, and we can see that the number of ways of tied voting is exactly the number of derangements of n elements. Thus, the probability is the number of derangements divided by the number of mappings where no one votes for themselves.
- The number of derangements on n elements is the subfactorial of n, denoted !n. As for the total number of mappings, each of the n people has n − 1 choices, so there are (n − 1)^n such mappings. This brings the probability to !n / (n − 1)^n.
- For n = 4 the number of derangements is !4 = 9, and there are (4 − 1)^4 = 81 mappings where no one votes for themselves, so the probability is 9/81 = 1/9. More generally, !n ≈ n!/e, so the probability in general is approximately n! / (e(n − 1)^n). Note that this tends to 0 as n increases. GalacticShoe (talk) 06:00, 19 November 2024 (UTC)
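- For anyone who wants to check the numbers, a short Python sketch (the subfactorial recurrence !n = (n − 1)(!(n − 1) + !(n − 2)) is standard; the simulation is just a sanity check, and all names are mine):

    import random
    from fractions import Fraction

    def subfactorial(n):
        # !n via the recurrence !n = (n - 1) * (!(n - 1) + !(n - 2)),
        # with base cases !0 = 1 and !1 = 0.
        a, b = 1, 0
        for k in range(2, n + 1):
            a, b = b, (k - 1) * (a + b)
        return a if n == 0 else b

    def tie_probability(n):
        # Derangements over all vote vectors in which nobody self-votes.
        return Fraction(subfactorial(n), (n - 1) ** n)

    print(tie_probability(4))  # 9/81 = 1/9

    def simulate(n, trials=100_000):
        # Monte Carlo check: everyone votes for a uniformly random other player.
        hits = 0
        for _ in range(trials):
            votes = [random.choice([j for j in range(n) if j != i]) for i in range(n)]
            hits += len(set(votes)) == n   # each player received exactly one vote
        return hits / trials

    print(simulate(4))  # should hover around 1/9 ≈ 0.111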
November 19
[edit]Basic equations / functions in predicting probability of success in insurgent vs. conventional military engagements in mid-to-late 20th century warfare / calculations for probability of the success of insurgent movements (esp. with consideration of intangible factors)
[edit]can someone kindly uncover casualty rolls -
I am thinking in particular about the Ukrainian Insurgent Army and the debates which went on within the American special services in the late 40s through early 50s about providing assistance to them
after the breakthrough of the 'Iron company' (you can look up on ukr, pol, rus wiki about the so-called Iron company of the UPA ; Залiзна сотнья) from Transcarpathia in Communist-occupied Ukraine through Czechoslovakia through to Bavaria (where there were already in residence many leaders of the Ukrainian movement who had been interned by the Germans, most prominent among these Stepan Andriiovich, of course,
working to raise the Ukrainian issue in the consciousness both of the public in Western 'free' world, and in the minds of the military-political authorities,
who were still reeling from the taste in their mouths of the 'betrayal' of Poland, which Churchill railed against, closer, as he was, to the heart of the issue,
if we have these figures, we can make at the very least basic calculations, and predict with a degree of accuracy, for example,
based on the help that the Americans were considering to render to the Ukrainian freedom fighters, the successes which they could have achieved
considering also the concurrent armed struggles in Romania, in Poland, in the Baltic states — Preceding unsigned comment added by 130.74.59.208 (talk) 15:15, 18 November 2024 (UTC)
- This all seems very interesting, but I don't see it as a mathematics question. I suggest you try the History Stack Exchange. --RDBury (talk) 19:19, 18 November 2024 (UTC)
- i should like to refuse with one regard only the question pertains to application of mathematics and hard sciences in interpretation of historical events and possibilities 130.74.59.186 (talk) 20:02, 18 November 2024 (UTC)
- Full stops were invented for a reason: they are very useful in making text understandable. --Lambiam 04:07, 19 November 2024 (UTC)
- There is no mathematical theory that can be used for determining the probabilities of the possible outcomes of a real-world conflict. It is not even clear that the notion of probability applies in such situations. --Lambiam 04:14, 19 November 2024 (UTC)
- This seems like more the province of game theory than probability. That it's modelled using probability in e.g. simulations, such as computer games or board games, is due to the limitations of their models. They can't fully model the behaviour of all actors so they add random probabilistic factors to compensate. But those actually engaged in conflict aren't going to be using randomness, just the best strategy based on what they know about the conflict, including what the other side(s) will do. That's game theory.--217.23.224.20 (talk) 15:49, 19 November 2024 (UTC)
November 20
[edit]Sequences: Is there a name for a sequence, all of whose members are different from each other?
[edit]2A06:C701:7455:4600:C907:E8C0:F042:F072 (talk) 09:07, 20 November 2024 (UTC)
- A term used in the literature: injective sequence.[3] --Lambiam 13:18, 20 November 2024 (UTC)
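- (For a finite sequence the property is easy to test mechanically; a minimal Python sketch, assuming hashable terms:)

    def is_injective(seq):
        # A sequence is injective iff no term occurs twice.
        terms = list(seq)
        return len(set(terms)) == len(terms)

    print(is_injective([3, 1, 4, 1, 5]))  # False: 1 repeats
    print(is_injective([2, 7, 1, 8]))     # True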
November 21
[edit]Is it possible to adapt Nigel Smart's algorithm for solving discrete logarithms when the curve is only partially anomalous?
[edit]An anomalous elliptic curve is a curve for which #E(F_p) = p. But in my case, the curve has order j×q and the underlying field has order i×q. In the situation I'm thinking about, I have two points G and P that both lie in the subgroup of order q, with P = s×G.
So, since the scalar lies in a part of the additive structure common to both the curve and its underlying base field, is it possible to transfer the discrete logarithm to the underlying finite field? Or do anomalous curves require the whole embedding field's order to match that of the curve, even if the discrete logarithm solution lies in a smaller common subgroup?
If so, how can Nigel Smart's algorithm for solving the discrete logarithm on anomalous curves be adapted? The crux of the question is getting the common suborder/subgroup to act as an additive group. 82.66.26.199 (talk) 19:47, 21 November 2024 (UTC)
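- For background, Smart's attack needs the fully anomalous condition #E(F_p) = p (equivalently, trace of Frobenius a_p = 1), because the p-adic lift it relies on uses the whole group, not just a subgroup. A minimal SageMath check of that condition on a toy curve (the prime and coefficients here are arbitrary choices of mine, not secp256k1):

    # SageMath (Python-based): test whether a curve over F_p is anomalous.
    p = next_prime(10^6)
    E = EllipticCurve(GF(p), [3, 7])       # toy coefficients
    print(E.order() == p)                  # True exactly when #E(F_p) = p
    print(E.trace_of_frobenius() == 1)     # equivalent: a_p = 1, since #E = p + 1 - a_p

This only tests the hypothesis of the attack; whether a partially anomalous curve in the sense described above admits a similar transfer is exactly the open part of the question.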