Wikipedia:Reference desk/Archives/Mathematics/2024 May 14
Mathematics desk
< May 13 | << Apr | May | Jun >> | May 15 >
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
May 14
Worst-case performance of a randomized primality test?
I am trying to get an idea about how efficient something like this might be. Let's say we have a number N composed of B binary bits, and we now want to generate a "certificate of primality" using the following method. First we choose a "confidence" parameter C which indicates the degree of statistical certainty desired. Then: (1) Select a random number R < N. (2) Take the GCD of N with R; if it is not 1 then N is composite, otherwise we consider R to be a "witness" of N's (possible) primality. (3) Repeat until the level of confidence reaches C. The only question is: how do we determine the number of iterations needed until C is satisfied? Earl of Arundel (talk) 16:39, 14 May 2024 (UTC)
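For concreteness, here is a minimal Python sketch of the procedure described above (an illustration of my own, not anything posted in the thread); the iteration count is left as a parameter, since choosing it is exactly the question being asked:

```python
from math import gcd
from random import randrange

def gcd_certificate(n, iterations):
    """Randomized GCD test as described above: stop as soon as a witness of
    compositeness is found; surviving every round only means "probably prime"
    at whatever confidence the iteration count corresponds to."""
    for _ in range(iterations):
        r = randrange(2, n)      # random R with 1 < R < N
        if gcd(n, r) != 1:       # R shares a nontrivial factor with N
            return False         # N is certainly composite
    return True                  # no witness of compositeness found
```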
- That is usually what is done and it's a good way to do it, but it can fall foul of problems like pseudoprimes, so a better test than the straightforward Fermat's little theorem test is used. See also AKS primality test for why a full test isn't normally done. The probabilistic tests linked from there give an estimate of their effectiveness, which is very good indeed. NadVolum (talk) 17:29, 14 May 2024 (UTC)
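As a quick illustration of the pseudoprime issue mentioned above: 341 = 11 × 31 is composite, yet it passes the straightforward base-2 Fermat test (a throwaway snippet, not from the thread):

```python
n = 341                  # 11 * 31, composite
print(pow(2, n - 1, n))  # prints 1, so 341 passes the base-2 Fermat test anyway
```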
- Yes of course, I am currently using the Miller-Rabin primality test, which does exhibit a very good "average performance". However my question was more along the lines of "how much less efficient would it be to do randomized GCD tests instead?" Because surely even *that* would produce a bona fide "certificate of confidence" (albeit perhaps much more slowly than otherwise). Now I do know that the probability of any two random integers being coprime is $6/\pi^2$, but I just can't seem to figure out how to "interpolate" that into a reliable (if inefficient) probabilistic test. Earl of Arundel (talk) 18:16, 14 May 2024 (UTC)
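As an aside, the $6/\pi^2 \approx 0.6079$ figure for two independent random integers is easy to sanity-check numerically (an illustrative script of mine, with an arbitrary sampling range of 10^12):

```python
from math import gcd, pi
from random import randrange

trials = 10**6
hits = sum(gcd(randrange(1, 10**12), randrange(1, 10**12)) == 1
           for _ in range(trials))
print(hits / trials)  # close to 0.6079
print(6 / pi**2)      # 0.60792...
```

Note, though, that this density applies to two independent random integers; in the test above one argument (N) is fixed, so the relevant probability depends on the factor structure of N, which is what the next reply quantifies.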
- If the number submitted to the famous EoA primality test may have been selected by an adversary, it can be the product of two almost equal primes. The probability that a single random test detects its non-primality is then about $2/\sqrt{N}$. This means that you need about $\tfrac12\sqrt{N}\,\ln\tfrac{1}{1-C}$ independent random GCD tests before you can assert with confidence $C$ that the number is prime. For example, for an $N$ that is the product of two primes close to $10^5$ you need some 228,818 random tests to achieve a 99% confidence level. Straightforward primality testing by trying successive odd numbers as divisors while skipping multiples of 3 requires about $\tfrac13\sqrt{N}$ trial divisions, for the example 33,124. --Lambiam 19:29, 14 May 2024 (UTC)
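A small sketch reproducing the estimates above; the value √N ≈ 99,375 (both prime factors close to 10^5) is an illustrative choice of mine, not taken from the thread:

```python
from math import log

def gcd_tests_needed(sqrt_n, confidence):
    # Worst case N = p*q with p ≈ q ≈ sqrt(N): a single random GCD test
    # detects a factor with probability about 2/sqrt(N), so solve
    # (1 - 2/sqrt(N))**k <= 1 - confidence for k.
    return log(1 - confidence) / log(1 - 2 / sqrt_n)

def trial_divisions_needed(sqrt_n):
    # Trial divisors of the form 6k +/- 1 up to sqrt(N): about sqrt(N)/3.
    return sqrt_n // 3

sqrt_n = 99_375  # both prime factors assumed to be close to 10**5
print(round(gcd_tests_needed(sqrt_n, 0.99)))  # about 228,800 random GCD tests
print(trial_divisions_needed(sqrt_n))         # 33,125 trial divisions
```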
- Wow, so not even close to being as efficient as the humble trial-division algorithm. (I don't know why that surprises me; it is more or less a "scatter-shot" approach to primality testing.) Of course Miller-Rabin blows both of those out of the water. Not only is the required number of iterations relatively low, the best-case performance with respect to detecting composites is typically excellent. (I think it even outperforms AKS on average.) Very elegant formulation of the lower bound there, by the way. I wouldn't have thought it could be reckoned so precisely. Kudos! Earl of Arundel (talk) 20:26, 14 May 2024 (UTC)
- One reason some primality tests run more efficiently is that they only tell you what you want to know: is the number prime? If you want to know more than that, namely a factor if it's composite, that will take longer; you're factoring the number instead of just showing that it's composite. A GCD test would produce a factor, so it's bound to be less efficient. (Of course, someone could come up with a factorization algorithm that's as fast as the fastest primality test, but that hasn't happened so far, and that's why non-factoring primality tests still have a niche.) --RDBury (talk) 03:58, 15 May 2024 (UTC)
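A toy illustration of that point, with made-up numbers: when the GCD test fires, it hands you a nontrivial factor rather than just a verdict of compositeness.

```python
from math import gcd

n = 10_007 * 10_009  # composite N, the product of two primes
r = 123 * 10_007     # a random R that happens to share a factor with N
g = gcd(n, r)
print(g)             # 10007: not merely "N is composite" but an actual factor
assert 1 < g < n and n % g == 0
```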