Trial division

Trial division is the most laborious but easiest to understand of the integer factorization algorithms. The essential idea behind trial division is to test whether an integer n, the integer to be factored, can be divided evenly by each number in turn, from two up to the square root of n.

For example, to find the prime factors of n = 70, one can try to divide 70 by successive primes: first, 70 / 2 = 35; next, neither 2 nor 3 evenly divides 35; finally, 35 / 5 = 7, and 7 is itself prime. So 70 = 2 × 5 × 7.

Trial division was first described by Fibonacci in his book Liber Abaci (1202).[1]

Method

Given an integer n (the integer to be factored), trial division consists of systematically testing whether n is divisible by any smaller number. Clearly, it is only worthwhile to test candidate factors less than n, and in order from two upwards, because an arbitrary n is more likely to be divisible by two than by three, and so on. With this ordering, there is no point in testing for divisibility by four if the number has already been determined not to be divisible by two, and so on for three and any other multiple of three, etc. Therefore, the effort can be reduced by selecting only prime numbers as candidate factors. Furthermore, the trial factors need go no further than √n because, if n is divisible by some number p, then n = p × q, and if q were smaller than p, n would have been detected earlier as being divisible by q or by a prime factor of q.

A definite bound on the prime factors is possible. Suppose P_i is the i-th prime, so that P_1 = 2, P_2 = 3, P_3 = 5, etc. Then the last prime number worth testing as a possible factor of n is P_i where P_{i+1}^2 > n; equality here would mean that P_{i+1} is a factor. Thus, testing with 2, 3, and 5 suffices up to n = 48, not just 25, because the square of the next prime is 49; below n = 25, just 2 and 3 are sufficient. Should the square root of n be an integer, then it is a factor and n is a perfect square.
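
As a quick brute-force check of this bound (an illustration, not a standard routine), the following short sketch confirms that every composite number up to 48 has 2, 3 or 5 as a factor, and that 49 = 7 × 7 is the first composite without one:

# Among 2..49, print every composite number with no factor in {2, 3, 5}.
for n in range(2, 50):
    is_composite = any(n % d == 0 for d in range(2, n))
    has_small_factor = any(n % p == 0 for p in (2, 3, 5))
    if is_composite and not has_small_factor:
        print(n)  # prints only 49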

An example of the trial division algorithm, using successive integers as trial factors, is as follows (in Python):

def trial_division(n: int) -> list[int]:
    """Return a list of the prime factors for a natural number."""
    a = []               # Prepare an empty list.
    f = 2                # The first possible factor.    
    while f*f <= n:      # While f is no larger than the square root of n ...
        if n % f == 0:   # The remainder of n divided by f might be zero.        
            a.append(f)  # If so, it divides n. Add f to the list.
            n //= f      # Divide that factor out of n.
        else:            # But if f is not a factor of n,
            f += 1       # Add one to f and try again.
    if n != 1:
        a.append(n)      # Add the remaining prime n to the list
    return a             # Prime factors may be repeated: 12 factors to 2,2,3.
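
For example, with the function above, the factorization of 70 worked out earlier, and the repeated factors of 12 mentioned in the comments, can be reproduced as follows:

print(trial_division(70))   # [2, 5, 7]
print(trial_division(12))   # [2, 2, 3]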

This version tests every integer up to the square root of n, not just primes. A more complicated implementation only testing primes would be significantly faster in the worst case.
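
One possible sketch of such a prime-only variant first lists the primes up to the square root of n with a Sieve of Eratosthenes and then divides only by those. The helper names below (primes_up_to, trial_division_primes) are illustrative rather than standard:

from math import isqrt

def primes_up_to(limit: int) -> list[int]:
    """Return the primes <= limit, using a simple Sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    for p in range(2, isqrt(limit) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [p for p in range(2, limit + 1) if is_prime[p]]

def trial_division_primes(n: int) -> list[int]:
    """Return the prime factors of a natural number n >= 2, trying only primes."""
    factors = []
    for p in primes_up_to(isqrt(n)):
        while n % p == 0:    # Divide out each prime factor completely.
            factors.append(p)
            n //= p
        if n == 1:
            break
    if n != 1:               # Whatever remains has no prime factor up to the
        factors.append(n)    # square root of the original n, so it is prime.
    return factors

A common lighter-weight compromise is wheel factorization, which skips multiples of the first few primes as candidates without storing a full table of primes.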

Speed

In the worst case, trial division is a laborious algorithm. For a base-2 n-digit number a, if it starts from two and works up only to the square root of a, the algorithm requires

    π(2^(n/2)) ≈ 2^(n/2) / ((n/2) ln 2)

trial divisions, where π(x) denotes the prime-counting function, the number of primes less than x. This does not take into account the overhead of primality testing to obtain the prime numbers as candidate factors. A useful table need not be large: P_{3512} = 32749 is the last prime that fits into a sixteen-bit signed integer, and P_{6542} = 65521 the last for unsigned sixteen-bit integers. That would suffice to test primality for numbers up to 65537^2 = 4,295,098,369. Preparing such a table (usually via the Sieve of Eratosthenes) would only be worthwhile if many numbers were to be tested. If instead a variant is used without primality testing, but simply dividing by every odd number less than the square root of the base-2 n-digit number a, prime or not, it can take up to about

    √a / 2 ≈ 2^((n/2) − 1)

trial divisions.

In both cases, the required time grows exponentially with the digits of the number.
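
To give a sense of these magnitudes, the following illustrative calculation (with n = 40 chosen arbitrarily) compares the exact number of primes up to 2^(n/2) with the approximation above, and with the roughly 2^((n/2) − 1) odd candidates of the variant without a prime table:

from math import log

def prime_count(x: int) -> int:
    """Exact prime-counting function pi(x), via a small sieve (fine for modest x)."""
    is_prime = [True] * (x + 1)
    count = 0
    for p in range(2, x + 1):
        if is_prime[p]:
            count += 1
            for m in range(p * p, x + 1, p):
                is_prime[m] = False
    return count

n = 40                                     # bit length of the number to be factored
bound = 2 ** (n // 2)                      # trial factors run up to sqrt(a) = 2**(n/2)
print(prime_count(bound))                  # 82025 prime candidates
print(round(bound / ((n / 2) * log(2))))   # 75639, from the approximation above
print(2 ** (n // 2 - 1))                   # 524288 odd candidates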

Even so, this is a quite satisfactory method, considering that even the best-known algorithms have superpolynomial time growth. For a number a chosen uniformly at random from integers of a given length, there is a 50% chance that 2 is a factor of a and a 33% chance that 3 is a factor of a, and so on. It can be shown that 88% of all positive integers have a factor under 100 and that 92% have a factor under 1000. Thus, when confronted by an arbitrary large a, it is worthwhile to check for divisibility by the small primes.
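
Those percentages can be reproduced approximately with a short calculation, under the heuristic assumption that the proportion of large integers with no prime factor below a bound B is close to the product of (1 − 1/p) taken over the primes p < B:

def fraction_with_factor_below(bound: int) -> float:
    """Heuristic fraction of large integers that have a prime factor below `bound`:
    one minus the product of (1 - 1/p) over the primes p < bound."""
    is_prime = [True] * bound
    no_small_factor = 1.0
    for p in range(2, bound):
        if is_prime[p]:
            no_small_factor *= 1 - 1 / p
            for m in range(p * p, bound, p):
                is_prime[m] = False
    return 1 - no_small_factor

print(round(fraction_with_factor_below(100), 2))    # 0.88
print(round(fraction_with_factor_below(1000), 2))   # 0.92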

However, many-digit numbers that do not have factors among the small primes can require days or months to factor with trial division. In such cases other methods are used, such as the quadratic sieve and the general number field sieve (GNFS). Because these methods also have superpolynomial time growth, a practical limit of n digits is reached very quickly. For this reason, in public key cryptography, values for a are chosen to have large prime factors of similar size, so that they cannot be factored by any publicly known method in a useful time period on any available computer system or computer cluster, such as supercomputers and computer grids. The largest cryptography-grade number that has been factored is RSA-250, a 250-digit number, using the GNFS and the resources of several supercomputers. The running time was 2700 core-years.

References

  1. Mollin, Richard A. (2002). "A brief history of factoring and primality testing B. C. (before computers)". Mathematics Magazine. 75 (1): 18–29. doi:10.2307/3219180. JSTOR 3219180. MR 2107288.