Talk:Computational complexity of mathematical operations

Untitled

I suppose that where you say:

"Here, complexity refers to the time complexity of performing computations on a Turing machine."

you should say:

"Here, complexity refers to the time complexity of performing computations to a RAM (Random Access Machine)."

Usually, when talking about polynomial-time operations, RAM models are used instead of TMs. This is because we can simulate a RAM on a TM, but only with an overhead of n^2. They are equally powerful, but the minimal running time changes.

Thanks for pointing this out. Fredrik Johansson 19:52, 31 March 2007 (UTC)[reply]
The complexity of integer multiplication given in the article is for multitape Turing machines (as defined in the book by Strassen et al.), it is possible to multiply faster on random access machines. (As far as I know, the remaining complexity results stated in the article are not affected.) Mm (talk) —Preceding undated comment was added at 15:49, 25 January 2009 (UTC).[reply]

Why isn't exponentiation listed? —Preceding unsigned comment added by 90.128.188.188 (talk) 12:00, 17 May 2008 (UTC)[reply]

Perhaps because the most commonplace clever exponentiation algorithm, at the heart of the discrete-logarithm cryptographic techniques, is actually not calculating the full product of the repeated multiplications: a number that occupies O(2^160) bits does not fit in RAM or on hard disk on conventional computers, and would take too long to calculate anyway. Instead, it is calculating that product modulo a prime number, which permits converting the exponent to sums and products of even and odd numbers (or other combinations of additions and subtractions) that equal the value of the exponent. Thus, those two are not an apples-to-apples comparison. Likewise, calculating exponentiation in base b is as small as L(1), whatever the complexity L of your chosen left-shift operation is. For example, on O(n) base-2 hardware, left-shift appears to software as O(1) because the hardware provides O(n) logic gates to accomplish the operation. In a positional number system for very large numbers, shifting 1 to the left by 1 million bits might imply O(n) writes to memory to zero out a run of words, such as 1,000,000/32 = 31,250 writes to RAM on a 32-bit processor or 1,000,000/64 = 15,625 writes on a 64-bit processor. Clearly, as the number of digits increases beyond the natural register size of the processor, left shift in a positional number system has a time complexity of O(n). But if someone is interested in efficient exponentiation in a base-b number system, then that person would likely be interested in the time complexity of converting to and from base b. Knuth's Seminumerical Algorithms discusses base-b conversions; this is easier to find in the table of contents than in the index, by the way. —optikos (talk) 17:31, 29 June 2008 (UTC)[reply]
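
For illustration, a minimal square-and-multiply sketch of modular exponentiation in Python (the name modpow and the test values are my own, not from the article):

 def modpow(base, exp, mod):
     # Square-and-multiply: scan the exponent's bits from least to most
     # significant, reducing every intermediate product mod `mod` so that
     # operands never grow beyond the size of the modulus.
     result = 1
     base %= mod
     while exp > 0:
         if exp & 1:                      # current exponent bit is 1
             result = (result * base) % mod
         base = (base * base) % mod       # square for the next bit
         exp >>= 1
     return result

 # Agrees with Python's built-in three-argument pow:
 assert modpow(7, 10**6 + 3, 2**127 - 1) == pow(7, 10**6 + 3, 2**127 - 1)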

What does lack of fastest algorithm mean?

I think that someone who is familiar with Schnorr's and Stumpf's work should explain this claim that no multiplication algorithm is fastest. Either this is trivially true for all algorithms, not merely for multiplication: there is always some small-size scenario where one of the high-growth-rate algorithms outperforms the slower-growth-rate algorithms. Or this is misleadingly worded and presented. Certainly, the shallowest meaning cannot be what was intended: no matter how many civilizations expend a seemingly infinite amount of effort on multiplication-algorithm research, there is always a faster (i.e., slower-growth-rate) multiplication algorithm that has yet to be discovered. Even if such an odd reading is what is intended, isn't that trivially true for all algorithms, not merely multiplication? Please explain further, especially in the article. —optikos (talk) 17:31, 29 June 2008 (UTC)[reply]

Fastest means asymptotically fastest. It is not true that a faster algorithm can always be found: it is easy to prove that the ordinary O(n) algorithm is optimal for addition (because each digit has to be visited at least once). Fredrik Johansson 19:15, 29 June 2008 (UTC)[reply]
Your response is a non sequitur. Plus, you did not modify the article, as I requested. I am asking that the article overtly explain what "is not the fastest" means exactly. What does Schnorr's & Stumpf's work precisely say when it makes this claim that Fürer's algorithm is not the asymptotically fastest multiplication algorithm? Does Schnorr's & Stumpf's work say that Bridgeman's O(n) multiplication algorithm to be published in the year 2032 is asymptotically still faster? Or does Schnorr's & Stumpf's work likewise say in 2032 that Bridgeman's algorithm once again is not the asymptotically fastest multiplication algorithm and that a still faster one exists? And so forth for all subsequent multiplication algorithms... This concept of what "not the asymptotically fastest" means is not presented in this article at all. Indeed, it is not presented in this discussion page at all, including in your non sequitur response. If it is not fully explained, I am going to remove the cryptic, unexplained reference to Schnorr's & Stumpf's work. At a very bare minimum, this article must hyperlink to another Wikipedia article that explains what Schnorr's & Stumpf's "not the asymptotically fastest" claim precisely means, if it does not explain it herein.—optikos (talk) 01:35, 3 August 2008 (UTC)[reply]
It means exactly what it says. There is no (asymptotically) fastest algorithm. In other words, for every algorithm, you can find another algorithm that is (asymptotically) faster; the second option in what you say above. I don't know a clearer way to explain this, but I'm open to suggestions. -- Jitse Niesen (talk) 13:07, 3 August 2008 (UTC)[reply]
I agree with User:Optikos. In its current wording, the statement is confusing and either nonsensical or trivial, and I am thus removing it since no effort has been made to make the statement any clearer and address the concerns Optikos brought up. —Lowellian (reply) 22:13, 11 May 2013 (UTC)[reply]

Long division

Why is long division assuming a single multiplication? Methinks that the O(n^2) of schoolbook long division, which is presented as M(1) where the chosen multiplication algorithm is schoolbook multiplication, is not based on the same foundations as Newton's method's M(n). I suspect that long division should be presented as O(n^3) here: for n digits there are n multiplication-subtraction pairs; multiplication dominates subtraction in complexity; thus there are a quantity n of O(n^2) schoolbook multiplications. O(n) times O(n^2) is O(n^3) for schoolbook long division. —optikos (talk) 17:31, 29 June 2008 (UTC)[reply]

There are O(n^2) single-digit multiplications, which take time O(n^2). Newton's method can find the answer with a fixed number of full-length multiplications, for time O(k * M(n)) = O(M(n)). CRGreathouse (t | c) 00:51, 6 July 2008 (UTC)[reply]
Then I consider our notation here to be a highly confusing abuse of Landau notation. Here M(n) means one invocation of the n-bit multiplication algorithm, not a quantity of n multiplications. Perhaps a notation such as M_n(1) would more directly state that Newton's method is a constant quantity of n-bit multiplications. To me, M(n) implies n multiplications, not a single n-bit multiplication. Thus this abuse-of-Landau notation appears to imply n*O(n * log n * 2^(log* n)) = O(n^2 * log n * 2^(log* n)), which is clearly wrong.—optikos (talk) 01:35, 3 August 2008 (UTC)[reply]
M(n) is not in any way a Landau (or Landau-like) notation, it is an ordinary function of n. The usage in our article is completely standard.—Emil J. 17:36, 12 November 2012 (UTC)[reply]
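
As an illustration of the point being debated, a sketch (mine, assuming exact rational arithmetic) of how Newton's method computes a reciprocal with a constant number of full-precision multiplications per iteration:

 from fractions import Fraction

 def reciprocal_newton(d, iterations=10):
     # Newton iteration x <- x * (2 - d*x) converges quadratically to 1/d:
     # each pass squares the relative error, using only two multiplications
     # and one subtraction at the current precision.
     x = Fraction(1, 2 ** d.bit_length())   # rough initial guess below 1/d
     for _ in range(iterations):
         x = x * (2 - d * x)
     return x

 # Ten passes of quadratic convergence give well over 1000 correct bits:
 assert abs(reciprocal_newton(7) - Fraction(1, 7)) < Fraction(1, 2 ** 1000)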

Matrix Inversion

Can someone add 'Matrix Solution'? I.e., finding the solution of Ax = b without explicitly finding A^-1. This is faster than using matrix inversion and can be done using LU decomposition, I think. —Preceding unsigned comment added by 93.97.48.217 (talk) 11:23, 22 October 2008 (UTC)[reply]

Actually, inverting with Strassen's algorithm takes less time than an LU decomposition. I'm not sure what methods are used, practically speaking, to find the solution for a matrix equation with large matrices. LU decomposition is certainly more stable, but if the matrices are large enough I imagine faster methods are desirable. (If a 100 x 100 matrix equation takes the same amount of time with Strassen's algorithm and LU decomposition, I'd expect a 1,000,000 x 1,000,000 matrix equation would be 5-6 times faster with Strassen's.)
I don't know if there's a fast (subcubic) algorithm that avoids the numerical instability of computing a literal inverse.
CRGreathouse (t | c) 15:06, 18 January 2009 (UTC)[reply]

Are there any citations for the divide-and-conquer algorithm? How to divide the matrix into four blocks so that two of them are invertible is not so obvious. Liuhb86 (talk) 22:28, 6 January 2013 (UTC)[reply]

Did you follow the link to Invertible matrix#Blockwise inversion?—Emil J. 15:03, 7 January 2013 (UTC)[reply]
Yes, the description there is exactly the same, without citation. I also referenced Block matrix pseudoinverse. All these articles take the invertibility of A and D as prerequisites. Liuhb86 (talk) 06:33, 8 January 2013 (UTC)[reply]
I found a reference and added it to the article. The trick is that if M is a nonsingular matrix, then M^-1 = (M^T M)^-1 M^T, and M^T M is a symmetric positive definite matrix, which guarantees that the two subblocks needed to be inverted by the algorithm are indeed invertible.—Emil J. 14:10, 8 January 2013 (UTC)[reply]
Yes, that's right. Thanks a lot. I should have read CLRS more carefully. Liuhb86 (talk) 07:16, 9 January 2013 (UTC)[reply]
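
A small numeric check of the trick described above (my own sketch, using numpy; a random Gaussian matrix is nonsingular with probability 1):

 import numpy as np

 rng = np.random.default_rng(0)
 M = rng.standard_normal((4, 4))     # generic, hence nonsingular

 # M^-1 = (M^T M)^-1 M^T, and M^T M is symmetric positive definite,
 # so the subblocks the recursive algorithm must invert are invertible.
 gram = M.T @ M
 M_inv = np.linalg.inv(gram) @ M.T

 assert np.allclose(M_inv @ M, np.eye(4))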

k-way Toom–Cook multiplication ε

User EmilJ added a value for ε in the k-way Toom–Cook multiplication: ε = log(2k − 1)/log k. I suppose the correct form of the big-O bound with that value of ε is O(n^ε). I will edit the page. Kaluppollo (talk) 14:49, 5 November 2009 (UTC)[reply]

Ah yes, you're right, I forgot to remove the "1+". — Emil J. 15:03, 5 November 2009 (UTC)[reply]

Matrix operations

These are notes for a section of the article.

Determinant: Let M be an n×n real matrix with all entries of absolute value at most r. Then det M ≤ A003433(n) · r^n ≤ n^(n/2) · r^n. Thus if the entries of M have at most b bits, the determinant is at most nb + 0.5n lg n bits long.

CRGreathouse (t | c) 02:51, 18 April 2010 (UTC)[reply]
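
A brute-force check of the n = 3 case (my own sketch; A003433(3) = 4 is the maximal determinant of a 3×3 ±1 matrix, below the Hadamard bound 3^(3/2) ≈ 5.196):

 from itertools import product

 def det3(m):
     # Cofactor expansion of a 3x3 determinant.
     (a, b, c), (d, e, f), (g, h, i) = m
     return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

 # Largest determinant over all 2^9 matrices with entries in {-1, +1}.
 best = max(det3((row[0:3], row[3:6], row[6:9]))
            for row in product((-1, 1), repeat=9))
 print(best)   # 4, i.e. A003433(3)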

Compare.

Missing from this article is the compare operation. 93.95.251.162 (talk) 12:39, 10 August 2010 (UTC) Martin.[reply]

It depends on the format of the numbers and the distribution of values. Worst case for any reasonable implementation is O(log n). Average case for reasonable distributions is O(1). CRGreathouse (t | c) 15:23, 28 April 2011 (UTC)[reply]
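
A sketch of the point (mine): comparing digit arrays most-significant-digit first touches every digit only in the worst case:

 def compare(a, b):
     # a, b: equal-length digit lists, most significant digit first.
     # Worst case scans all digits (linear in the length); for random
     # inputs the first pair already differs, giving O(1) on average.
     for x, y in zip(a, b):
         if x != y:
             return -1 if x < y else 1
     return 0

 print(compare([3, 1, 4], [3, 1, 5]))   # -1, decided only at the last digit
 print(compare([9, 0, 0], [1, 2, 3]))   # +1, decided at the first digit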

Some requests for clarification

The term "schoolbook multiplication" (etc.) confused me until I remembered that I have always known the same as "elementary-school multiplication". A quick web search came up with other terminology as well. So a clarification would be welcome. I also need clarification for the term "fixed-size number". I have no idea what is meant.

Otherwise, I would like to thank the contributors, it is a valuable article. AmirOnWiki (talk) 16:20, 20 September 2011 (UTC)[reply]

Schoolbook multiplication and elementary-school multiplication refer to the same thing.
"Fixed-size number" simply means a non-variable-size number, e.g. any number that is a specific number of digits in some base.
Lowellian (reply) 22:18, 11 May 2013 (UTC)[reply]

complexity of sqrt

It is written that complexity(sqrt) = M(n), but isn't there another factor to be added, since the number of iterations of Newton's/Babylonian method is ~ log(n) and each iteration involves at least a division? — MFH:Talk 19:28, 11 November 2012 (UTC)[reply]

No. The point is that Newton iteration is quadratically convergent, i.e., each iteration roughly doubles the number of correct digits. You only do the computation with accuracy matching (an upper bound on) the number of significant digits, thus the last iteration is done with n digits and takes time O(M(n)), the last but one with ~ n/2 digits using time O(M(n/2)), the one before that with ~ n/4 digits using time O(M(n/4)), and so on. Since M(n/2^k) ≤ M(n)/2^k, the total running time is bounded by O(M(n)(1 + 1/2 + 1/4 + ...)) = O(M(n)). (The same trick is also used to get the complexity of division one line above.)—Emil J. 15:03, 12 November 2012 (UTC)[reply]
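
For concreteness, a plain integer-square-root sketch (my illustration; note this simple version runs every Newton step at full precision, so achieving the O(M(n)) bound additionally requires the reduced-precision scheme described above):

 def isqrt(n):
     # Newton's method for floor(sqrt(n)); quadratic convergence means
     # only about log(n) passes are needed from this initial guess.
     if n < 2:
         return n
     x = 1 << (n.bit_length() + 1) // 2   # power of two >= sqrt(n)
     while True:
         y = (x + n // x) // 2            # one Newton step
         if y >= x:
             return x
         x = y

 assert isqrt(10**40) == 10**20
 assert isqrt(2**100 - 1) == 2**50 - 1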

Modular exponentiation Input

Modular exponentiation, a^b mod c, takes three arguments (i.e. a, b and c). The table states "Two n-digit numbers and a k-bit exponent". This indicates that the base and the modulus need to be of the same bit-size. It may be reasonable to say that we only need the modulus to know the bound of the base. But in that case, why input them both?

A more accurate complexity analysis would need three separate and independent parameters. As with multiplication, we can't assume that the modulo operation can be done in constant time on arbitrarily large numbers. — Preceding unsigned comment added by Per.Zut (talkcontribs) 12:13, 5 August 2013 (UTC)[reply]

Complexity column of Arithmetic functions table

The complexity column has different meanings for different operations/rows. For example, O(n^2) for addition means the number of additions, and O(n^1.585) for Karatsuba means O(n^1.585) multiplications, but some additions are also required, so the total number of operations will be O(n^2). — Preceding unsigned comment added by Ois1977 (talkcontribs) 09:20, 7 August 2013 (UTC)[reply]

No. Generally, all the bounds are in terms of bit operations. I don't quite follow your examples, as there is no "O(n^2) for Addition" in the table, and as for Karatsuba, reading it as "one multiplication takes O(n^1.585) multiplications" is so absurd that you probably mean something else. Anyway, O(n^1.585) is the grand total of bit operations for Karatsuba; there is no hidden O(n^2) anywhere, that's the whole point of the algorithm.—Emil J. 12:20, 7 August 2013 (UTC)[reply]
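
To make "grand total of bit operations" concrete, here is a minimal Karatsuba sketch (my own illustration, not from the article):

 def karatsuba(x, y):
     # Three half-size products instead of four give the
     # O(n^log2(3)) ~ O(n^1.585) bound discussed above; the extra
     # additions and shifts cost O(n) per level and do not change
     # the exponent.
     if x < 256 or y < 256:
         return x * y                         # small base case
     m = max(x.bit_length(), y.bit_length()) // 2
     xh, xl = x >> m, x & ((1 << m) - 1)
     yh, yl = y >> m, y & ((1 << m) - 1)
     a = karatsuba(xh, yh)                    # high halves
     b = karatsuba(xl, yl)                    # low halves
     c = karatsuba(xh + xl, yh + yl) - a - b  # cross terms
     return (a << (2 * m)) + (c << m) + b

 n1 = 12345678901234567890
 n2 = 98765432109876543210
 assert karatsuba(n1, n2) == n1 * n2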

Unclear notation

What does M(n) mean? Blackbombchu (talk) 17:13, 10 April 2014 (UTC)[reply]

From the article - "Note: Due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm." JumpDiscont (talk) 01:25, 3 June 2014 (UTC)[reply]

Complexity measure

This article refers to sequential complexity (or, equivalently, multi-tape TM), and this is appropriate for serial software. However, the appropriate complexity measure for hardware development and for parallel software development is quite different; see https://en.wikipedia.org/wiki/NC_%28complexity%29 for parallel RAM complexity.

This objection to the choice of complexity measure is important because of significant interest in arithmetic operations in hardware, which typically maximize bit-level parallelism. And from a hardware developer's point of view, all of the operations' complexities listed in the article are wrong. For example, it's well known that addition/subtraction of n-bit integers can be performed in O(log(n)) time (although these adders' chip areas tend to be larger), and many architectures implement O(sqrt(n)) adders that are fairly small and practical. Refer to http://www.amazon.com/Digital-Arithmetic-Kaufmann-Computer-Architecture/dp/1558607986 for details.

The table in this article would be incomplete without an additional column, "Parallel Complexity", that would list NC-style parallel RAM complexity of arithmetic operations. Mishapom (talk) 13:46, 28 October 2014 (UTC)[reply]

Complexity of Euclidean Algorithm

Shouldn't the complexity of the Euclidean algorithm be O(n^3) rather than O(n^2), where the input numbers have n digits? On the wiki page of the Euclidean algorithm it says that the algorithm needs O(n) divisions. A schoolbook division would take O(n^2) elementary steps, and therefore the complexity of the whole algorithm should read O(n · n^2) = O(n^3) to my mind. Even with fast division algorithms the complexity has to be greater than O(n^2). As I'm not sure about this, I wanted to ask before I change something. — Preceding unsigned comment added by 128.176.12.215 (talk) 16:01, 3 August 2016 (UTC)[reply]
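
For reference, the algorithm in question, with the standard amortization argument in comments (my own sketch; the point is that the quotient lengths sum to O(n), which is why the total stays O(n^2) rather than O(n^3)):

 def euclid(a, b):
     # Each division step costs about O(n * len(quotient)) bit operations
     # with schoolbook division, and the quotient lengths over the whole
     # run sum to O(n), so the total is O(n^2), not O(n) times O(n^2).
     while b:
         a, b = b, a % b
     return a

 # gcd(2^a - 1, 2^b - 1) = 2^gcd(a, b) - 1:
 assert euclid(2**400 - 1, 2**240 - 1) == 2**80 - 1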

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Computational complexity of mathematical operations. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 20:24, 11 August 2017 (UTC)[reply]

Complexity of exponentiation by squaring

The "Arithmetic Functions" table states that exponentiation by squaring has complexity where is the number of bits in the exponent. This complexity disagrees with the analysis in Exponentiation_by_squaring#Computational_complexity, where it shown that computing needs a runtime that is at least linear in , the value of the exponent, which is exponential in , the number of bits in the exponent. Any thoughts on how to clear up this confusion? Bulkroosh (talk) 17:03, 13 March 2019 (UTC)[reply]

In the table of this article, it is the modular exponentiation that is considered. This means that the size of the integers does not increase during the computation, and thus that each multiplication takes time O(M(n)). On the other hand, in the linked article, it is the integer exponentiation that is considered. This means that the size of the integer doubles after each squaring; it is thus the size of the result, which is exponential in k, that drives the cost there. D.Lazard (talk) 17:50, 13 March 2019 (UTC)[reply]
Would it make sense to add an entry in the table for integer exponentiation? Bulkroosh (talk) 18:59, 13 March 2019 (UTC)[reply]
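
A two-line contrast of the two notions (my illustration, using Python's built-in pow):

 # Modular exponentiation: operands stay the size of the modulus,
 # so each of the ~k squarings costs only O(M(n)).
 print(pow(3, 2**20, 2**61 - 1))

 # Integer exponentiation: the result itself has 2**20 * log2(3),
 # i.e. roughly 1.66 million, bits, so even writing it down is
 # exponential in the bit-length k = 20 of the exponent.
 print((3 ** 2**20).bit_length())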

Multiplication in n log n

https://hal.archives-ouvertes.fr/hal-02070778 is a theoretically interesting article, but it's not suitable for Wikipedia until it's been peer reviewed and published.--Prosfilaes (talk) 04:18, 4 May 2019 (UTC)[reply]

According to this, and other sources, n*log(n) has been achieved. Bubba73 You talkin' to me? 23:14, 18 October 2019 (UTC)[reply]
This is probably true, but for the moment there is no source that validates the result; this means that, for the moment, no competent expert has read the article and attested that he cannot find any mistake in it. Such an expert would normally be a reviewer of an established scientific journal. The sources that you mention are all news media, and do not provide any validation. The article you cite is clear, as the title says "may have found" and not "has found". So, this is not yet suitable for Wikipedia, see WP:CRYSTAL. D.Lazard (talk) 08:33, 19 October 2019 (UTC)[reply]
OK. Bubba73 You talkin' to me? 14:40, 19 October 2019 (UTC)[reply]
If your main concern is the hedge in the Wired article, here's an article with no such hedge (and there are several others). If you're looking for an expert, here's one. I'm no wikilawyer, but my impression is not that peer review is usually the sole standard for a source to be considered credible; certainly nothing in WP:CRYSTAL seems to me like it applies here. Is there a more relevant Wikipedia policy that you meant to point to?

On the other hand, the fastest known multiplication algorithm is a big thing to omit from this article. Would it be appropriate to include a note in the article that the n log n bound has been claimed, but not yet verified by peer review? 100.15.149.172 (talk) 03:49, 21 October 2019 (UTC)[reply]

https://dl.acm.org/doi/pdf/10.1145/3371387 is a review in the Communications of the ACM of the article in question, with quite extensive praise and analysis. As the main communication medium of the ACM, this should be more than adequate peer analysis. — Preceding unsigned comment added by Caleb.e.allen (talkcontribs) 21:08, 14 January 2020 (UTC)[reply]

Shor's algorithm

This cannot be realized on a Turing machine in polynomial time. — Preceding unsigned comment added by 2001:67C:10EC:5747:8000:0:0:710 (talk) 12:14, 6 July 2019 (UTC)[reply]

Good catch. This is the complexity over a quantum computer. I have fixed it. D.Lazard (talk) 15:42, 6 July 2019 (UTC)[reply]
Sorry about that; my mistake. Forgot to note that it was a quantum computer. Thanks for editing it. Jamgoodman (talk) 23:41, 3 August 2019 (UTC)[reply]