
Talk:System of polynomial equations


Untitled

This article is now rather complete. Thus comments, improvements, ... are welcome. D.Lazard (talk) 17:22, 11 June 2010 (UTC)

It would be very good if in this article or elsewhere in Wikipedia somebody described the algorithms mentioned in the article (for example, RUR). --D.M. from Ukraine (talk) 13:30, 14 June 2010 (UTC)
I totally agree with you. In fact, when writing this article I was faced with a difficult problem: solving polynomial systems is a basic problem which should be described in Wikipedia. But this problem involves many auxiliary algorithms and mathematical theories, which are poorly described or not described at all in Wikipedia. Moreover, the related subjects, like general systems of equations, are also not correctly described (for example, the article simultaneous equations is restricted to secondary-school level and does not describe any modern solving method, even in the linear case). Another example: the article root finding does not mention Uspensky's method, which is presently the most efficient method to find the real roots of univariate polynomials (since release 10, this is the algorithm used by Maple's function fsolve). To solve this difficulty, I have chosen to restrict the article to a panorama of the main methods and their domains of usage, leaving all technical details either to other Wikipedia articles, if possible, or to the original papers. It is clear that a lot of work is still needed to make Wikipedia up to date on this kind of subject. --D.Lazard (talk) 15:39, 14 June 2010 (UTC)
Except that there is no "Uspensky method", at least according to Akritas, but a Vincent method (1834) that got a convergence proof from Collins/Akritas. The original Vincent method is a continued-fraction development of the real roots as a preparation for Newton's method. Uspensky's method is a complicated kind of bisection method, and I would expect that the bisection-exclusion algorithm of Yakoubsohn would perform better than that. But to return to the main point, it remains true that little of this is represented on WP. --LutzL (talk) 11:30, 17 June 2010 (UTC) -- As an addition, the paper of Rouillier and Zimmermann is no improvement, since it also uses the Uspensky-bisection interpretation of the algorithm. --LutzL (talk) 11:36, 17 June 2010 (UTC)
Here is a paragraph from the introduction of the Collins/Akritas paper: The new algorithm to be described in this paper is dedicated to J. V. Uspensky, to whose book, [6], its discovery is directly attributable. Our algorithm is a modification of one which Uspensky (in 1948) claims is most efficient. We quote from page 127 of his book: "The theorem of Rolle and the rule of signs, though they may often help in separating the roots of an equation, do not provide by themselves a complete and exhaustive solution of this important problem. The application of Rolle's theorem for this purpose requires a knowledge of the real roots of the derivative, which is, for the most part, lacking. The rule of signs is a rather weak proposition and, applied to an equation the nature of whose roots is not known, does not give the exact number of positive (or negative) roots except when the number of variations is zero or one. But exactly these two particular cases, when combined with a remarkable theorem published by Vincent in 1836 and hinted at earlier by Fourier, supply the most efficient method, not only to determine the exact number of positive and negative roots but also to effect their separation, in case the proposed equation has no multiple real roots."
It follows that the name "Uspensky's algorithm" was given by Collins and Akritas to their own new algorithm. Thus "Uspensky's or Vincent's method" in the article should (and will) be replaced by "Uspensky's algorithm of Collins and Akritas". --D.Lazard (talk) 14:31, 17 June 2010 (UTC)
Rouillier and Zimmermann have improved this algorithm in two respects: 1/ The transformations (changes of variable) of the polynomials needed by the algorithm are replaced by substitutions of the form x → x+1, x → x/2 or x → 2x, which are much faster. 2/ During the algorithm one needs to store only two polynomials, the initial and the current one, which avoids saturating the memory when solving polynomials of high degree. --D.Lazard (talk) 14:31, 17 June 2010 (UTC)
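(For illustration, here is a minimal, unoptimized Python sketch of this dyadic bisection, using only the x → x+1 and x → x/2 substitutions together with Descartes' rule of signs; it assumes a squarefree input and, unlike the Rouillier-Zimmermann implementation, keeps all intermediate polynomials, so it does not show the memory-saving point 2/:)

```python
from fractions import Fraction

def sign_variations(c):
    """Number of sign changes in a coefficient list (zeros skipped)."""
    s = [x for x in c if x != 0]
    return sum(1 for a, b in zip(s, s[1:]) if (a < 0) != (b < 0))

def shift1(c):
    """p(x) -> p(x + 1); coefficients in increasing degree, integer Horner."""
    c = list(c)
    for i in range(len(c) - 1):
        for j in range(len(c) - 2, i - 1, -1):
            c[j] += c[j + 1]
    return c

def half(c):
    """p(x) -> 2^deg * p(x / 2), keeping integer coefficients."""
    n = len(c) - 1
    return [a * 2 ** (n - k) for k, a in enumerate(c)]

def roots_in_01(c):
    """Descartes bound on the roots in (0, 1), via x -> 1/(1 + x)."""
    return sign_variations(shift1(list(reversed(c))))

def isolate(c, lo=Fraction(0), hi=Fraction(1)):
    """Yield disjoint dyadic subintervals of (lo, hi), one root in each."""
    v = roots_in_01(c)
    if v == 0:
        return
    if v == 1:
        yield (lo, hi)
        return
    mid = (lo + hi) / 2
    left = half(c)                              # p(x/2) maps (0, 1/2) to (0, 1)
    yield from isolate(left, lo, mid)
    yield from isolate(shift1(left), mid, hi)   # p((x+1)/2) maps (1/2, 1)
    # (a root landing exactly on a dyadic midpoint is ignored in this sketch)

# (x - 1/4)(x - 3/4), scaled to integer coefficients: 16x^2 - 16x + 3
print(list(isolate([3, -16, 16])))   # [(0, 1/2), (1/2, 1)]
```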
I have never heard of a competitive implementation of Yakoubsohn's algorithm. On the other hand, the Maple computation fsolve(randpoly(x,coeffs=rand(-10000..10000), dense,degree=2000));, based on the Rouillier/Zimmermann improvement, outputs the result in less than 15 seconds of elapsed time on a laptop with an Intel Core2 Duo CPU. I do not know any other root-finder with similar performance. --D.Lazard (talk) 14:31, 17 June 2010 (UTC)
It's been some time since I read the papers, and perhaps I am getting the history wrong. From what I understood, Uspensky as well as Rouillier-Zimmermann enclose the real roots in dyadic intervals, whereas the original Vincent method as well as the "propaganda" papers of Akritas compute the continued-fraction approximation, which has a faster rate of convergence. The main difference to Yakoubsohn would be the method used to determine the root radii; he applies some Dandelin-Gräffe iterations to get stronger bounds. --LutzL (talk) 06:28, 18 June 2010 (UTC)

Wu's Elimination Method

Would it be useful to give a link in this article to the Wikipedia article http://en.wikipedia.org/wiki/Wu%27s_method_of_characteristic_set ? (I don't know much about the topic, but I notice that papers about "Wu's Elimination Method" appear in searches about systems of polynomial equations.) 216.223.227.93 (talk) 16:50, 25 January 2012 (UTC)

You are right, Wu's method of characteristic set is strongly related to this subject. However, the output of this algorithm is a set of triangular sets which are not necessarily regular chains. It follows that Wu's algorithm is much less efficient than the algorithms computing regular chains (note [2] of the article) and that the output is less convenient (some triangular sets in the output may have an empty set of zeros). Thus Wu's method is, today, interesting mainly from a historical point of view: it has been at the origin of a lot of research which has led to the notion of regular chains. A section "History" would be useful, but most of the historically important methods which should appear in it do not have their own Wiki pages (for example Macaulay's U-resultant). Thus, for the moment, the best is to add Wu's method to a section "See also". D.Lazard (talk) 18:16, 25 January 2012 (UTC)

Two suggestions

I have two suggestions for the article:

1. Maybe the first section, "Examples and extensions", should be moved to later in the article. I think the opening sections should be "Basic properties and definitions" and "What is solving?"

2. I would like to see a chart of the possibilities for the number of solutions. The rows would be "Fewer independent equations than variables", "Same number of independent equations as variables", and "More independent equations than variables". The columns would be "No solution", "One solution", "More than one but finite number of solutions", and "Infinitely many solutions". The cell entries would be "Possible" and "Impossible". 208.50.124.65 (talk) 13:12, 16 June 2014 (UTC)

For the first suggestion, I could agree. However, when I wrote it, I intended to extend it with more elementary examples. As the section presently stands, I agree with the suggestion. However, with several elementary examples, it could be better to leave it where it is.
I do not understand the second suggestion well: the content of such a chart would be exactly the same as the content of the section "Basic properties and definitions". However, I do not see how to organize the cases in a two-dimensional chart. Besides the fact that "independent equations" is not a clear notion in this case (removing an equation that is algebraically dependent on the others may increase the number of solutions), the chart would not be useful. In fact "Possible", "Impossible" (or better "inconsistent" or "consistent") and "finite number of solutions" (that is, "zero-dimensional") are almost as difficult to decide as solving the system. Nothing may be deduced from the number of equations and variables beyond what is already in the section. D.Lazard (talk) 14:12, 16 June 2014 (UTC)
I don't think the article answers this: If the system is (a) exactly determined, or (b) overdetermined, could there be infinitely many solutions? Clearly the answer is yes if all the equations are the same as each other, but what if none is implied by the others? 208.50.124.65 (talk) 21:47, 16 June 2014 (UTC)
It is exactly what I meant when suggesting more examples. Examples are needed, for example, of overdetermined systems such that none of the equations is a consequence of the others (that is, if one removes any equation, one gets more solutions) and which have infinitely many solutions. A simple example of an overdetermined system of 3 equations in 2 unknowns, such that one may not remove any equation without changing the set of solutions, is given below (there are three solutions, and if one removes any equation, one gets infinitely many solutions). D.Lazard (talk) 23:02, 16 June 2014 (UTC)
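(One such system, as an illustration; any system with the stated properties would do:)

$$xy = 0, \qquad x(x+y-1) = 0, \qquad y(x+y-1) = 0.$$

Its solutions are exactly the three points (0, 0), (1, 0) and (0, 1), and removing any one equation leaves a whole line of solutions: x = 0, y = 0 or x + y = 1, respectively.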

Rational vs. real

Would be good to explain why rational is used in the definition instead of real. All the best: Rich Farmbrough, 06:31, 18 April 2017 (UTC).

"Rational" is not used in the definition. It is said "Usually, the field k is either the field of rational numbers or a finite field, although most of the theory applies to any field. Thus, the true question is "why authors usually work with rational coefficients?". I can try to answer here, but the lack of source answering this question makes difficult to answer to the question in the article, at least in the lead.
The main reason is that it is impossible to compute with real numbers, as they do not have a finite representation. You should not forget that floating-point numbers, which are often called "reals", are in fact rational numbers that are used for approximating some real number (generally not exactly known).
A second reason is that the problem is naturally highly unstable, and this is independent of the algorithm used (see Wilkinson's polynomial for a very simple case). So, when the input has approximate coefficients, it is often better to approximate them by rational numbers, to proceed with exact computation on them, and, at the end, to round the rational numbers that have been obtained to the closest floating-point numbers. All of this with the hope that approximating the input coefficients does not change the nature and the number of the solutions.
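(This instability is easy to reproduce; here is a small numpy illustration with Wilkinson's polynomial, where the size of the perturbation is my own choice:)

```python
import numpy as np

# Wilkinson's polynomial: (x - 1)(x - 2)...(x - 20), with exact roots 1..20.
coeffs = np.poly(np.arange(1, 21))

# Perturb the x^19 coefficient by a relative 1e-7.
perturbed = coeffs.copy()
perturbed[1] *= 1 + 1e-7

print(np.sort(np.roots(coeffs).real))  # close to 1, 2, ..., 20
print(np.roots(perturbed))             # several roots now far off or complex
```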
One may compute exactly with algebraic numbers, but this is too slow to be useful, and it is more efficient to add (as further equations of the system) the minimal polynomials of the input algebraic numbers, in order to reduce the equations to equations with rational coefficients (see section "Coefficients in a number field or in a finite field with non-prime order"). D.Lazard (talk) 14:33, 18 April 2017 (UTC)
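(A toy sympy illustration of this reduction, with an equation of my own choosing: to solve x² = √2, introduce a new indeterminate a standing for √2, together with its minimal polynomial a² − 2; all computations then stay over the rationals:)

```python
from sympy import symbols, groebner

x, a = symbols('x a')

# Original equation x**2 - sqrt(2) = 0: replace sqrt(2) by the new
# indeterminate a, constrained by its minimal polynomial a**2 - 2.
system = [x**2 - a, a**2 - 2]

# Lex Groebner basis: the whole computation stays in Q[a, x];
# eliminating a recovers the rational equation x**4 - 2 = 0.
print(groebner(system, a, x, order='lex'))
```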

Nonlinear system of equations

Why does nonlinear system of equations redirect to this article instead of nonlinear system? Jarble (talk) 03:08, 23 July 2017 (UTC)

Good point. Moreover, System of nonlinear equations redirects to Nonlinear system. As this article is a main article for the second section of Nonlinear system, I think that nobody will object to changing the target of the redirect. Please, be bold and do it. D.Lazard (talk) 07:26, 23 July 2017 (UTC)

Positive-dimensional system of polynomial equations

System of polynomial equations#What is solving? says

If the system is positive-dimensional, it has infinitely many solutions. It is thus not possible to enumerate them. It follows that, in this case, solving may only mean "finding a description of the solutions from which the relevant properties of the solutions are easy to extract".

If the multivariate polynomials in the equations share a common factor, then equating that factor to 0 gives a characterization of solutions of the system. But what if the polynomials in the system are all irreducible – can the system still be positive-dimensional? If so, what is an example? Loraof (talk) 17:37, 1 November 2017 (UTC)

In any case, the solutions of a polynomial system form an algebraic set. Conversely, and by definition, every algebraic set is the set of solutions of a polynomial system. A polynomial system with more indeterminates than equations is positive-dimensional if (and only if) it has at least one solution (by Hilbert's Nullstellensatz, there is no solution if and only if 1 is in the ideal generated by the polynomials). Thus, for having an example, it suffices to take k polynomials in n indeterminates, without constant terms and with k < n. If you drop the condition on the constant terms, and you choose the polynomials at random, then, over an infinite field, you get, with probability one, a positive-dimensional system such that all polynomials are irreducible.
Even simpler: consider linear polynomials and the properties of linear systems. D.Lazard (talk) 18:27, 1 November 2017 (UTC)

Thanks. Actually I was thinking of systems with n = k. Are these always zero-dimensional if the polynomials don't share a factor? Loraof (talk) 19:57, 1 November 2017 (UTC)

A simple example is the following: take, as variables, the entries of a generic n×n matrix, and, as equations, the vanishing of its minors of order n – 1. The solutions of this system are the matrices of rank at most n – 2, and the system is positive-dimensional as soon as n > 2. D.Lazard (talk) 22:22, 1 November 2017 (UTC)
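(For n = 3 this is easy to check with sympy; the code and names below are just my illustration:)

```python
from sympy import Matrix, symbols

n = 3
M = Matrix(n, n, symbols(f'm0:{n*n}'))

# The (n-1) x (n-1) minors: n*n = 9 equations in the 9 matrix entries.
equations = [M.minor(i, j) for i in range(n) for j in range(n)]

# Any rank-1 matrix (rank <= n - 2 here) satisfies all of them,
# and such matrices form a positive-dimensional family.
u, v = Matrix([1, 2, 3]), Matrix([4, 5, 6])
rank1 = u * v.T
subs = dict(zip(M, rank1))
print([eq.subs(subs) for eq in equations])   # all zeros
```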

Complexity

Can the question of whether or not a given system is inconsistent be answered in polynomial time in the number of variables and in the number of equations? And how about finding a solution in the zero-dimensional case – can it be done in polynomial time? I think these pieces of information would be a good addition to the article. Loraof (talk) 15:53, 5 November 2017 (UTC)

I agree that this article deserves to have a complexity section. I do not remember why I did not write it when I created this article; probably because everything that can be said on the complexity of the zero-dimensional case involves results and theories that are far more technical than this article. Here is a summary of the main results:
  • The complexity, in this area, is generally expressed in terms of the maximal degree d of the input equations and the number n of the indeterminates. The role of the other parameters (the number of equations and the size of the coefficients) is minor and disappears in the constants involved in the big O notation.
  • The best complexity that can be hoped for is a complexity that is polynomial in d^n. If all input polynomials have the same degree, this means a complexity that is polynomial in the expected number of solutions (the expected size of the output). This is a consequence of Bézout's theorem.
  • This complexity may be obtained for zero-dimensional systems that are also zero-dimensional at infinity, and also for ideals that are generated by an almost regular sequence. The only implemented algorithm that has this complexity in these cases is Faugère's F5 algorithm, which still has no publicly available implementation.
  • The test of inconsistency is provided by the effective Nullstellensatz, and has a complexity of d^(O(n)).
  • The complexity of polynomial system solving is strongly related to the complexities of the ideal membership problem and of Gröbner basis computation, as well as to the Castelnuovo–Mumford regularity (this is a rare example of a known theorem that has never been completely published). All have a worst-case complexity of d^(2^(O(n))), but the worst case is extremely rare (see also Algebraic geometry#Gröbner basis). Moreover, the worst case for zero-dimensional systems is that of the effective Nullstellensatz.
In general, the problems coming from applications have a much better complexity, and this is why many of them can be effectively solved. This is a strange experimental result, which is difficult to explain. Probably, application problems have hidden symmetries which make the computation easier.
As you can see, the complexity theory of polynomial computation is a wide subject, and writing accurately about it is difficult.
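(To make the Bézout bound d^n concrete, here is a small sympy computation with a system of my own choosing; for two equations of degree 3 in two variables, the Bézout bound is 3² = 9:)

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Two equations of degree 3 in two variables: Bezout bound 3**2 = 9.
f1 = x**3 + y**2 - 1
f2 = x*y + y**3 - 2
G = groebner([f1, f2], x, y, order='lex')

# For a lex basis of a zero-dimensional ideal, the last element is
# univariate in y; its degree (9 is expected here) counts the solutions
# with multiplicity when the system is generic enough.
g = list(G)[-1]
print(g.as_poly(y).degree())
```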

"General numerical algorithms"

This section mentions some "general numerical algorithms" that can be used to solve simultaneous equations, but it does not mention any specific algorithm for this purpose. Which "general numerical algorithms" does the section refer to, specifically? Jarble (talk) 20:04, 7 November 2017 (UTC)

This section was linked to Simultaneous equations, but the target article has changed since the introduction of this section. I have thus changed this link to System of nonlinear equations. The methods that are not specific to polynomials should be described there, and that target article deserves to be expanded in this direction. On the other hand, two of these general methods are mentioned in this section (optimization and multivariate Newton). D.Lazard (talk) 21:06, 7 November 2017 (UTC)
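(For concreteness, here is a minimal multivariate Newton iteration in Python/numpy; the example system, starting point and tolerance are my own choices, not taken from the article:)

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=50):
    """Solve F(x) = 0 by Newton's method with explicit Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        dx = np.linalg.solve(J(x), -F(x))  # solve J(x) dx = -F(x)
        x += dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("no convergence")

# Example system: x^2 + y^2 - 4 = 0 and x*y - 1 = 0.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0]*v[1] - 1])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(newton(F, J, [2.0, 0.3]))   # one of the four real solutions
```

As with any local method, convergence depends on the starting point: each solution has its own basin of attraction, and a poor initial guess may diverge or land on a different root.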

Expansion of article's contents

A major rewrite [1] of the article's introduction was reverted because the introduction had little to do with the article's contents.

I think the article's current contents reflect an important but limited portion of what should be stated in an article entitled "System of polynomial equations." Some of my concerns are already mentioned in the talk page.

For instance, the basic problem of feasibility over various domains is of historical importance and finds recurring application in theoretical computer science. In cases where the ground domain is the integers or the real numbers, stating that feasibility is decided by a Nullstellensatz certificate is incorrect. With emphasis on the potential for solving over different domains, the second and third subsections of the "Examples" section might find a more natural home.

I also feel that there is more to say in the section "What is solving?" For instance, the notion of an "approximate zero" for a square system is a commonly accepted, mathematically rigorous representation for numerical methods.

Lastly, I think the examples section of this article might be improved by providing some more basic examples as well as detailing notable applications of polynomial system solving (say, kinematic synthesis, cryptography, or computer vision). Tim —Preceding undated comment added 18:13, 14 October 2018 (UTC)

I agree that the section "Examples and extensions" should be renamed. "Extensions" would be a good title, but one should add some introduction saying that these extensions are examples of apparently more difficult problems that may be reduced to searching for all solutions (in an algebraically closed field) of a system with coefficients in a prime field, and that this is important because effective computations are much easier in a prime field.
About basic examples: it is very difficult to find significant examples, as, typically, non-trivial examples cannot be solved by hand. Do you have any ideas?
About approximate zeros, there is no commonly accepted (by specialists) definition: should it be a solution that is close to the exact solution, or a solution for which the values of the equations are small? For nonlinear systems these are completely different. Moreover, there is already a section about numerical solving.
I do not know what you mean by "feasibility over various domains". Either you mean the domain of the coefficients: I do not know of any theory of polynomial systems over a ring that is not an integral domain, and in this case, w.l.o.g., one may consider that the coefficients belong to the field of fractions of the integral domain. Or you mean the domain where the solutions are searched: if they are searched in the integers or in the rationals, one has a system of Diophantine equations, a much more difficult problem, for which a general algorithm cannot exist. If the solutions are searched in the reals, all the most efficient algorithms proceed by searching for complex solutions and then selecting the real ones.
I agree that the lead deserves to be expanded to explain some of the above comments, but your edits were more confusing than helpful. D.Lazard (talk) 20:45, 14 October 2018 (UTC)
First off, let me apologize if the edits were premature or confusing. I appreciate the feedback!
I agree that striking a good balance between nontrivial and demonstrative examples is tough. Some simple examples, however, could be used to demonstrate the basic terminology (overdetermined, nonsingular, ...). One application which already has its own article is the problem of recovering the essential matrix in computer vision.
Regarding approximate zeros, I would first call attention to numerous references related to certified numerical computation. For instance, the "alpha test" proposed by Smale in his work on the complexity of Bezout's theorem gives an effective criterion to decide if a rational approximation lies within a basin of quadratic convergence for Newton's method applied to a square system. Moreover, since certain software provide numerical approximations as output, I think some general mention of this under "What is solving" might be appropriate.
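(A sketch of the criterion, with the constant as I recall it from Blum, Cucker, Shub and Smale; it should be checked against the source before being cited: for a square system $f$ and a point $z$, define

$$\beta(f,z)=\bigl\|Df(z)^{-1}f(z)\bigr\|,\qquad \gamma(f,z)=\sup_{k\ge 2}\Bigl\|Df(z)^{-1}\tfrac{D^{k}f(z)}{k!}\Bigr\|^{\frac{1}{k-1}},\qquad \alpha(f,z)=\beta(f,z)\,\gamma(f,z).$$

If $\alpha(f,z)<\alpha_0=(13-3\sqrt{17})/4\approx 0.1577$, then $z$ is an approximate zero: Newton's method started at $z$ converges quadratically to an actual solution.)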
By "feasibility over various domains" I meant where solutions are searched. I don't consider Diophantine systems to be outside the scope of this article---a suitable reference to Hilbert's tenth problem may be of interest to a reader curious about polynomial systems. Perhaps more seriously, the problem of deciding feasibility over the reals cannot be accomplished by reduction to the complex case. This is important in applications such as non-negative matrix factorization.Tim talk 07:00, 15 October 2018 (UTC)[reply]
I have rewritten the lead to try to address your concerns and to clarify the scope of the article. In particular, I have added in the lead a paragraph about searching for solutions in a specific set, with a link to Diophantine equation. I remind you that an article must focus on what appears in the literature under its title. As I am unable to find (with Google Scholar) any article linking "integer solution" and "polynomial system", this means that the study of integer solutions belongs definitely to Diophantine equation rather than to this article.
I have also extended the definition of "certified approximate solution" to include Smale's definition of "certified". A reference would be useful; feel free to add it. D.Lazard (talk) 11:22, 10 November 2018 (UTC)

The example in the section headed "Trigonometric equations" is rather arrogant and unnecessarily unhelpful

There's an example in the section headed "Trigonometric equations" that surely should help the reader's comprehension, and yet I only understood it by recalling what I could about trig identities and working through on paper (over about 10 lines) that cos(3x) indeed does equal 4cos^3(x) - 3cos(x). The example is one that arrogantly assumes that the significant leap necessary to go from the initial equation to the second, equivalent one is almost obvious. Why muddle what is supposed to be an example of the point being made with the need on the part of the reader to use some non-obvious trig identities to get from cos(3x) to 4cos^3(x) - 3cos(x)?
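(For reference, the leap needs only the angle-addition formulas:

$$\cos 3x=\cos(2x+x)=\cos 2x\cos x-\sin 2x\sin x=(2\cos^{2}x-1)\cos x-2\sin^{2}x\cos x=4\cos^{3}x-3\cos x,$$

using $\sin^{2}x=1-\cos^{2}x$ in the last step.)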

I'm not familiar with how to write maths formulae in Wikipedia, otherwise I would either change the example to a simpler one, or at least include a note about how the leap from the first equation to the next one can be made using trig identities by the reader.

86.190.173.175 (talk) 12:56, 24 May 2019 (UTC)

I have edited the example. I hope this solves your concern. D.Lazard (talk) 15:35, 24 May 2019 (UTC)

Definition

By adding this section, I was trying to provide a simple, accessible introduction that was properly sourced. A lot of this article is aimed more at a professional mathematician than a general reader. I think it is reasonable to expect that the first section be readable by an undergraduate student in a mathematical field, or maybe even a high school student. Also, the content should be properly sourced. Some of the changes are improvements, but many undermine these goals.

  • A lot of jargon has been added, and some of it seems premature, like the extended discussion of rational coefficients, algebraically closed fields, and characteristic zero. And none of the new material has citations.
  • I don't think it is true that coefficients are generally supposed to be integers etc. Maybe in some areas of pure math, but definitely not in applications.
  • My purpose in adding the figure was to illustrate that solutions can have more than one dimension (and it doesn't hurt to have a pretty picture). If it needs a whole paragraph to describe it, maybe we should use a different figure. Any suggestions?
  • I don't see the point of removing my examples of fields. Not everyone is familiar with the concept, and a reader shouldn't have to keep jumping from this article to others for explanation. However, on reflection, it may be better to focus on polynomials over a particular field and make some simple points like solutions can have more than zero dimensions; there can be solutions at infinity; solutions can be degenerate, etc.

Polynomials are pretty well behaved over the complex numbers, so this would be a good starting point. How about we change the section title to something like Complex polynomials? RockMagnetist(talk) 20:31, 1 May 2020 (UTC)

About the sources, everything in this section can be sourced from the three books listed in the reference section, except for the example of the Barth surface, for which one can find sources in the linked article. So, sourcing does not require changing the phrasing.
A "simple and accessible introduction" is useful if it is mathematically correct and not confusing. Your version contains several wrong assertions, such as the assertion that a projective space is a field. Presenting the Barth surface as the solution set of a polynomial system is confusing, since this polynomial set is reduced to a single polynomial. Using unneeded terms that are jargon in this context (isolated point) is also confusing. Given here a vague definition of a polynomial is also confusing, as nothing in the article is accessible to a reader who does not know fields and multivariate polynomials.
I agree that the paragraph on the nature of the coefficients was misplaced here, and I have removed it.
It is a common misconception that one can compute with real or complex numbers. This misconception is especially problematic here, as solving polynomial systems is a problem that is highly unstable.
"Polynomials are pretty well behaved over the complex numbers": this is your personal opinion. It may be true in some contexts, but unfortunately not in this context. This is for this reason that it is fundamental to distinguish between the field of the coefficients and the field where the solutions live.
"but definitely not in applications". Can you provide a single example of a polynomial system coming from applications where the coefficients are not integers or in a finite field (or cannot be reduced to integers by the methods described in the next section)? Personally, I am working on this subject for more than 30 years, and I have never encountered a single one. (Note that the case of finite fields is important in cryptography)
The Barth surface and its singularities are a very good example, as they provide a visual hint of the difficulty of the problem and of its high computational complexity. In the encyclopedic presentation of a difficult theory, it is never a good choice to hide the difficulties. This example allows giving an idea of the difficulties without writing down any equation. This is great.
"solutions can have more than zero dimensions; there can be solutions at infinity; solutions can be degenerate, etc.": If you want to explain that, you need to explain the whole algebraic geometry: For the solutions at infinity, you need to introducce the projective space. By "degenerate solutions", I suppose that you mean either multiple solutions or singular solutions. For being correctly defined, both require some non elementary notions of algebraic geometry. D.Lazard (talk) 00:36, 2 May 2020 (UTC)[reply]
It seems you're right about polynomials generally having integer coefficients. It happens that in my own research (which involves modeling ferromagnetism), the coefficients are almost always real numbers, but I couldn't find any other examples. RockMagnetist(talk) 22:09, 4 May 2020 (UTC)
It's not enough to say that everything can be sourced from one of three books. That puts an unreasonable burden on the reader if they wish to verify a fact. Per WP:PAGENUM, if it's a book then the location in the book (page numbers if available) should generally be specified. RockMagnetist(talk) 22:09, 4 May 2020 (UTC)
I don't understand your comments about computing with real or complex numbers. Really, all I meant by "Polynomials are pretty well behaved over the complex numbers" was that I agree with your statement in the article: "Searching for the real or rational solutions are much more difficult problems". So complex solutions are a good place to start. RockMagnetist(talk) 22:09, 4 May 2020 (UTC)
I think my biggest mistake was trying to provide a general definition too quickly. I'm a user of numerical algebraic geometry, not a mathematician, and my knowledge of algebraic geometry is superficial at best. However, I am confident that a lot can be said about the solutions, using examples, before the full machinery is introduced: books like Bates et al. and Morgan (2009) do exactly that. So I'll give it a try and rely on you to catch any errors. RockMagnetist(talk) 22:09, 4 May 2020 (UTC)
About computation with real and complex numbers: most real numbers do not have any finite representation; therefore they cannot be written down, and one cannot compute with them. The numbers that are called "real" in programming languages are binary numbers that form a small subset of the real numbers (and even of the rational numbers). The fact that a real number is different from its approximation by a "computer real" is forgotten by many programmers, and has caused many software errors, some with a cost of millions of dollars.
It is sure that many things can be said about the solutions. Most of algebraic geometry consists of saying things about the solutions. This is not a joke: if algebraic geometry is such a difficult area of mathematics, it is exactly because it is difficult to say nontrivial things. Even the meaning of "solving" is not trivial, and this is the reason for the section "What is solving?". So, one must carefully choose the things that are to be said about the solutions.
One of the elementary things that can be said is that when the number of solutions is finite, their number can be large. Your example of the singularities of the Barth sextic is excellent for showing that.
Among the apparently simple questions that a reader can be interested in are: testing whether there are solutions, whether their number is finite, and computing the dimension of the set of solutions. These are almost as difficult as solving, as all these problems generally require the computation of a Gröbner basis.
On the other hand, the article could (or should) be expanded by explaining better some features that occur frequently in applied problems. The first feature is that it is frequent that the interesting solutions are those where some variables (or some expressions involving the variables) are nonzero. This is often the case for variables representing distances or other physical quantities. When a solution does not satisfy these nonzero constraints, it is said to be degenerate. The problem with degenerate solutions is that they may make solving much more difficult. For example, it is frequent that a system with finitely many non-degenerate solutions has infinitely many degenerate ones. Thus it is often useful, before any other computation, to use saturation to eliminate the degenerate solutions. In practice, it is common that this makes relatively easy a problem that would otherwise be intractable.
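(A concrete toy illustration of such a saturation, via the classical trick of adding t·g − 1 for a fresh variable t and eliminating t; the example system is my own, and sympy is used only because it is widely available:)

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

# The whole line x = 0 is a degenerate component: infinitely many
# solutions lie on it, besides the two points where y = x meets the
# unit circle.
f1 = x * (x**2 + y**2 - 1)
f2 = x * (y - x)

# Rabinowitsch trick: t*x - 1 = 0 forces x != 0; eliminating t from a
# lex basis yields the saturation of the ideal by x.
G = groebner([f1, f2, t*x - 1], t, x, y, order='lex')
saturated = [g for g in G if not g.has(t)]
print(saturated)   # generators vanishing exactly on the nondegenerate part
```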
Another aspect that is common to many applied problems occurs for systems depending on parameters that have a finite number of solutions for almost all values of the parameters. It is useful to know how many of these solutions are real, as a function of the parameters. It may occur (there are some examples) that some solutions are real only in a very small region of the parameter space. This may reveal an unknown physical phenomenon that can hardly be detected by experiments or numerical computation. Thus a section on parametric systems would be useful for practitioners. Writing such a section would be difficult for me, as I am one of the authors of the main article on this subject (WP:COI).
So this article could or should be expanded in many directions. However, one must keep in mind the natural audience of this article. This is not college students, for whom system of equations is more suitable. The natural audience is (or should be) practitioners who have encountered problems that they do not know how to solve. For them, toy examples are not useful, and may be confusing, as they hide the intrinsic difficulty of the problem. D.Lazard (talk) 11:41, 5 May 2020 (UTC)
D.Lazard and RockMagnetist, thanks for your efforts for maintaining the high quality of this article.

"Other fields of coefficients, such as the real numbers, are less often used, as their elements cannot be represented in a computer (only approximations of real numbers can be used in computations, and these approximations are always rational numbers)."

I'm a little confused by this phrase. While this is certainly true if we are talking about base-2 representation of irrational numbers in floating point, we can still represent them symbolically and do symbolic computations with irrational numbers, right? Saung Tadashi (talk) 23:29, 7 May 2020 (UTC)
Irrational algebraic numbers can be used in theory, but are rarely used explicitly in practice, as explained in the section "Extensions". Transcendental numbers must be represented as new indeterminates, and this amounts to working over a field of rational fractions. By clearing denominators, one can again reduce the problem to integer coefficients. By doing this, the inequalities satisfied by the irrational numbers are lost. If one needs to consider them, one has a system of equations and inequalities. This is a different theory, which is outside the scope of this article, and deserves to have a specific article. This is a wide subject. You can have a partial view of it by following the links in System of inequalities. It is among my projects to write this article, but there are so many other gaps in WP, .... By the way, Real-root isolation is another, strongly related, subject, and some of the reasons for working only with integer coefficients are explained in Real-root isolation § Specifications and general strategy. D.Lazard (talk) 03:00, 8 May 2020 (UTC)
@D.Lazard: Which article on system of equations do you have in mind? The link is to a disambiguation page. RockMagnetist(talk) 20:45, 8 May 2020 (UTC)
Sorry, I had not checked WP before writing this post. System of equations was originally conceived as a WP:broad-concept article that should be expanded to include all basic facts that are common to all listed systems of equations. I forgot that this expansion has never been done. D.Lazard (talk) 14:26, 9 May 2020 (UTC)
I think what we're seeing here is a typical culture clash between pure mathematicians, who want to see the general case laid out in all its rigor and glory; and applied mathematicians or scientists, who have problems to solve but don't have the time to work through a book on algebraic geometry in the hope it might turn out to be useful to them. Both are valid viewpoints. Clearly we need someone like you, D.Lazard, writing for the mathematicians. But as one of the latter, I know the value of "toy examples". They are a good way to start the article and build some intuition before diving into abstractions. Anyway, instead of discussing it further, I'll give it a try when I have time and we'll see whether it works for you. RockMagnetist(talk) 01:02, 9 May 2020 (UTC)
I disagree that this article has been written for pure mathematicians. It was written for applied mathematicians and scientists who encounter polynomial systems during their work, and do not understand why they have problems with them (I have frequently heard the wrong assertion that "a polynomial system cannot be solved", when the true assertion is that "a polynomial system cannot be solved with the usual numerical methods"). By the way, I have read again the beginning of the article. This led me to make some edits, namely:
  • Moving the section "Extensions" after "What is solving?"
  • Fixing grammar errors and phrasing ambiguities that were there for years
  • Moving all mentions of "algebraically closed" into parentheses, and replacing them by "complex"
IMO, this is a starting point in the direction of your goal. The next step would be to add examples for every definition given in the section "Basic properties and definitions". Also, it could be better to merge the two sections having "definition" in their headings, or to restructure them in another way.
Again, the subject is intrinsically difficult, and hiding this difficulty is certainly not a good idea. For example, it is rather intuitive that underdetermined systems have infinitely many solutions if they have any. This is intuitive because linear systems have the same property. However, the proof is much more difficult in the polynomial case. The difficulty of the proof must not be hidden, and this is the reason why I have just added an indication of the tools that are involved. D.Lazard (talk) 16:05, 9 May 2020 (UTC)