Talk:Taylor series/Archive 2
This is an archive of past discussions about Taylor series. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3
Note re GAN nomination
(copied from review page accidentally started by nominator) Jezhotwells (talk) 19:09, 11 April 2011 (UTC) I started this review process as a recent author of the page Taylor's theorem, who was naturally checking out the contents of this page. This is my first GAN suggestion and I am not quite familiar with the process beyond what was said in Wikipedia:Reviewing good articles. My apologies.
It seems that while not perfect, this article is well above the typical "B class" articles. It has some helpful pictures; some of them might benefit from editing, but I don't see this as a crucial flaw. The small number of in-line references is probably the biggest flaw. In general the page is very informative, very helpful for students and researchers, with a number of explicit Taylor expansions (which I have not verified), and it has good coverage of the relationship of Taylor series to other subjects in mathematical analysis. While it might lead to stepping on thin ice regarding POV, I feel that this trap has been successfully avoided. Even if this review process is not favourable, I hope it initiates a final thrust for the devoted authors of this page to finish the great job. Lapasotka (talk) 10:17, 11 April 2011 (UTC)
- I have nominated the false start for deletion. Please follow the instructions and don't start the review page when nominating. Thanks. Jezhotwells (talk) 19:09, 11 April 2011 (UTC)
References
I suggest some references:
- MR1411907 Boas, Ralph P. A primer of real functions. Fourth edition. Revised and with a preface by Harold P. Boas. Carus Mathematical Monographs, 13. Mathematical Association of America, Washington, DC, 1996. xiv+305 pp. ISBN: 0-88385-029-X
- MR1916029 Krantz, Steven G.; Parks, Harold R. A primer of real analytic functions.
Second edition. Birkhäuser Advanced Texts: Basler Lehrbücher. [Birkhäuser Advanced Texts: Basel Textbooks] Birkhäuser Boston, Inc., Boston, MA, 2002. xiv+205 pp. ISBN: 0-8176-4264-1
- MR1234937 Ruiz, Jesús M. The basic theory of power series. Advanced Lectures in Mathematics. Friedr. Vieweg & Sohn, Braunschweig, 1993. x+134 pp. ISBN: 3-528-06525-7
There are also books on analytic geometry that would be relevant:
- MR1760953 de Jong, Theo; Pfister, Gerhard Local analytic geometry. Basic theory and applications. Advanced Lectures in Mathematics. Friedr. Vieweg & Sohn, Braunschweig, 2000. xii+382 pp. ISBN: 3-528-03137-9
- MR1131081 Łojasiewicz, Stanisław Introduction to complex analytic geometry. Translated from the Polish by Maciej Klimek. Birkhäuser Verlag, Basel, 1991. xiv+523 pp. ISBN: 3-7643-1935-6
Kiefer.Wolfowitz (Discussion) 10:11, 13 April 2011 (UTC)
Why does "Series expansion" redirect here?
Currently, Series expansion redirects here. I wonder if this is correct. A Taylor series is just one example of a series expansion. Others are Maclaurin series, Laurent series, Legendre polynomials, Fourier series, Zernike polynomials, and several others.
I would suggest changing Series expansion into a short article defining the term "series expansion" and showing a list of links to articles on all kinds of series.
HHahn (Talk) 12:03, 19 May 2011 (UTC)
- Good idea. Go ahead! Jakob.scholbach (talk) 13:56, 19 May 2011 (UTC)
- Thanks. I did. Please have a look for "englishification" (I am not a native speaker of English). HHahn (Talk) 17:39, 21 May 2011 (UTC)
Too many examples?
I think this page is overburdened by example calculations. Remember that Wikipedia is not supposed to be a textbook. I suggest making the section "List of Maclaurin series" its own page and collecting all other examples in one section, leaving only two or three of them. Lapasotka (talk) 15:35, 24 January 2012 (UTC)
- I agree. Sławomir Biały (talk) 21:03, 24 January 2012 (UTC)
Unheadered junk
In the example section following the "The Maclaurin series for (1 − x)^-1" it is stated "so the Taylor series for x^-1 at a = 1 is". I'm not sure it is obvious why the Taylor series for x^-1 logically follows from the Maclaurin series given. Also, right after that example, how does the integral of 1 = -x? --Knoxjeff (talk) 19:04, 26 March 2012 (UTC)
The Taylor series expansion for arccos is notably missing. I tried simplifying it myself. Maybe someone else can figure it out and add it? --Jlenthe 01:08, 9 October 2006 (UTC)
How did Maclaurin publish his special case of the Taylor theorem in the 17th century (i.e. 1600's) if he was born in 1698? I suspect this is a mistake.
--For the "List of Taylor series" I would like to have the first few terms of each series written out for quick reference. I could do it myself, but I don't want to mess anything up.
- Here's a start: I added the first few terms of tan x. 24.118.99.41 06:58, 24 April 2006 (UTC)
"Note that there are examples of infinitely often differentiable functions f(x) whose Taylor series converge but are not equal to f(x). For instance, all the derivatives of f(x) = exp(-1/x²) are zero at x = 0, so the Taylor series of f(x) is zero, and its radius of convergence is infinite, even though the function most definitely is not zero."
f(x) has no Taylor series for a=0, since f(0) is not defined. You have to state explicitly that you've defined f(x)=exp(-1/x²) for x not equal to 0 and f(0)=0 . This is merely lim[x->0] f(x), but it is a requirement for rigor.
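For reference, the standard way to state this counterexample, with the value at 0 made explicit as requested above, is (a well-known fact, not a quotation from the article):
    f(x) = \begin{cases} e^{-1/x^{2}}, & x \neq 0 \\ 0, & x = 0 \end{cases} \qquad\text{with}\qquad f^{(n)}(0) = 0 \ \text{for every } n \ge 0,
so the Maclaurin series of f is identically zero: it converges everywhere, but it agrees with f only at x = 0.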
- Don't complain, fix! Wikipedia:Be bold in editing pages. -- Tim Starling 02:03 16 Jun 2003 (UTC)
By the way, would people call a Taylor series? Or does it have a name at all? If someone said something about a Taylor series of a 2D (or n D) function, I'd guess they meant something like that... Also, can the term analytic function refer to a 2D function? Κσυπ Cyp 19:01, 17 Oct 2003 (UTC)
- 1st question: sure, see e.g. http://www.csit.fsu.edu/~erlebach/course/numPDE1_f2001/norms.pdf - Patrick 19:51, 17 Oct 2003 (UTC)
- I just shoved it quickly into the article, at the bottom. Κσυπ Cyp 21:49, 17 Oct 2003 (UTC)
- Does the double sum in 2D form (in that PDF file) mean that I first have to go through the whole range of "r" and then increase "s" by one and yet again go for "r"s? I'm slightly confused (probably of my fault). 83.8.149.147 18:42, 17 January 2007 (UTC)
Shouldn't the article include something about the "Taylor" for whom the series are named? If I knew, I'd do it myself Dukeofomnium 16:41, 5 Mar 2004 (UTC)
- Good idea. Often a good way to start investigating such things is to click the "What links here" link on the article page. In this case, that reveals that the Brook Taylor page links to the article. -- Dominus 19:00, 5 Mar 2004 (UTC)
What is a "formulat"? it's on the last line. A typo or a word I'm unfamiliar with? Goodralph 16:28, 2 Apr 2004 (UTC)
Edited the geometric series to include cases where n might not start from zero. Stealth 17:22, Feb 19, 2005 (UTC)
In the Taylor series formula, what happens if x=a? when n=0 we get 0 raised to the 0th power, which is undefined. The formula is correct if we define 0^0=1.
The Taylor series is also alternatively defined as follows (I'm using LaTeX notation here): f(x + h) = f(x) + h f^{\prime}(x) + (h^2/2!) f^{\prime \prime}(x + \theta h) for some 0 < \theta < 1. I'm new to this field, so I'm reading up on this a bit before I can add this to the article with suitable comments, but I didn't find this form mentioned on MathWorld or in most pages in the top 10 Google hits for Taylor series.
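For context: the formula above is the n = 1 case of Taylor's theorem with the Lagrange form of the remainder, whose general statement (a standard result, not specific to this article) is
    f(x+h) = \sum_{k=0}^{n}\frac{h^{k}}{k!}\,f^{(k)}(x) + \frac{h^{n+1}}{(n+1)!}\,f^{(n+1)}(x+\theta h), \qquad 0 < \theta < 1,
valid for f sufficiently differentiable between x and x + h. It gives an exact expression for the error of the truncated Taylor series rather than the series itself.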
How can you use Taylor series for integration? Also, could someone actually put a Maclaurin series in it so I can see how much it differs from a Taylor series?
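On the integration question: one common use is term-by-term integration of a known series. A standard example (a well-known identity, not taken from the article) is, for |x| < 1,
    \frac{1}{1+t^{2}} = \sum_{n=0}^{\infty}(-1)^{n}t^{2n} \quad\Longrightarrow\quad \arctan x = \int_{0}^{x}\frac{dt}{1+t^{2}} = \sum_{n=0}^{\infty}\frac{(-1)^{n}x^{2n+1}}{2n+1}.
As for the second question: a Maclaurin series is simply a Taylor series centred at a = 0.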
Zeno's Paradox
"The Greek philosopher Zeno considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility: the result was Zeno's paradox."
This is a gross misstatement of Zeno's Paradox: see Atomism and Its Critics (Pyle) for an extensive discussion of this. — Preceding unsigned comment added by GeneCallahan (talk • contribs) 04:12, 20 September 2012 (UTC)
Eponym
From a recent edit summary "The name origin needs to be defined right after the name itself" (as opposed to discussing the eponym in the second sentence, as the original version of the lead does). I have two issues with this. First, it seems very unlikely that we have any such guideline. If we do, then I would like a link. I have seen many articles that do not do this, so if there is a guideline, it seems to be largely ignored. Secondly, writing is typically clearer when each sentence expresses just one idea. Insisting that we cram all kinds of mandatory information into the first sentence only garbles the text. Sławomir Biały (talk) 21:09, 21 November 2012 (UTC)
- Where the name comes from is important; it is not *more* important than a clear statement of what a Taylor series is. The current formulation is better than the proposed alternative. --JBL (talk) 03:13, 22 November 2012 (UTC)
Taylor Series for arctan(x)
Doesn't the Taylor series for arctan(x) converge on the interval [-1,1]? It is listed as converging on the open interval (-1,1). — Preceding unsigned comment added by 70.164.249.253 (talk) 05:26, 21 January 2013 (UTC)
- That's right, I've corrected it. --JBL (talk) 15:37, 21 January 2013 (UTC)
- These series all assume a complex argument, so the disc |x|<1 is correct. One could say also that it converges at all boundary points except x = ±i. I don't know if it's worth it though. Sławomir Biały (talk) 17:14, 21 January 2013 (UTC)
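To spell out the boundary behaviour mentioned above (standard facts about this series, not about the article's wording): on the boundary |x| = 1 the series
    \arctan x = \sum_{n=0}^{\infty}\frac{(-1)^{n}x^{2n+1}}{2n+1}
converges at every point except x = ±i; at x = i, for instance, the terms reduce to i/(2n+1), so the series diverges like the harmonic series.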
Euler's Formula
This page would be a good place to derive exp(ix) = cos(x) + i sin(x) by adding their Taylor series, and thus exp(pi i) + 1 = 0, which everyone loves. — Preceding unsigned comment added by 108.39.200.125 (talk) 12:51, 27 April 2013 (UTC)
- Surely a better place to derive such a formula would be at the Euler's formula page. Lo and behold, among the proofs there is the power series one that you suggest. Sławomir Biały (talk) 12:55, 27 April 2013 (UTC)
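For anyone landing here, a minimal sketch of the series argument referred to above (the standard derivation, reproduced rather than quoted from either article):
    e^{ix} = \sum_{n=0}^{\infty}\frac{(ix)^{n}}{n!} = \sum_{k=0}^{\infty}\frac{(-1)^{k}x^{2k}}{(2k)!} + i\sum_{k=0}^{\infty}\frac{(-1)^{k}x^{2k+1}}{(2k+1)!} = \cos x + i\sin x,
using i² = −1 and absolute convergence to regroup the terms; setting x = π then gives e^{iπ} + 1 = 0.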
Small Tip
I'm not a native English speaker, so I could be wrong, but:
"The Taylor series of a real or complex-valued function ƒ(x) that is infinitely differentiable in a neighborhood of a real or complex number a is the power series " Why asking to be in a whole neiborhood, why not just in a point? You don't need absolutely the neiborhood, because you are just doing derivatives in that point, and using limits of other points, but you don't need all the neiborhood to be diferentiable.Sorry for the grammar. — Preceding unsigned comment added by Santropedro1 (talk • contribs) 06:28, 16 June 2013 (UTC)
- If you put x = a into the definition of the Taylor series, you just get that the whole Taylor series is f(a), so there's really no "series" at a single point. Sławomir Biały (talk) 11:59, 16 June 2013 (UTC)
- I see that my reply was not quite the point you were making. You're right that we only need infinite differentiability at a in order to define the Taylor series. Sławomir Biały (talk) 13:27, 16 June 2013 (UTC)
2. It's hard to understand this part: "By integrating the above Maclaurin series we find the Maclaurin series for log(1 − x), where log denotes the natural logarithm", near the beginning of the article, in the Examples part. I don't get it; it should be more clear. — Preceding unsigned comment added by Santropedro1 (talk • contribs) 07:44, 16 June 2013 (UTC)
- Could you be more specific about the difficulty you're having with that statement? Sławomir Biały (talk) 11:59, 16 June 2013 (UTC)
Multi-index notation
I see a lot of people replacing the product of factorials $\alpha_1!\cdots\alpha_d!$ in the multivariable version of the Taylor series with the factorial of the sum, $(\alpha_1+\cdots+\alpha_d)!$. First, let me say that the formula in the article is obviously correct. All one needs to do is to compute a partial derivative $\partial^\alpha$ of the Taylor series based at $a$ and see that it agrees on the nose with $\partial^\alpha f(a)$.
I am aware that some textbooks have a $\frac{1}{n!}$ in them, and it seems worthwhile to explain why this is also correct and agrees with what we have in the article. In these texts, the nth term of the Taylor series is not expressed with partial derivatives, but instead as a term of the form $\frac{1}{n!}\bigl((x-a)\cdot\nabla\bigr)^n f(a)$,
or any number of other obviously equivalent forms. If you expand these expressions out in partials and then use commutativity of mixed partials, there is a multinomial coefficient of $\frac{n!}{\alpha_1!\cdots\alpha_d!}$ multiplying each $\partial^\alpha f(a)$, because it appears this many times in the expansion. Now, observing that $\frac{1}{n!}\cdot\frac{n!}{\alpha_1!\cdots\alpha_d!} = \frac{1}{\alpha_1!\cdots\alpha_d!}$
explains why our formula agrees with this alternative one. Sławomir Biały (talk) 21:28, 16 November 2013 (UTC)
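Written out as a single displayed identity (a restatement of the bookkeeping above, assuming f is smooth enough that mixed partials commute):
    \frac{1}{n!}\bigl((x-a)\cdot\nabla\bigr)^{n} f(a) = \frac{1}{n!}\sum_{|\alpha|=n} \frac{n!}{\alpha_1!\cdots\alpha_d!}\,(x-a)^{\alpha}\,\partial^{\alpha}f(a) = \sum_{|\alpha|=n} \frac{(x-a)^{\alpha}}{\alpha_1!\cdots\alpha_d!}\,\partial^{\alpha}f(a),
where the middle step is just the multinomial theorem applied to the commuting operators (x_j − a_j)∂_j.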
Very unclear section
Wikipedia is supposed to be understandable to everybody, even nonmathematicians. Normally when an article talks about numbers, people implicitly assume it's only dealing with real numbers, but the section "Analytic functions" suddenly switches to complex numbers without saying at the beginning that it's doing so, and it uses the term "open disk" without complex numbers ever having been mentioned even once earlier in the article. That section ought to be about real numbers instead and say a function is analytic at a if there exists a number r > 0 such that it's equal to its Taylor series centred at a for all x in the open interval (a − r, a + r). Maybe there could instead be a separate section later in the article about Taylor series of functions of complex numbers.
There's also a possible alternate definition of a function being analytic at a. Maybe the article should just say a function is analytic at a if and only if there exists a positive number r such that there exists a way to express it as an infinite sum of powers of x − a in the open interval (a − r, a + r). The reason for the alternate definition is that all polynomials that can be expressed as an infinite sum of that form in that interval are already equal to their Taylor series in that interval. Blackbombchu (talk) 00:11, 27 November 2013 (UTC)
- Why should it deal with real numbers only? I don't agree with that at all. The article talks about both real and complex numbers. Leaving out complex numbers would be a serious omission. Sławomir Biały (talk) 01:52, 27 November 2013 (UTC)
Taylor series of f
Is it just me or should the Taylor series of f be written "T(f)" rather than "T(x)"...? Fresheneesz 02:26, 6 March 2006 (UTC)
- As a formal object, the Taylor series depends on the function f and the center a, so the notation T(f) would be better, or T(f,a) would be better still. However, on a more concrete level, the Taylor series should be viewed as a function, which I suppose the notation T(x) is meant to indicate. Notation is of course arbitrary. I am actually not aware of any standard notation for Taylor series, so I don't know whether there is a good precedent for using this slightly inaccurate notation. -lethe talk + 17:47, 31 March 2006 (UTC)
- How about T(f,a;x) or T(f,a)(x)?130.234.198.85 23:20, 25 January 2007 (UTC)
- I think the answer to that is no. But! I have another question. Is a Taylor series a special case of power series? Because if so that should be noted in the definition, not as a passing comment. Fresheneesz 11:02, 29 March 2006 (UTC)
By 'special case' of a power series, are you asking if more than one unique power series converges to the function just like the Taylor series? Because if I remember right, which I don't always do, the Taylor series is not the only type of power series that can converge to an arbitrary function. 19:34, 17 July 2006 (UTC)
- The Taylor series can be based on the derivatives of the function at any value of x, not just 0 as in the Maclaurin series. Except for the trivial case of a constant function, all the series will be different and still represent the same function. -- Petri Krohn 22:48, 31 October 2006 (UTC)
- For the record (this discussion is very old now), a Taylor series does not have to have nonzero radius of convergence, so it does not necessarily define any function outside its center; therefore writing T(x) in the general case is just nonsense. T(f) would be better and T(f,a) to make the center explicit better still. T(f,a)(x) on the other hand again mistakenly suggests a function of x (while it is just a formal power series in x). In fact everything about the Taylor series representing a function (including the opening sentence of this article) is wrong in the general (non-analytic smooth function) case. Marc van Leeuwen (talk) 07:32, 2 December 2013 (UTC)
Taylor series do not need to represent the (or indeed any) function
I take issue with the opening sentence of this article "a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point". While there is no doubt that Taylor used the series to represent the original function, and that in many cases representing the function is what one has in mind when forming the Taylor series, it is also a fact that any formal series occurs as the Taylor series of some smooth function R→R (indeed of infinitely many of them). Thus in general there is nothing one can say about the coefficients of a Taylor series, nor therefore about its convergence. I think the article should be honest about this, and not say anything about Taylor series representing functions, unless the context is sufficiently narrow (for instance if one starts from an analytic function) to ensure that this is true. And it would be good to state explicitly that a Taylor series is a formal power series. Marc van Leeuwen (talk) 07:45, 2 December 2013 (UTC)
- I agree with this.
An easy fix is to replace "function" in the first sentence with "analytic function". I'll go ahead and do this. On second thought, part of the lead elaborates on the issue of convergence, so it might be better to remove "representation of a function" in favor of something like "power series". Sławomir Biały (talk) 13:06, 2 December 2013 (UTC)
Much better looking way to write all those mathematical expressions in the Examples section.
The expression (x - 1) - 1/2(x - 1)² + 1/3(x - 1)³ - 1/4(x - 1)⁴ + … was written by writing <math>(x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\frac{1}{4}(x-1)^4+\cdots,\!</math> in the code. It could instead easily be made to look like (x - 1) - 1/2(x - 1)² + 1/3(x - 1)³ - 1/4(x - 1)⁴ + …, by writing (x - 1) - {{sfrac|1|2}}(x - 1)<sup>2</sup> + {{sfrac|1|3}}(x - 1)<sup>3</sup> - {{sfrac|1|4}}(x - 1)<sup>4</sup> + …, in the code. The same can be done for all of the expressions in that section except for the last one, because of its sigma notation, and the second-last one, because of its combination of a superscript and a subscript. I think those 2 expressions should instead be made to look like log(x₀) + 1/x₀(x - x₀) - 1/x₀²·(x - x₀)²/2 + … and 1 + x¹/1! + x²/2! + x³/3! + x⁴/4! + x⁵/5! + … = 1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + … = Σ xⁿ/n!. I will edit all parts of mathematical expressions that can be converted from images to text in the article if no one opposes it in the next 5 days. Maybe that will start the wiki evolving to have a source code that can handle even more complex expressions than superscripts, subscripts, and fractions. Blackbombchu (talk) 01:38, 27 November 2013 (UTC)
- Formulas that appear on their own line should normally be displayed using latex rather than HTML. The latex code supports a variety of different output formats and is more easily maintainable than html alternatives. When formulae are inline, then opinions among editors are evenly split between those that prefer latex and those that prefer HTML, but it's considered inappropriate (in the spirit of WP:RETAIN) to change from one style to another in an article. Sławomir Biały (talk) 01:49, 27 November 2013 (UTC)
- Maybe just the inline formulas from the Examples section and from the Example section should be edited into regular text, since they're in the middle of a line with regular text. Blackbombchu (talk) 02:21, 27 November 2013 (UTC)
- Yes, I agree with that proposed change. Sławomir Biały (talk) 02:30, 27 November 2013 (UTC)
- Basically we keep LaTex for large formulas in the hope that one day the software folks will figure out how to display it in a size compatible with the surrounding text. Apparently it is quite a hard problem. 150.203.160.15 (talk) 03:13, 28 November 2013 (UTC)
- Wolfram MathWorld already did that. Maybe Wikipedia could interact more with Wolfram MathWorld to find out how to solve that problem. Don't tell me this belongs in a proposal. I already suggested in a proposal that Wikipedia can probably do it if Wolfram MathWorld did it. Blackbombchu (talk) 02:54, 3 December 2013 (UTC)
- You can enable MathJax under preferences, but it's only in beta at the moment. Historically, Wikipedia has been rather slow at rolling out new software because of the need to support a large variety of legacy software (*cough* IE 7 *cough*). But if you're bothered by the way latex equations display, then you should enable this option. Sławomir Biały (talk) 16:02, 3 December 2013 (UTC)
Whole lot of missing information in the Analytic functions section
I'm pretty sure that a real function with domain R can be infinitely differentiable without being analytic, but all complex functions with domain C that are differentiable on C are also infinitely differentiable and analytic on C. For that reason, I think it would be better for that section to define analytic for real functions and also define it the same way for complex functions, but discuss that the criterion of being analytic is pretty much meaningless for a complex function. I can't think of a proof, but maybe someone could try and hunt for that information in a reliable source. I think a complex function can be infinitely differentiable at a point without being analytic at that point, but it can't be singly differentiable in an open disc without being analytic in that open disc. Even the function that assigns a + 2bi to a + bi for all real numbers a and b is not differentiable. I was using the same definition of a derivative for complex functions as for real functions. Blackbombchu (talk) 02:13, 3 December 2013 (UTC)
- In either the real or complex case, infinite differentiability at a point does not guarantee analyticity. The Cauchy-Goursat theorem tells you that if a function is complex differentiable in a whole disc, then it's analytic in the disc, but this is a much stronger condition than smoothness at a single point. Sławomir Biały (talk) 16:05, 3 December 2013 (UTC)
Alternating "Taylor" and "Maclaurin" in the Examples section
Maybe it's just me, but does anyone else find the constantly alternating names for the series irritating? Thomas J. S. Greenfield (talk) 00:30, 30 March 2014 (UTC)
- It's a little jarring, but it seems that the idea is to get both terms into play at the beginning. This might be helpful to some readers (even if I personally would just call everything a "Taylor series"). Sławomir Biały (talk) 12:35, 30 March 2014 (UTC)
Log Base What?
This page uses a logarithm function, but does not give the base of the log. —Preceding unsigned comment added by 72.196.234.57 (talk) 22:37, 18 September 2007 (UTC)
- You're right, that should be mentioned. Thanks, now fixed. -- Jitse Niesen (talk) 01:01, 19 September 2007 (UTC)
- Also, I have rewritten the natural logarithm as the usual "ln" (as opposed to the former "log", which implies base 10). Gulliveig (talk) 04:09, 16 September 2008 (UTC)
- Not true. That's only by some conventions. Whenever I write 'log', it means the natural log, not the log base 10. You will also find the same convention adopted in most mathematical analysis textbooks. siℓℓy rabbit (talk) 10:58, 16 September 2008 (UTC)
- I agree with Gulliveig. Briancady413 (talk) 18:33, 7 August 2014 (UTC)
Taylor series in several variables
First formula in the above-mentioned section of the article - shouldn't there be the factorial of the sum of the indices instead of the product of the factorials of the individual indices? — Preceding unsigned comment added by 213.150.1.136 (talk • contribs) 2014-09-11T16:29:33
- No, see above. Sławomir Biały (talk) 16:58, 11 September 2014 (UTC)
- To 213.150.1.136: Perhaps you are confused by thinking that the factorial of the sum is less than the product of the factorials. In fact, the reverse is true: 2!·5!·3!<(2+5+3)! . JRSpriggs (talk) 06:40, 12 September 2014 (UTC)
Suggested summation with 0^0 undefined
The latter representation having the advantage of not having to define 0^0. Some indication of the controversy over the definition of 0^0 can be found at Exponentiation under the heading "History of differing points of view."
--Danchristensen (talk) 04:29, 12 May 2015 (UTC)
- In this setting x^0 = 1 even if x is zero. See Exponentiation#Zero to the power zero for explanation. On this point, it is important that the article should agree with most sources on the subject. The article does explain what x^0 and 0! are. By inventing our own way of writing power series, we would have the unintended effect of making the article more confusing vis a vis most sources, instead of less confusing. So I disagree strongly with the proposed change, unless it can be shown to be a common convention in mathematics sources. Sławomir Biały (talk) 11:17, 12 May 2015 (UTC)
- 0^0 is usually left undefined on the reals. --Danchristensen (talk) 14:01, 12 May 2015 (UTC)
- That's a common misconception. For power functions associated with integer exponents, exponentiation is defined inductively, by multiplying n times. For x^0, this is an empty product, so equal to unity. It's true that if we are looking at the real exponential, then x^r is defined as e^{r log x}. But that actually refers to a different function. For more details, please see the link I provided. It is completely standard in this setting to take 0^0=1. See the references included in the article. Sławomir Biały (talk) 14:37, 12 May 2015 (UTC)
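For concreteness, the notational point at issue (stated here only as the reading most sources give the power-series notation, per the discussion above):
    \sum_{n=0}^{\infty} a_{n}(x-a)^{n} = a_{0} + a_{1}(x-a) + a_{2}(x-a)^{2} + \cdots,
that is, the n = 0 term is read as the constant a₀ even at x = a, which is exactly the convention (x − a)^0 = 1. Writing the a₀ term separately, as proposed, yields the same series without invoking 0^0.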
- "A common misconception?" Really? --Danchristensen (talk) 15:50, 12 May 2015 (UTC)
- This discussion page is not a forum for general debate. If you have sources you want us to consider, please present them. Otherwise, I regard this issue as settled, per the sources cited in the article. Sławomir Biały (talk) 16:15, 12 May 2015 (UTC)
- "A common misconception?" Really? --Danchristensen (talk) 15:50, 12 May 2015 (UTC)
After this edit, the proposed text now includes a passage "The latter representation having the advantage of not having to define 0^0." According to whom is this an advantage? What secondary sources make this assertion? Sławomir Biały (talk) 11:57, 13 May 2015 (UTC)
- As pointed out in the original article, the summation there depends on defining 0^0 = 1, a controversial point for many. I present a version that does not depend on any particular value for 0^0. --Danchristensen (talk) 15:13, 13 May 2015 (UTC)
- You have still not presented any textual evidence that x^0=1 is remotely controversial in the setting of power series and polynomials. And the consensus among editors and sources alike appears to contradict this viewpoint. Sławomir Biały (talk) 16:08, 13 May 2015 (UTC)
- Some indication of the controversy can be found at Exponentiation under the heading "History of differing points of view." --Danchristensen (talk) 17:08, 13 May 2015 (UTC)
- Taylor series and polynomials do not appear to be listed there. Sławomir Biały (talk) 17:20, 13 May 2015 (UTC)
- Key points: "The debate over the definition of has been going on at least since the early 19th century... Some argue that the best value for depends on context, and hence that defining it once and for all is problematic. According to Benson (1999), 'The choice whether to define 0^0 is based on convenience, not on correctness.'... [T]here are textbooks that refrain from defining " --Danchristensen (talk) 17:40, 13 May 2015 (UTC)
- Yes, and abundant reliable sources make the decision in the context of Taylor series and polynomials to define 0^0 = 1, and this article is rightly written to reflect these sources. Whether you happen to think this consensus (of both reliable sources and editors of this page) is morally right or not is totally irrelevant. --JBL (talk) 17:51, 13 May 2015 (UTC)
- Key points: "The debate over the definition of has been going on at least since the early 19th century... Some argue that the best value for depends on context, and hence that defining it once and for all is problematic. According to Benson (1999), 'The choice whether to define 0^0 is based on convenience, not on correctness.'... [T]here are textbooks that refrain from defining " --Danchristensen (talk) 17:40, 13 May 2015 (UTC)
- Morally right??? Come now. As we see in Exponents, there is some controversy -- opposing camps, if you will -- on the matter of 0^0. This article would be more complete with at least a nod in the direction of, if not an endorsement of, the "other camp" in this case. The summation I suggested is not anything radical. It follows directly from the original summation. It simply does not depend on any particular value for 0^0. --Danchristensen (talk) 18:33, 13 May 2015 (UTC)
- This is obviously not a reliable source, and also does not support your view that we should write Taylor series in a nonstandard way. If you do not have a reliable source, there is no point in continuing this conversation. (Actually there is no point whether or not you have a reliable source because it is totally clear that there is not going to be consensus to make the change that you want, but it is double-extra pointless without even a single reliable source to reference.) --JBL (talk) 19:53, 13 May 2015 (UTC)
- P.S. If you want someone to explain "morally right" or why the history section of a different article is irrelevant, please ask on someone's user talk page instead of continuing to extend and multiply these repetitive discussions on article talk pages. --JBL (talk) 19:53, 13 May 2015 (UTC)
- It shows how one author worked around 0^0 being undefined for a power series -- using a similar idea to the one I proposed for Taylor series. See link to textbook at bottom of page. The relevant passage is an excerpt. --Danchristensen (talk) 20:05, 13 May 2015 (UTC)
- Here are some sources that do not make this special distinction for Taylor series: G. H. Hardy, "A course of pure mathematics", Walter Rudin "Principles of mathematical analysis", Robert G. Bartle "Elements of real analysis", Lars Ahlfors "Complex analysis", Antoni Zygmund "Measure and integral", George Polya and Gabor Szego "Problems and theorems in analysis", Erwin Kreyszig "Advanced engineering mathematics", Richard Courant and Fritz John, "Differential and integral calculus", Jerrold Marsden and Alan Weinstein, "Calculus", Serge Lang, "A first course in calculus", Michael Spivak "Calculus", George B. Thomas "Calculus", Kenneth A. Ross "Elementary Analysis: The Theory of Calculus", Elias Stein "Complex analysis". I've only included sources by mathematicians notable enough to have their own Wikipedia page. I assume we should go with the preponderance of sources on this issue, per WP:WEIGHT. Sławomir Biały (talk) 00:34, 14 May 2015 (UTC)
- It would be interesting to hear how they justified their positions, if they did. Was it correctness, or, as Benson (1999) put it, simply convenience. Or doesn't it matter? --Danchristensen (talk) 02:49, 14 May 2015 (UTC)
- It doesn't matter. --JBL (talk) 04:03, 14 May 2015 (UTC)
- Agree, it doesn't matter. We just go by reliable sources, not our own feelings about their correctness. Sławomir Biały (talk) 11:18, 14 May 2015 (UTC)
Nicolas Bourbaki, in "Algèbre", Tome III, writes (in translation): "The unique monomial of degree 0 is the unit element of the polynomial algebra; it is often identified with the unit element 1 of the base ring." Sławomir Biały (talk) 11:35, 14 May 2015 (UTC)
- Also from Bourbaki Algebra p. 23 (which omits the "often", and deals with exponential notation on monoids very clearly):
- Let E be a monoid written multiplicatively. For n ∈ Z the notation ⊤^n x is replaced by x^n. We have the relations
- x^(m+n) = x^m.x^n
- x^0 = 1
- x^1 = x
- (x^m)^n = x^(mn)
- and also (xy)^n = x^n.y^n if x and y commute.
- —Quondum 14:06, 14 May 2015 (UTC)
- (Shouldn't that be x^0 = e, where e is the multiplicative identity of E?) Have they not simply defined x^0 = 1? Note that the natural numbers have two identity elements: 0 for addition, 1 for multiplication. --Danchristensen (talk) 17:58, 14 May 2015 (UTC)
- I'm simply quoting exactly from Bourbaki (English translation). They are dealing with a "monoid written multiplicatively", where they seem to prefer denoting the identity as 1. Just before this, they give the additively written version with 0. And before that they use the agnostic notation using the operator ⊤, and there they use the notation e. —Quondum 18:41, 14 May 2015 (UTC)
- The problem with normal exponentiation on N is that you have two inter-related monoids on the same set (a semi-ring?). Powers of the multiplicative identity 1 are not the problem. The problem is with powers of the additive identity 0. It's a completely different structure. --Danchristensen (talk) 21:09, 14 May 2015 (UTC)
- How so? With an operation thought of as addition, we use an additive notation, and change the terminology as well as the symbols. We could call it "integer scalar multiplication" or whatever instead of "exponentiation"; I'd have to see what Bourbaki calls it (ref. not with me at the moment). Instead of x^n, we write n.x, meaning x+...+x (n copies of x). The entire theory of exponentiation still applies. —Quondum 22:16, 14 May 2015 (UTC)
- Please: this is not a forum! --JBL (talk) 22:42, 14 May 2015 (UTC)
Confused addition
An editor recently added an explanation " the next term would be smaller still and a negative number, hence the term x^9/9! can be used to approximate the value of the terms left out of the approximation. " for the error estimate in the series of sin(x). This explanation is just nonsense: the next term might be positive or negative (depending on the sign of x), and the sign of that term together with the magnitude of the next term (which might or might not be smaller, depending on the magnitude of x) is simply not enough information to make the desired conclusion, even in the real case. More importantly, it is simply not necessary to justify this claim here, and it distracts from the larger point being made in this section. --JBL (talk) 18:41, 13 July 2015 (UTC)
- As I expected your explanation is very poor, and you do need to provide one if you are going to revert a good edit. Yes, I see in the particular example how the signs alternate, and I also see that each term, in this particular example, is increasingly small. So, what precisely is your objection? I took the original explanation and expanded it just a little, saying that further terms are small and the next term in particular is negative, hence the term x^9/9! is a good approximation of the error introduced by the truncation. You need to do a number of things. First you need to learn to read; second you need to learn to respect others' edits. If an edit is completely off the mark, it should be deleted. If the edit is pretty close, then you should consider editing the edit to improve things just that much more. But if you are of the opinion that deleting is the answer whenever one single thing is wrong with an edit, we could extrapolate that attitude to the whole of Wikipedia, and in the end we will have nothing left, as Wikipedia is shot full of errors. My edit was not completely off the mark, hence it should be left and possibly improved. Please read the original material and then read my edits. Finally, if you are squatting on this article in the mistaken belief that you should be the arbiter of the "truth", you need to move to one side. I did not start a reversion war, you did. Thank you Zedshort (talk) 19:37, 13 July 2015 (UTC)
- Before I chime in on this, both of you have to stop edit warring over this (and both of you know that). @Zedshort: in particular I think your comment above is unnecessarily confrontational.
- Now, as for the content: it is true that Joel's objections over the sign of the error term are valid. The next term is −x^11/11!, which is negative if x is positive but positive if x is negative. Hence it is not prudent to refer to the error as being "positive" or "negative". In short I agree with Joel on this, although I will say that Zedshort is correct that the next error terms are not bigger in magnitude, because of Taylor's theorem.--Jasper Deng (talk) 19:47, 13 July 2015 (UTC)
- Yes, I see that I assumed the value of x to be positive. But the result is the same if dealing with negative values of x, as the next term is opposite in sign to the X^9/9! , further terms are diminishingly small and hence X^9/9! term provides an upper bound on the error introduced by the truncation. I will not apologize for being direct and to the point with someone regardless of who they are. Zedshort (talk) 20:04, 13 July 2015 (UTC)
- You would want to say then that the next term is opposite in sign or something along those lines. But I don't think it's necessary. Whatever its sign, the validity of the truncation is guaranteed by Taylor's theorem. All the terms of the exponential function's Taylor series are positive for positive x, but that doesn't change anything. In other words, I'd not want to imply to the reader that the sign of the terms have anything to do with it.--Jasper Deng (talk) 20:15, 13 July 2015 (UTC)
- Jasper Deng is right that the correct explanation is by Taylor's theorem. Zedshort's attempted version is not salvageable: in addition to the error about the sign, it is simply not true that the contributions from subsequent terms of the Taylor series get smaller and smaller in absolute value. At x = 12, the term x^9/9! is about 14000 and the term x^11/11! is about 18000. The error of the 7th-order polynomial at x = 12 is about 5000, but the fact that 5000 < 14000 does not follow from anything written by Zedshort.
- Even if the argument weren't wrong in all respects, it is unnecessary where placed and distracts from the point of the section. --JBL (talk) 20:53, 13 July 2015 (UTC)
- That however is incorrect in general. Please see the article on Taylor's theorem. For the series to converge subsequent terms must tend to zero. Therefore I can always find a point at which the error introduced by subsequent terms is less than any positive given error, for a given x. It may not be the 7th-order. It could be higher-order. But at some point, it is true that subsequent terms' contributions tend to zero.--Jasper Deng (talk) 21:01, 13 July 2015 (UTC)
- Yes of course for fixed x they eventually go to zero (and for the sine it is even true that they even eventually go monotonically to zero in absolute value, which need not be true in general) but there is no way to use that to rescue the edits in question. --JBL (talk) 21:15, 13 July 2015 (UTC)
I agree with the revert. This edit gives three false impressions: (1) that more terms in the Taylor series always leads to a better approximation, (2) that the error in the Taylor approximation is never greater than the next term of the Taylor series, and (3) that the sign of the next term in the Taylor series is relevant to reckoning the error. (Regarding the second item, in case it is not already clear, Taylor's theorem is what gives the actual form of the error, as well as estimates of it. The fact that x^9/9! is the next term of the Taylor series for sin(x) is only of peripheral relevance.) Reinforcing these misconceptions works against what the section tries to achieve, which is to emphasize the problems that can arise when applying the Taylor approximation outside the interval of convergence. Sławomir Biały (talk) 22:23, 13 July 2015 (UTC)
- I've reverted the edit as it seems that consensus is pretty much against the edit in question.--Jasper Deng (talk) 23:12, 13 July 2015 (UTC)
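A quick numerical check of the figures quoted above for x = 12 (a throwaway Python sketch, not proposed article content; it only verifies the arithmetic):

import math

x = 12.0
# next two terms of the sine series after the 7th-order truncation
print(x**9 / math.factorial(9))    # ~14219, i.e. "about 14000"
print(x**11 / math.factorial(11))  # ~18614, i.e. "about 18000"

# 7th-order Maclaurin polynomial of sin, evaluated at x
p7 = x - x**3/math.factorial(3) + x**5/math.factorial(5) - x**7/math.factorial(7)
print(abs(math.sin(x) - p7))       # ~5311, i.e. "about 5000"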
Multivariate Taylor Series
Why was the section on `multivariate Taylor series' removed by 203.200.95.130? (Compare the version of 17:53, 2006-09-20 vs that of 17:55, 2006-09-20). I am going to add it again, unless someone provides a good reason not to. -- Pouya Tafti 14:32, 5 October 2006 (UTC)
- I agree with Pouya as well! There's no separate article on multivariate Taylor series on wikipedia, so it should be mentioned here.Lavaka 22:22, 17 January 2007 (UTC)
- I have recovered the section titled `Taylor series for several variables' from the edition of 2006-09-20, 17:53. Please check for possible inaccuracies. —Pouya D. Tafti 10:37, 14 March 2007 (UTC)
The notation used in the multivariate series, e.g. f_xy, is not defined. Ma-Ma-Max Headroom (talk) 08:46, 9 February 2008 (UTC)
Can someone please check that the formula given for the multivariate Taylor series is correct? It doesn't agree with the one given on the Wolfram MathWorld article. Specifically, should the product of factorials n_1!⋯n_d! in the denominator of the right-hand side of the first equation not be (n_1+⋯+n_d)!? As an example, consider the Taylor series for centered around . As it is, the formula would imply that the Taylor series would be instead of . Note that the two-variable example given in this same section produces the second (correct, I believe) series, contradicting the general formula at the start of the section. Ben E. Whitney 19:14, 23 July 2015 (UTC)
- It's correct in both. Using your function and the conventions of the article, we have
- Also,
- as required. Sławomir Biały (talk) 21:08, 23 July 2015 (UTC)
- Oh, I see! I think I'd mentally added a factor for the different ways the mixed derivatives could be ordered without realizing it. Should have written it out. Thank you! Ben E. Whitney 15:56, 24 July 2015 (UTC)
- No worries. This seems to be a perennial point of misunderstanding. It might be worthwhile trying to clarify this in the article. Sławomir Biały (talk) 16:02, 24 July 2015 (UTC)
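Since this comes up repeatedly, here is one concrete illustration of why the product of factorials is the right normalization (a standard example, not necessarily the one discussed above). Take f(x, y) = e^{x+y}, so every partial derivative at the origin equals 1. Then the article's formula gives
    \sum_{\alpha_{1},\alpha_{2}\ge 0}\frac{x^{\alpha_{1}}y^{\alpha_{2}}}{\alpha_{1}!\,\alpha_{2}!}\,\partial_{x}^{\alpha_{1}}\partial_{y}^{\alpha_{2}}f(0,0) = \Bigl(\sum_{\alpha_{1}\ge 0}\frac{x^{\alpha_{1}}}{\alpha_{1}!}\Bigr)\Bigl(\sum_{\alpha_{2}\ge 0}\frac{y^{\alpha_{2}}}{\alpha_{2}!}\Bigr) = e^{x}e^{y} = e^{x+y},
as it should, whereas replacing α₁!α₂! by (α₁+α₂)! would drop the multinomial coefficients coming from (x+y)^n/n! and produce a different (wrong) series.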
Complete Sets
The Taylor series predates the ideas of complete basis sets, the loose ends of which were not fixed until 1905 with the square integrability condition. The TS is merely a statement of the completeness of the polynomial, where each term of the sum is regarded as an element of a complete (but not necessarily orthonormal) set. If a function f(x) is written as the sum of an orthonormal polynomial set, the nth derivative of f appearing in the TS simply extracts the nth coefficient of the orthonormal sum.220.240.250.7 (talk) 09:39, 27 June 2016 (UTC)
- This is not true. The Weierstrass approximation theorem comes closest to what you are articulating here, which states that continuous functions on compact sets can be uniformly approximated by polynomials, but the polynomials need not be truncations of the Taylor series. Indeed, it is easy to construct examples of functions whose Taylor series does not converge to the function, although these functions will be approximated by other sequences of polynomials. The question of approximating in L^2 is qualitatively very different. There are families of orthogonal polynomials that give series expansions of functions, but in general there is no relationship between the series expansions that one gets in this way and the coefficients of the Taylor series. Sławomir Biały 00:36, 28 June 2016 (UTC)
Bounding the error
I think "Taylor series" wikipedia page should have a section or link to another wikipedia page, explaining how to bound the error made by the a Taylor polynomial of degree n. — Preceding unsigned comment added by Mredigonda (talk • contribs) 14:08, 19 February 2017 (UTC)
- That is the subject of a different article, Taylor's theorem. You have to follow the link to get to that from the relevant section(s) of this article. Sławomir Biały (talk) 14:25, 19 February 2017 (UTC)
Missing boundary definition
I think it's important to define the initial/boundary values for the generalized binomial coefficients given in the Binomial series section. The current definition only works if n ≥ 1. What happens at n = 0? This matters because the summation in the power series starts from n = 0. Aditya 17:15, 25 April 2017 (UTC) — Preceding unsigned comment added by Aditya8795 (talk • contribs)
- At n=0 we have an empty product, with value 1. There is no problem. --JBL (talk) 01:10, 26 April 2017 (UTC)
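Spelled out, using the usual definition of the generalized binomial coefficient (standard convention, not a quotation from the article):
    \binom{\alpha}{n} = \prod_{k=1}^{n}\frac{\alpha-k+1}{k} = \frac{\alpha(\alpha-1)\cdots(\alpha-n+1)}{n!}, \qquad \binom{\alpha}{0} = 1 \ \text{(empty product)},
so the n = 0 term of the binomial series is just 1, and no extra boundary definition is needed.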
High-dimension case
I searched for "Taylor expansion" and landed on this page, but was surprised not to see the higher-dimension cases in the article at all. This is just an idea for improvement - I almost certainly don't have time to update it myself. Banedon (talk) 02:26, 7 August 2017 (UTC)
- There's the section "Taylor series in several variables", which seems to cover that. Or am I missing something myself? --Deacon Vorbis (talk) 03:31, 7 August 2017 (UTC)
- No I think you're correct. My mistake. Banedon (talk) 03:50, 7 August 2017 (UTC)
- I renamed the section since it would've avoided the above. I'm OK with someone reverting it, but would like to see another suggestion to avoid the above in that case. Banedon (talk) 07:11, 7 August 2017 (UTC)
Why is f(x)= forbidden in the initial definition?
I find it unnecessarily difficult to read this way. The comments in the article are very clear that "Any changes to the following formula, without first obtaining consensus on the discussion page will be reverted. In particular, *DO NOT* add f(x)= here.", but there is no explanation on the discussion page on why such a logical edit is so specifically not allowed. So can someone fill me in. — Preceding unsigned comment added by 169.229.204.158 (talk) 15:59, 2 April 2019 (UTC)
- Because the object being defined there is the series, and it is not true in general that the Taylor series is equal to the function. (For example, the series might not converge.) --JBL (talk) 16:01, 2 April 2019 (UTC)
Logarithm notation
@Smaines: I've again reverted your change from "log" to "ln". "Standard" doesn't mean that everyone uses it, just that it's in wide use. A short explanatory statement (for those unfamiliar with it) clarifying that the natural log is meant is common and is hardly a reason to change. MOS:STYLERET applies here. –Deacon Vorbis (carbon • videos) 13:26, 29 July 2019 (UTC)
- Part of the issue is "standard for whom?" Speaking from my own experience in the American education system: I think that "log" is used for the common logarithm fairly consistently through about the level of calculus, with "ln" being used for the natural logarithm. In mathematics courses at a higher level, it becomes exceptionally uncommon to use a common logarithm for anything, and usually the natural logarithm is denoted "log". In computer science one sometimes sees "log" being used for the base-2 logarithm. I don't know what is the situation in, say, engineering courses. I don't see any problem with the way it's currently written, and I don't see any problem with replacing "log" with "ln", either. However, if we do use "ln" then I would want to still include the definition of the notation (i.e., "where ln denotes the natural logarithm") on first use, and the swap should be made consistently throughout the article (Smaines's edit missed many lower instances of log). --JBL (talk) 13:56, 29 July 2019 (UTC)
- Fwiw, same on this side of the Atlantic: unqualified log is almost always 10-based, and ln is e-based. I have seldomly come across log10 notation. But it's no big deal. - DVdm (talk) 14:04, 29 July 2019 (UTC)
- Thank you both for the reality check, I was feeling a bit gaslit! I just finished saying very much the same things on Deacon Vorbis's talk page, then moved them here.
- Simply put, I do not believe these reverts reflect a defensible understanding of how mathematics is discussed in writing, and the passage as reverted is harder to understand as a result. Conventionally, the function log x, written without a base, by default is either log_10 x or log_b x where the value of b does not matter. Also, ln x is always log_e x.
- Moreover, in the math under discussion, only ln or log_e is valid (as opposed to log_b x with b ≠ e).
- The break with convention is shown in that your version requires explanation like, "...where log denotes the natural logarithm", rather than the more natural ln(1 − x) or log_e(1 − x).
- I will revert this again. It just does not read correctly as is. I agree that it should be continued throughout the article.
- -SM 04:48, 30 July 2019 (UTC)
- Sorry, but I feel like I must have communicated poorly, as two readers seem to have taken my comment as endorsing Smaines's edit. Let me begin by restating my comments about that:
- If we do use "ln" then I would want to still include the definition of the notation (i.e., "where ln denotes the natural logarithm") on first use, and the use should be consistent throughout the article (SM's edit missed many instances of "log" being used for the natural logarithm in the article). For those reasons I have reverted again, and would like to see a clearer consensus.
- Second, to restate my view about the use of log vs ln: it is clearly context-dependent, with different uses in school mathematics, among professional mathematicians, and perhaps among professional users of mathematics in other contexts (CS, engineering, etc.). I think it would be worthwhile to discuss whose conventions are most appropriate for this article, but I have not endorsed any particular choice. (And in particular, I think that both DVdm and SM are wrong to believe that the conventions they are most familiar with are universal.) --JBL (talk) 11:12, 30 July 2019 (UTC)
- Sometimes, threading is hard, and I don't know if I'm replying to any comment specifically here, so whatever. Just to elaborate a bit on my view here: log, ln, and log_e are all correct (in practice, I don't think the last of these is used very much, except for emphasis). Anyway, there are articles that use log, and there are articles that use ln. The general principle, per MOS:STYLERET, is that stuff like this generally shouldn't be changed unless there's some good, specific reason to do so. And the sense that I've gotten is that personal preference isn't enough. If you come from an engineering background or an introductory Calculus course, maybe log for the natural logarithm instead of the base-10 logarithm would look a little weird. But if you've come here after a real or complex analysis course, seeing ln again might seem a little weird, too. When notation can vary (as it can here), it's customary to add an explanatory note on the first one or two uses. That's not a reason to disfavor the notation itself. –Deacon Vorbis (carbon • videos) 14:12, 30 July 2019 (UTC)
- So I had a bit of a think about this on the train, and here are some concrete thoughts:
- The current situation is obviously defensible. If all instances of "log" were replaced with "ln", that would also be obviously defensible.
- This article should be written for a broad audience, including (at the low end) students in secondary school mathematics, and (at the high end) professional practitioners of mathematical sciences.
- Professionals in the mathematical sciences can be expected to have more flexibility about notations than students.
- The notation "log" is potentially ambiguous, while the notation "ln" seems to only ever have one meaning.
- A quick survey of a handful of calculus textbooks lying around the department suggests that the majority (but not all) use "ln" for the natural logarithm, particularly newer books.
- Based on this, I support making the blanket change from "log" to "ln" to represent the natural logarithm in this article. Another way to put the analysis is this: if the article were written with "ln", one can be pretty sure that attempts to replace it with "log" would be very rare, but changing "log" to "ln" is moderately common. I also note an earlier talk-page discussion that, while brief, also seems to support the change. --JBL (talk) 15:16, 30 July 2019 (UTC)
- This is mainly a mathematics article, so I would keep the notation log, as it is customary among mathematicians. In the text it is also named natural logarithm, which leaves no room for ambiguity. pma 12:47, 7 August 2019 (UTC)
- So I had a bit of a think about this on the train, and here are some concrete thoughts:
In my opinion ln is much clearer than log; for example, scientific calculators ALWAYS use ln for the natural logarithm and log for the common logarithm (i.e. with base 10). It is true that in mathematics courses both notations are common, but the page is not written for mathematicians. Taylor series are used as well in chemistry, physics, etc., where a clear distinction between the natural and common logarithm is needed, as they are both in common use. If somebody scrolls the page searching for the natural logarithm series without paying attention to the text, the ln notation is the best one. Moreover, "ln" means exactly "natural logarithm", so it cannot be ambiguous. FilBenLeafBoy (Let's Talk!) 11:20, 10 November 2019 (UTC)
First sentence
I rewrote the first sentence as
- "In mathematics, a Taylor series is an expression of a function as an infinite series whose terms are expressed in terms of the values of the function's derivatives at a single point."
This has been reverted to
- "In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point."
with the edit summary
- "I do not understand what the advantage is supposed to be of changing correct but slightly less formal language to more technical language)"
I agree that the replacement of "infinite sum" by "infinite series" makes the sentence slightly more technical. So, I am fine with "infinite sum". On the other hand, I disagree with the use of "representation" and "calculated", which seem improper here. "Calculated from" implies a computation that is not present here. So "expressed in terms of" is more accurate and, in my opinion, not more technical. Similarly, a "representation" is a way of writing. For example, a numeral is a representation of a number. Here, we are not faced with this meaning of "representation", as the Taylor series is not always equal to the function. To take these objections into account, I suggest:
In mathematics, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point.
This is more accurate, and explains implicitly why "representation" may be confusing. D.Lazard (talk) 14:21, 7 May 2020 (UTC)
- Looks perfect. - DVdm (talk) 14:25, 7 May 2020 (UTC)
- As the editor who reverted my edit has been on WP since the last edit in this thread, it seems that there is a consensus on the latter formulation. So I'll move it to the article. D.Lazard (talk) 09:20, 10 May 2020 (UTC)
Examples Section
I have read and re-read the examples section a few times and would like to edit it, but would first like to understand the logic behind its original intention. It throws statements into the narrative that have no bearing on the "examples", and arguably the narrative is a mixture of "example" and "tutorial" with no real coherent structure and presentation.
For example, "The Taylor series for any polynomial is the polynomial itself." This has no place in an example. It should be part of the description
The Maclaurin series for 1/1-x is the geometric series - once again, this is obvious as this is the very core and purpose of the article - so why is this statement being made here If it is to introduce the reader to an example of how to apply the Taylor series to 1/1-x then it should say that -
Then we get to ...so the Taylor series for 1/x a = 1 is ....this is introduced out of the blue and its relationship to 1/(1-x) is what?
I could go on but the examples section should be re-written in a logical manner — Preceding unsigned comment added by Adel314 (talk • contribs) 12:36, 29 September 2020 (UTC)