Let $p_{n}$ be the polynomial of degree at most $n$ that interpolates $f$ at the nodes $x_{i}$ for $i=0,1,2,\ldots,n$, and let $p_{n+1}$ be the polynomial of degree at most $n+1$ that interpolates $f$ at the nodes $x_{i}$ for $i=0,1,2,\ldots,n,n+1$. Then $p_{n+1}$ is given by:

$$p_{n+1}(x) = p_{n}(x) + a_{n+1}w_{n}(x)$$

where

$$w_{n}(x) := \prod_{i=0}^{n}(x-x_{i}),$$

also known as the Newton basis, and

$$a_{n+1} := \frac{f(x_{n+1}) - p_{n}(x_{n+1})}{w_{n}(x_{n+1})}.$$
Proof: For the nodes with $i=0,1,2,\ldots,n$:
$$p_{n+1}(x_{i}) = p_{n}(x_{i}) + a_{n+1}\prod_{j=0}^{n}(x_{i}-x_{j}) = p_{n}(x_{i}),$$

since the product contains the factor $x_{i}-x_{i}=0$;
and when $i=n+1$:
$$p_{n+1}(x_{n+1}) = p_{n}(x_{n+1}) + \frac{f(x_{n+1}) - p_{n}(x_{n+1})}{w_{n}(x_{n+1})}\,w_{n}(x_{n+1}) = f(x_{n+1}).$$
By the uniqueness of the interpolating polynomial of degree at most $n+1$,

$$p_{n+1}(x) = p_{n}(x) + a_{n+1}w_{n}(x)$$
is the required polynomial interpolation. The function can thus be expressed as:

$$p_{n}(x) = a_{0} + a_{1}(x-x_{0}) + a_{2}(x-x_{0})(x-x_{1}) + \cdots + a_{n}(x-x_{0})\cdots(x-x_{n-1}).$$
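The update rule above translates directly into code. The sketch below (the names `eval_newton` and `newton_coeffs` are illustrative, not from the article) appends one node at a time, computing each new coefficient as $a_{n+1} = \bigl(f(x_{n+1}) - p_{n}(x_{n+1})\bigr)/w_{n}(x_{n+1})$:

```python
# Incrementally build the Newton-form interpolant, one node at a time,
# using a_{n+1} = (f(x_{n+1}) - p_n(x_{n+1})) / w_n(x_{n+1}).

def eval_newton(coeffs, nodes, x):
    """Evaluate a_0 + a_1(x-x_0) + ... + a_k(x-x_0)...(x-x_{k-1})."""
    total, basis = 0.0, 1.0
    for k, a in enumerate(coeffs):
        total += a * basis
        basis *= x - nodes[k]          # extend to the next Newton basis term
    return total

def newton_coeffs(xs, ys):
    """Coefficients a_0, ..., a_n, adding one interpolation node per step."""
    coeffs = []
    for n, (xn, yn) in enumerate(zip(xs, ys)):
        if not coeffs:
            coeffs.append(yn)          # a_0 = y_0
            continue
        w = 1.0
        for xi in xs[:n]:              # w_{n-1}(x_n) = prod_{i<n} (x_n - x_i)
            w *= xn - xi
        pn = eval_newton(coeffs, xs, xn)
        coeffs.append((yn - pn) / w)   # a_n from the update formula
    return coeffs
```

For example, interpolating $f(x)=x^{2}$ at the nodes $0,1,2$ yields the coefficients $[0,1,1]$, and the resulting polynomial reproduces $f$ exactly.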
Polynomial coefficients
To find the $a_{i}$, we solve the lower triangular system obtained by writing $p_{n}(x_{i}) = f(x_{i}) = y_{i}$, from the equation above, in matrix form:

$$\begin{bmatrix}1 & & \ldots & & 0\\ 1 & x_{1}-x_{0} & & & \\ 1 & x_{2}-x_{0} & (x_{2}-x_{0})(x_{2}-x_{1}) & & \vdots\\ \vdots & \vdots & & \ddots & \\ 1 & x_{n}-x_{0} & \ldots & \ldots & \prod_{j=0}^{n-1}(x_{n}-x_{j})\end{bmatrix}\begin{bmatrix}a_{0}\\ \vdots\\ a_{n}\end{bmatrix} = \begin{bmatrix}y_{0}\\ \vdots\\ y_{n}\end{bmatrix}$$
The coefficients are given by $a_{j} := [y_{0},\ldots,y_{j}]$, where $[y_{0},\ldots,y_{j}]$ is the notation for divided differences. Thus, Newton polynomials provide a polynomial interpolation formula for $n+1$ points.[1]
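In practice the coefficients are rarely obtained by solving the triangular system directly; the divided-difference table computes the same $a_{j}=[y_{0},\ldots,y_{j}]$ in $O(n^{2})$ operations. A minimal sketch (the function name is illustrative):

```python
# Divided-difference table computed in place: after the loop, a[j] holds
# [y_0, ..., y_j], the j-th Newton coefficient.

def divided_differences(xs, ys):
    """Return [y_0], [y_0,y_1], ..., [y_0,...,y_n]."""
    a = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Walk backwards so a[i-1] still holds the previous level's value.
        for i in range(n - 1, j - 1, -1):
            a[i] = (a[i] - a[i - 1]) / (xs[i] - xs[i - j])
    return a
```

On $f(x)=x^{2}$ at the nodes $0,1,2$ this reproduces the coefficients $[0,1,1]$ from the incremental construction above.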
Proof
The first few coefficients can be calculated from the system of equations; the form of the $n$-th coefficient is then assumed and proved by mathematical induction.
$$\begin{aligned}a_{0} &= y_{0} = [y_{0}]\\ a_{1} &= \frac{y_{1}-y_{0}}{x_{1}-x_{0}} = [y_{0},y_{1}]\\ &\;\;\vdots\\ a_{n} &= [y_{0},\cdots,y_{n}]\quad\text{(assumed)}\end{aligned}$$
Let $Q$ be the polynomial interpolating the points $(x_{1},y_{1}),\ldots,(x_{n},y_{n})$. Adding $(x_{0},y_{0})$ to the polynomial $Q$:
$$Q(x) + a'_{n}(x-x_{1})\cdots(x-x_{n}) = P_{n}(x),$$
where $a'_{n}(x_{0}-x_{1})\cdots(x_{0}-x_{n}) = y_{0} - Q(x_{0})$. By the uniqueness of the interpolating polynomial of the points $(x_{0},y_{0}),\ldots,(x_{n},y_{n})$, equating the coefficients of $x^{n}$ gives $a'_{n} = [y_{0},\ldots,y_{n}]$.
Hence the polynomial can be expressed as:

$$P_{n}(x) = Q(x) + [y_{0},\ldots,y_{n}](x-x_{1})\cdots(x-x_{n}).$$
Adding $(x_{n+1},y_{n+1})$ to the polynomial $Q$, it has to satisfy

$$[y_{1},\ldots,y_{n+1}](x_{n+1}-x_{1})\cdots(x_{n+1}-x_{n}) = y_{n+1} - Q(x_{n+1}),$$

where the formula for $a_{n}$ and the interpolating polynomial are used.
The $a_{n+1}$ term for the polynomial $P_{n+1}$ can be found by calculating:
$$\begin{aligned}&[y_{0},\ldots,y_{n+1}](x_{n+1}-x_{0})\cdots(x_{n+1}-x_{n})\\ &= \frac{[y_{1},\ldots,y_{n+1}] - [y_{0},\ldots,y_{n}]}{x_{n+1}-x_{0}}(x_{n+1}-x_{0})\cdots(x_{n+1}-x_{n})\\ &= \left([y_{1},\ldots,y_{n+1}] - [y_{0},\ldots,y_{n}]\right)(x_{n+1}-x_{1})\cdots(x_{n+1}-x_{n})\\ &= [y_{1},\ldots,y_{n+1}](x_{n+1}-x_{1})\cdots(x_{n+1}-x_{n}) - [y_{0},\ldots,y_{n}](x_{n+1}-x_{1})\cdots(x_{n+1}-x_{n})\\ &= (y_{n+1} - Q(x_{n+1})) - [y_{0},\ldots,y_{n}](x_{n+1}-x_{1})\cdots(x_{n+1}-x_{n})\\ &= y_{n+1} - \left(Q(x_{n+1}) + [y_{0},\ldots,y_{n}](x_{n+1}-x_{1})\cdots(x_{n+1}-x_{n})\right)\\ &= y_{n+1} - P_{n}(x_{n+1}).\end{aligned}$$
This implies that

$$a_{n+1} = \frac{y_{n+1} - P_{n}(x_{n+1})}{w_{n}(x_{n+1})} = [y_{0},\ldots,y_{n+1}],$$

and the claim follows by the principle of mathematical induction.
The Newton polynomial can be expressed in a simplified form when the nodes $x_{0},x_{1},\dots,x_{k}$ are arranged consecutively with equal spacing.
If $x_{0},x_{1},\dots,x_{k}$ are consecutively arranged and equally spaced with $x_{i} = x_{0} + ih$ for $i = 0,1,\ldots,k$, and the variable $x$ is expressed as $x = x_{0} + sh$, then the difference $x - x_{i}$ can be written as $(s-i)h$. So the Newton polynomial becomes
$$\begin{aligned}N(x) &= [y_{0}] + [y_{0},y_{1}]sh + \cdots + [y_{0},\ldots,y_{k}]s(s-1)\cdots(s-k+1)h^{k}\\ &= \sum_{i=0}^{k}s(s-1)\cdots(s-i+1)h^{i}[y_{0},\ldots,y_{i}]\\ &= \sum_{i=0}^{k}\binom{s}{i}i!\,h^{i}[y_{0},\ldots,y_{i}].\end{aligned}$$
The relationship between divided differences and forward differences is given by:[2]

$$[y_{j},y_{j+1},\ldots,y_{j+n}] = \frac{1}{n!\,h^{n}}\Delta^{(n)}y_{j}.$$
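This relation can be spot-checked numerically. The sketch below (the helper name `divided_difference` is illustrative) compares the third divided difference of $f(x)=x^{3}$ on equally spaced nodes against $\Delta^{3}y_{0}/(3!\,h^{3})$; both equal the leading coefficient $1$:

```python
# Check [y_0,...,y_3] == Delta^3 y_0 / (3! h^3) for f(x) = x^3 on a
# uniform grid with spacing h.
import math

def divided_difference(xs, ys):
    """Recursive divided difference [y_0, ..., y_n]."""
    if len(xs) == 1:
        return ys[0]
    left = divided_difference(xs[:-1], ys[:-1])
    right = divided_difference(xs[1:], ys[1:])
    return (right - left) / (xs[-1] - xs[0])

h = 0.5
xs = [1.0 + i * h for i in range(4)]
ys = [x ** 3 for x in xs]
delta3 = ys[3] - 3 * ys[2] + 3 * ys[1] - ys[0]   # forward difference Delta^3 y_0
lhs = divided_difference(xs, ys)
rhs = delta3 / (math.factorial(3) * h ** 3)
```

Both sides come out to $1.0$ up to rounding, the leading coefficient of $x^{3}$.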
Taking $y_{i} = f(x_{i})$, if the representation of $x$ in the previous section is instead taken to be $x = x_{j} + sh$, the Newton forward interpolation formula is expressed as:
$$f(x) \approx N(x) = N(x_{j} + sh) = \sum_{i=0}^{k}\binom{s}{i}\Delta^{(i)}f(x_{j}),$$
which is the interpolation of all points after $x_{j}$. It is expanded as:
$$f(x_{j}+sh) = f(x_{j}) + \frac{s}{1!}\Delta f(x_{j}) + \frac{s(s-1)}{2!}\Delta^{2}f(x_{j}) + \frac{s(s-1)(s-2)}{3!}\Delta^{3}f(x_{j}) + \frac{s(s-1)(s-2)(s-3)}{4!}\Delta^{4}f(x_{j}) + \cdots$$
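A direct implementation of the forward formula, anchored at the first node (so $j=0$), accumulates the binomial coefficients $\binom{s}{i}$ incrementally. A sketch with illustrative names:

```python
# Newton forward-difference interpolation on equally spaced nodes:
# f(x_0 + s h) ~ sum_i C(s, i) * Delta^i f(x_0).

def forward_interpolate(ys, x0, h, x):
    """Evaluate the forward formula anchored at the first node."""
    s = (x - x0) / h
    # Build the leading forward differences Delta^i y_0.
    diffs = list(ys)
    deltas = [diffs[0]]
    for _ in range(1, len(ys)):
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
        deltas.append(diffs[0])
    total, binom = 0.0, 1.0          # binom holds C(s, i)
    for i, d in enumerate(deltas):
        total += binom * d
        binom *= (s - i) / (i + 1)   # C(s, i+1) = C(s, i) * (s-i)/(i+1)
    return total
```

Interpolating $f(x)=x^{2}$ on the nodes $0,1,2,3$ and evaluating at $x=1.5$ recovers $2.25$ exactly, since the data is polynomial.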
If the nodes are reordered as $x_{k},x_{k-1},\dots,x_{0}$, the Newton polynomial becomes
$$N(x) = [y_{k}] + [y_{k},y_{k-1}](x-x_{k}) + \cdots + [y_{k},\ldots,y_{0}](x-x_{k})(x-x_{k-1})\cdots(x-x_{1}).$$
If $x_{k},x_{k-1},\dots,x_{0}$ are equally spaced with $x_{i} = x_{k} - (k-i)h$ for $i = 0,1,\ldots,k$ and $x = x_{k} + sh$, then
$$\begin{aligned}N(x) &= [y_{k}] + [y_{k},y_{k-1}]sh + \cdots + [y_{k},\ldots,y_{0}]s(s+1)\cdots(s+k-1)h^{k}\\ &= \sum_{i=0}^{k}(-1)^{i}\binom{-s}{i}i!\,h^{i}[y_{k},\ldots,y_{k-i}].\end{aligned}$$
The relationship between divided differences and backward differences is given by:

$$[y_{j},y_{j-1},\ldots,y_{j-n}] = \frac{1}{n!\,h^{n}}\nabla^{(n)}y_{j}.$$
Taking $y_{i} = f(x_{i})$, if the representation of $x$ in the previous sections is instead taken to be $x = x_{j} + sh$, the Newton backward interpolation formula is expressed as:
$$f(x) \approx N(x) = N(x_{j} + sh) = \sum_{i=0}^{k}(-1)^{i}\binom{-s}{i}\nabla^{(i)}f(x_{j}),$$
which is the interpolation of all points before $x_{j}$. It is expanded as:
$$f(x_{j}+sh) = f(x_{j}) + \frac{s}{1!}\nabla f(x_{j}) + \frac{s(s+1)}{2!}\nabla^{2}f(x_{j}) + \frac{s(s+1)(s+2)}{3!}\nabla^{3}f(x_{j}) + \frac{s(s+1)(s+2)(s+3)}{4!}\nabla^{4}f(x_{j}) + \cdots$$
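The backward formula can be sketched the same way, anchored at the last node. Note that $(-1)^{i}\binom{-s}{i} = s(s+1)\cdots(s+i-1)/i!$, which the code accumulates incrementally (names are illustrative):

```python
# Newton backward-difference interpolation anchored at the LAST node:
# f(x_k + s h) ~ sum_i (-1)^i C(-s, i) * nabla^i f(x_k).

def backward_interpolate(ys, xk, h, x):
    """Evaluate the backward formula anchored at the last node."""
    s = (x - xk) / h
    # nabla^i y_k is the last entry of each difference level.
    diffs = list(ys)
    nablas = [diffs[-1]]
    for _ in range(1, len(ys)):
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
        nablas.append(diffs[-1])
    total, coef = 0.0, 1.0           # coef = s(s+1)...(s+i-1) / i!
    for i, d in enumerate(nablas):
        total += coef * d
        coef *= (s + i) / (i + 1)
    return total
```

On the same data ($f(x)=x^{2}$ at nodes $0,1,2,3$, evaluated at $x=1.5$, so $s=-1.5$) this agrees with the forward formula, returning $2.25$.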
A lozenge diagram is a diagram used to describe the different interpolation formulas that can be constructed for a given data set. A line starting at the left edge and tracing across the diagram to the right represents an interpolation formula if the following rules are followed:[3]
Figure: Lozenge diagram, a geometric representation of polynomial interpolations.
Left-to-right steps indicate addition, whereas right-to-left steps indicate subtraction.
If the slope of a step is positive, the term to be used is the product of the difference and the factor immediately below it. If the slope of a step is negative, the term to be used is the product of the difference and the factor immediately above it.
If a step is horizontal and passes through a factor, use the product of the factor and the average of the two terms immediately above and below it. If a step is horizontal and passes through a difference, use the product of the difference and the average of the two terms immediately above and below it.
The factors are expressed using the formula:

$$C(u+k,n) = \frac{(u+k)(u+k-1)\cdots(u+k-n+1)}{n!}$$
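The factor is a falling factorial divided by $n!$, and is defined for non-integer arguments as well. A direct translation (the function name is illustrative):

```python
# Lozenge-diagram factor C(t, n) = t (t-1) ... (t-n+1) / n!, with C(t, 0) = 1.
from math import factorial

def lozenge_factor(t, n):
    """Generalized binomial coefficient C(t, n) for real t."""
    prod = 1.0
    for i in range(n):
        prod *= t - i
    return prod / factorial(n)
```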
Proof of equivalence
If a path goes from $\Delta^{n-1}y_{s}$ to $\Delta^{n+1}y_{s-1}$, it can connect through three intermediate steps: (a) through $\Delta^{n}y_{s-1}$, (b) through the factor $C(u-s,n)$, or (c) through $\Delta^{n}y_{s}$. Proving that these three two-step paths are equivalent shows that all paths with the same starting and ending points can be morphed into one another, and hence represent the same formula.
Path (a):

$$C(u-s,n)\Delta^{n}y_{s-1} + C(u-s+1,n+1)\Delta^{n+1}y_{s-1}$$
Path (b):

$$C(u-s,n)\Delta^{n}y_{s} + C(u-s,n+1)\Delta^{n+1}y_{s-1}$$
Path (c):

$$C(u-s,n)\frac{\Delta^{n}y_{s-1}+\Delta^{n}y_{s}}{2} + \frac{C(u-s+1,n+1)+C(u-s,n+1)}{2}\Delta^{n+1}y_{s-1}$$
Subtracting the contributions of paths (a) and (b):
$$\begin{aligned}\text{Path (a)} - \text{Path (b)} &= C(u-s,n)\left(\Delta^{n}y_{s-1}-\Delta^{n}y_{s}\right) + \left(C(u-s+1,n+1)-C(u-s,n+1)\right)\Delta^{n+1}y_{s-1}\\ &= -C(u-s,n)\Delta^{n+1}y_{s-1} + C(u-s,n)\frac{(u-s+1)-(u-s-n)}{n+1}\Delta^{n+1}y_{s-1}\\ &= C(u-s,n)\left(-\Delta^{n+1}y_{s-1}+\Delta^{n+1}y_{s-1}\right) = 0\end{aligned}$$
Thus, the contributions of paths (a) and (b) are the same. Since path (c) is the average of paths (a) and (b), it contributes the same function to the polynomial as well. Hence the equivalence of paths with the same starting and ending points is shown. To check that the paths can be shifted to different values in the leftmost corner, it suffices to consider two-step paths: (a) from $y_{s+1}$ to $y_{s}$ through $\Delta y_{s}$, (b) from the factor between $y_{s+1}$ and $y_{s}$, to $y_{s}$ through $\Delta y_{s}$, or (c) starting from $y_{s}$.
Path (a):

$$y_{s+1} + C(u-s-1,1)\Delta y_{s} - C(u-s,1)\Delta y_{s}$$
Path (b):

$$\frac{y_{s+1}+y_{s}}{2} + \frac{C(u-s-1,1)+C(u-s,1)}{2}\Delta y_{s} - C(u-s,1)\Delta y_{s}$$
Path (c):

$$y_{s}$$
Since $\Delta y_{s} = y_{s+1} - y_{s}$, substituting into the above expressions shows that they all reduce to $y_{s}$ and are hence equivalent. These paths can therefore be morphed to start from the leftmost corner and end at a common point.[3]
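The two-step path equivalence above can also be spot-checked numerically, using the identity $C(m+1,n+1) - C(m,n+1) = C(m,n)$ for real $m$. A self-contained sketch with arbitrary illustrative values:

```python
# Numerical check that paths (a) and (b) contribute identically.
from math import factorial

def C(t, n):
    """Generalized binomial coefficient t (t-1) ... (t-n+1) / n!."""
    prod = 1.0
    for i in range(n):
        prod *= t - i
    return prod / factorial(n)

# Arbitrary values for m = u - s, the order n, and the two differences
# Delta^n y_{s-1} and Delta^n y_s (hypothetical data).
m, n = 0.7, 3
d_n_sm1, d_n_s = 2.0, 5.0
d_np1_sm1 = d_n_s - d_n_sm1          # Delta^{n+1} y_{s-1}

path_a = C(m, n) * d_n_sm1 + C(m + 1, n + 1) * d_np1_sm1
path_b = C(m, n) * d_n_s + C(m, n + 1) * d_np1_sm1
```

The two totals agree to rounding error, mirroring the algebraic cancellation shown above.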
Taking a negative-slope transversal from $y_{0}$ to $\Delta^{n}y_{0}$ gives the interpolation formula of all the $n+1$ consecutively arranged points, equivalent to Newton's forward interpolation formula:
$$\begin{aligned}y(s) &= y_{0} + C(s,1)\Delta y_{0} + C(s,2)\Delta^{2}y_{0} + C(s,3)\Delta^{3}y_{0} + \cdots\\ &= y_{0} + s\Delta y_{0} + \frac{s(s-1)}{2}\Delta^{2}y_{0} + \frac{s(s-1)(s-2)}{3!}\Delta^{3}y_{0} + \frac{s(s-1)(s-2)(s-3)}{4!}\Delta^{4}y_{0} + \cdots\end{aligned}$$
whereas taking a positive-slope transversal from $y_{n}$ to $\nabla^{n}y_{n} = \Delta^{n}y_{0}$ gives the interpolation formula of all the $n+1$ consecutively arranged points, equivalent to Newton's backward interpolation formula:
$$\begin{aligned}y(u) &= y_{k} + C(u-k,1)\Delta y_{k-1} + C(u-k+1,2)\Delta^{2}y_{k-2} + C(u-k+2,3)\Delta^{3}y_{k-3} + \cdots\\ &= y_{k} + (u-k)\Delta y_{k-1} + \frac{(u-k+1)(u-k)}{2}\Delta^{2}y_{k-2} + \frac{(u-k+2)(u-k+1)(u-k)}{3!}\Delta^{3}y_{k-3} + \cdots\\ y(k+s) &= y_{k} + s\nabla y_{k} + \frac{(s+1)s}{2}\nabla^{2}y_{k} + \frac{(s+2)(s+1)s}{3!}\nabla^{3}y_{k} + \frac{(s+3)(s+2)(s+1)s}{4!}\nabla^{4}y_{k} + \cdots\end{aligned}$$
where $s = u - k$ corresponds to the variable $s$ introduced in the Newton interpolation above.
Taking a zigzag line towards the right starting from $y_{0}$ with negative slope, we get the Gauss forward formula:
$$y(u) = y_{0} + u\Delta y_{0} + \frac{u(u-1)}{2}\Delta^{2}y_{-1} + \frac{(u+1)u(u-1)}{3!}\Delta^{3}y_{-1} + \frac{(u+1)u(u-1)(u-2)}{4!}\Delta^{4}y_{-2} + \cdots$$
whereas starting from $y_{0}$ with positive slope, we get the Gauss backward formula:
$$y(u) = y_{0} + u\Delta y_{-1} + \frac{(u+1)u}{2}\Delta^{2}y_{-1} + \frac{(u+1)u(u-1)}{3!}\Delta^{3}y_{-2} + \frac{(u+2)(u+1)u(u-1)}{4!}\Delta^{4}y_{-2} + \cdots$$
By taking a horizontal path towards the right starting from $y_{0}$, we get the Stirling formula:
$$\begin{aligned}y(u) &= y_{0} + u\frac{\Delta y_{0}+\Delta y_{-1}}{2} + \frac{C(u+1,2)+C(u,2)}{2}\Delta^{2}y_{-1} + C(u+1,3)\frac{\Delta^{3}y_{-2}+\Delta^{3}y_{-1}}{2} + \cdots\\ &= y_{0} + u\frac{\Delta y_{0}+\Delta y_{-1}}{2} + \frac{u^{2}}{2}\Delta^{2}y_{-1} + \frac{u(u^{2}-1)}{3!}\frac{\Delta^{3}y_{-2}+\Delta^{3}y_{-1}}{2} + \frac{u^{2}(u^{2}-1)}{4!}\Delta^{4}y_{-2} + \cdots\end{aligned}$$
The Stirling formula is the average of the Gauss forward and Gauss backward formulas.
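This averaging relation can be checked numerically. The sketch below (illustrative names; the centered index convention `ys[i + 2]` $= y_{i}$ for $i=-2,\ldots,2$ is an assumption of the example) evaluates both Gauss formulas on $f(x)=x^{3}$ sampled at $-2,\ldots,2$:

```python
# Gauss forward and backward formulas on 5 centered samples y_{-2}, ..., y_2.
# Both interpolate the same data, so they agree, and so does their average
# (Stirling's formula).

def fwd_diff(ys, order, i, offset=2):
    """Delta^order y_i for centered index i (offset maps i to a list index)."""
    if order == 0:
        return ys[i + offset]
    return fwd_diff(ys, order - 1, i + 1, offset) - fwd_diff(ys, order - 1, i, offset)

def gauss_forward(ys, u):
    d = lambda n, i: fwd_diff(ys, n, i)
    return (d(0, 0) + u * d(1, 0) + u * (u - 1) / 2 * d(2, -1)
            + (u + 1) * u * (u - 1) / 6 * d(3, -1)
            + (u + 1) * u * (u - 1) * (u - 2) / 24 * d(4, -2))

def gauss_backward(ys, u):
    d = lambda n, i: fwd_diff(ys, n, i)
    return (d(0, 0) + u * d(1, -1) + (u + 1) * u / 2 * d(2, -1)
            + (u + 1) * u * (u - 1) / 6 * d(3, -2)
            + (u + 2) * (u + 1) * u * (u - 1) / 24 * d(4, -2))
```

On $y_{i}=i^{3}$ with $u=0.5$ (i.e. $x=0.5$), both formulas return $0.5^{3}=0.125$, and so does their average.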
By taking a horizontal path towards the right starting from the factor between $y_{0}$ and $y_{1}$, we get the Bessel formula:
$$\begin{aligned}y(u) &= \frac{y_{0}+y_{1}}{2} + \frac{C(u,1)+C(u-1,1)}{2}\Delta y_{0} + C(u,2)\frac{\Delta^{2}y_{-1}+\Delta^{2}y_{0}}{2} + \cdots\\ &= \frac{y_{0}+y_{1}}{2} + \left(u-\frac{1}{2}\right)\Delta y_{0} + \frac{u(u-1)}{2}\frac{\Delta^{2}y_{-1}+\Delta^{2}y_{0}}{2} + \frac{\left(u-\frac{1}{2}\right)u(u-1)}{3!}\Delta^{3}y_{-1} + \frac{(u+1)u(u-1)(u-2)}{4!}\frac{\Delta^{4}y_{-1}+\Delta^{4}y_{-2}}{2} + \cdots\end{aligned}$$
^ Epperson, James F. (2013). An Introduction to Numerical Methods and Analysis (2nd ed.). Wiley.
^ Burden, Richard L.; Faires, J. Douglas (2011). Numerical Analysis (9th ed.). p. 129. ISBN 9780538733519.
^ a b Hamming, Richard W. (1986). Numerical Methods for Scientists and Engineers (2nd ed., unabridged republication of the 1973 edition). New York: Dover. ISBN 978-0-486-65241-2.