In probability theory, the stable count distribution is the conjugate prior of a one-sided stable distribution. This distribution was discovered by Stephen Lihn (Chinese: 藺鴻圖) in his 2017 study of daily distributions of the S&P 500 and the VIX.[1] The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it.[2]
Of the three parameters defining the distribution, the stability parameter {\displaystyle \alpha } is the most important. Stable count distributions have {\displaystyle 0<\alpha <1}. The known analytical case of {\displaystyle \alpha =1/2} is related to the VIX distribution (see Section 7 of [1]). All moments of the distribution are finite.
Its standard distribution is defined as
{\displaystyle {\mathfrak {N}}_{\alpha }(\nu )={\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}{\frac {1}{\nu }}L_{\alpha }\left({\frac {1}{\nu }}\right),}
where {\displaystyle \nu >0} and {\displaystyle 0<\alpha <1.}
Its location-scale family is defined as
{\displaystyle {\mathfrak {N}}_{\alpha }(\nu ;\nu _{0},\theta )={\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}{\frac {1}{\nu -\nu _{0}}}L_{\alpha }\left({\frac {\theta }{\nu -\nu _{0}}}\right),}
where {\displaystyle \nu >\nu _{0}}, {\displaystyle \theta >0}, and {\displaystyle 0<\alpha <1.}
In the above expression, {\displaystyle L_{\alpha }(x)} is a one-sided stable distribution,[3] which is defined as follows.
Let {\displaystyle X} be a standard stable random variable whose distribution is characterized by {\displaystyle f(x;\alpha ,\beta ,c,\mu )}; then we have
{\displaystyle L_{\alpha }(x)=f(x;\alpha ,1,\cos \left({\frac {\pi \alpha }{2}}\right)^{1/\alpha },0),}
where {\displaystyle 0<\alpha <1}.
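As a concrete illustration, the standard stable count density can be evaluated numerically from a library implementation of the one-sided stable density. The sketch below is a minimal example, assuming that scipy's levy_stable in its default S1 parametrization matches the f(x; α, β, c, μ) convention above; the function names and the spot-check value are illustrative choices, not part of the source.

```python
# Minimal sketch: evaluate the standard stable count density N_alpha(nu)
# from the one-sided stable density L_alpha, assuming scipy's default S1
# parametrization matches the f(x; alpha, beta, c, mu) convention above.
import numpy as np
from scipy.stats import levy_stable
from scipy.special import gamma

def one_sided_stable_pdf(x, alpha):
    """L_alpha(x) = f(x; alpha, 1, cos(pi*alpha/2)**(1/alpha), 0)."""
    scale = np.cos(np.pi * alpha / 2.0) ** (1.0 / alpha)
    return levy_stable.pdf(x, alpha, 1.0, loc=0.0, scale=scale)

def stable_count_pdf(nu, alpha):
    """N_alpha(nu) = L_alpha(1/nu) / (Gamma(1/alpha + 1) * nu), for nu > 0."""
    return one_sided_stable_pdf(1.0 / nu, alpha) / (gamma(1.0 / alpha + 1.0) * nu)

# Spot-check against the closed-form quartic case (alpha = 1/2):
# N_{1/2}(nu) = sqrt(nu) * exp(-nu/4) / (4*sqrt(pi))
nu = 2.0
print(stable_count_pdf(nu, 0.5),
      np.sqrt(nu) * np.exp(-nu / 4.0) / (4.0 * np.sqrt(np.pi)))
```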
Consider the Lévy sum {\displaystyle Y=\sum _{i=1}^{N}X_{i}} where {\displaystyle X_{i}\sim L_{\alpha }(x)}; then {\displaystyle Y} has the density {\textstyle {\frac {1}{\nu }}L_{\alpha }\left({\frac {x}{\nu }}\right)} where {\textstyle \nu =N^{1/\alpha }}. Setting {\displaystyle x=1}, we arrive at {\displaystyle {\mathfrak {N}}_{\alpha }(\nu )} without the normalization constant.
The reason why this distribution is called "stable count" can be understood by the relation {\displaystyle \nu =N^{1/\alpha }}. Note that {\displaystyle N} is the "count" of the Lévy sum. Given a fixed {\displaystyle \alpha }, this distribution gives the probability of taking {\displaystyle N} steps to travel one unit of distance.
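The stability property behind this "count" interpretation can be illustrated by a hedged Monte Carlo check: a sum of N one-sided stable variables, rescaled by ν = N^{1/α}, should have the same law as a single draw. In the sketch below, α, N, the sample size, and the use of scipy's default S1 parametrization are all assumptions of the example.

```python
# Sketch: Monte Carlo check of the stability relation behind nu = N**(1/alpha).
# alpha, N, n_samples and scipy's S1 parametrization are illustrative assumptions.
import numpy as np
from scipy.stats import levy_stable, kstest

alpha, N, n_samples = 0.7, 5, 2000
scale = np.cos(np.pi * alpha / 2.0) ** (1.0 / alpha)
dist = levy_stable(alpha, 1.0, loc=0.0, scale=scale)

draws = dist.rvs(size=(n_samples, N), random_state=1)
rescaled_sums = draws.sum(axis=1) / N ** (1.0 / alpha)

# Kolmogorov-Smirnov test against the single-draw law; the p-value should
# not be small if the stability relation holds.
print(kstest(rescaled_sums, dist.cdf))
```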
Based on the integral form of {\displaystyle L_{\alpha }(x)} and {\displaystyle q=\exp(-i\alpha \pi /2)}, we have the integral form of {\displaystyle {\mathfrak {N}}_{\alpha }(\nu )} as
{\displaystyle {\begin{aligned}{\mathfrak {N}}_{\alpha }(\nu )&={\frac {2}{\pi \Gamma ({\frac {1}{\alpha }}+1)}}\int _{0}^{\infty }e^{-{\text{Re}}(q)\,t^{\alpha }}{\frac {1}{\nu }}\sin({\frac {t}{\nu }})\sin(-{\text{Im}}(q)\,t^{\alpha })\,dt,{\text{ or }}\\&={\frac {2}{\pi \Gamma ({\frac {1}{\alpha }}+1)}}\int _{0}^{\infty }e^{-{\text{Re}}(q)\,t^{\alpha }}{\frac {1}{\nu }}\cos({\frac {t}{\nu }})\cos({\text{Im}}(q)\,t^{\alpha })\,dt.\\\end{aligned}}}
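For a numerical sanity check, the first (sine-sine) integral above can be evaluated directly by quadrature. The sketch below does this at α = 1/2 and compares with the closed-form quartic density; the finite upper limit and the subdivision limit are numerical assumptions to cope with the oscillatory integrand.

```python
# Sketch: direct quadrature of the sine-sine integral form of N_alpha(nu),
# checked at alpha = 1/2 against the closed-form quartic density.
# The truncated upper limit and the 'limit' argument are numerical assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def stable_count_via_integral(nu, alpha, t_max=3000.0):
    re_q = np.cos(np.pi * alpha / 2.0)       # Re(q), with q = exp(-i*alpha*pi/2)
    neg_im_q = np.sin(np.pi * alpha / 2.0)   # -Im(q)
    integrand = lambda t: (np.exp(-re_q * t ** alpha) / nu
                           * np.sin(t / nu) * np.sin(neg_im_q * t ** alpha))
    val, _ = quad(integrand, 0.0, t_max, limit=1000)
    return 2.0 * val / (np.pi * gamma(1.0 / alpha + 1.0))

nu = 3.0
print(stable_count_via_integral(nu, 0.5),
      np.sqrt(nu) * np.exp(-nu / 4.0) / (4.0 * np.sqrt(np.pi)))
```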
The double-sine integral above leads to the integral form of the standard CDF:
{\displaystyle {\begin{aligned}\Phi _{\alpha }(x)&={\frac {2}{\pi \Gamma ({\frac {1}{\alpha }}+1)}}\int _{0}^{x}\int _{0}^{\infty }e^{-{\text{Re}}(q)\,t^{\alpha }}{\frac {1}{\nu }}\sin({\frac {t}{\nu }})\sin(-{\text{Im}}(q)\,t^{\alpha })\,dt\,d\nu \\&=1-{\frac {2}{\pi \Gamma ({\frac {1}{\alpha }}+1)}}\int _{0}^{\infty }e^{-{\text{Re}}(q)\,t^{\alpha }}\sin(-{\text{Im}}(q)\,t^{\alpha })\,{\text{Si}}({\frac {t}{x}})\,dt,\\\end{aligned}}}
where {\displaystyle {\text{Si}}(x)=\int _{0}^{x}{\frac {\sin(x)}{x}}\,dx} is the sine integral function.
The Wright representation
In "Series representation", it is shown that the stable count distribution is a special case of the Wright function (see Section 4 of [4]):
{\displaystyle {\mathfrak {N}}_{\alpha }(\nu )={\frac {1}{\Gamma \left({\frac {1}{\alpha }}+1\right)}}W_{-\alpha ,0}(-\nu ^{\alpha }),\,{\text{where}}\,\,W_{\lambda ,\mu }(z)=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!\,\Gamma (\lambda n+\mu )}}.}
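The Wright series lends itself to a simple truncated-series evaluation. The sketch below is an assumption-laden implementation (the truncation length is a tuning choice, and cancellation grows with ν), using scipy's reciprocal gamma function to zero out the terms at the poles of Γ.

```python
# Sketch: stable count density via its Wright-function representation,
# N_alpha(nu) = W_{-alpha,0}(-nu**alpha) / Gamma(1/alpha + 1),
# using a truncated series; reliable only for moderate nu, and n_terms
# is a tuning assumption.
import numpy as np
from scipy.special import gamma, rgamma, factorial

def wright_series(lam, mu, z, n_terms=80):
    n = np.arange(n_terms)
    # rgamma = 1/Gamma; it evaluates to 0 at the poles of Gamma.
    return np.sum(z ** n / factorial(n) * rgamma(lam * n + mu))

def stable_count_pdf_wright(nu, alpha, n_terms=80):
    return wright_series(-alpha, 0.0, -(nu ** alpha), n_terms) / gamma(1.0 / alpha + 1.0)

# Spot-check against the quartic case N_{1/2}(nu) = sqrt(nu)*exp(-nu/4)/(4*sqrt(pi))
nu = 2.0
print(stable_count_pdf_wright(nu, 0.5),
      np.sqrt(nu) * np.exp(-nu / 4.0) / (4.0 * np.sqrt(np.pi)))
```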
This leads to the Hankel integral (based on (1.4.3) of [5]):
{\displaystyle {\mathfrak {N}}_{\alpha }(\nu )={\frac {1}{\Gamma \left({\frac {1}{\alpha }}+1\right)}}{\frac {1}{2\pi i}}\int _{Ha}e^{t-(\nu t)^{\alpha }}\,dt,\,}
where Ha represents a Hankel contour .
Alternative derivation – lambda decomposition
Another approach to derive the stable count distribution is to use the Laplace transform of the one-sided stable distribution (Section 2.4 of [1]),
{\displaystyle \int _{0}^{\infty }e^{-zx}L_{\alpha }(x)\,dx=e^{-z^{\alpha }},}
where {\displaystyle 0<\alpha <1}.
Let {\displaystyle x=1/\nu }; then one can decompose the integral on the left-hand side as a product distribution of a standard Laplace distribution and a standard stable count distribution,
{\displaystyle {\frac {1}{2}}{\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}e^{-|z|^{\alpha }}=\int _{0}^{\infty }{\frac {1}{\nu }}\left({\frac {1}{2}}e^{-|z|/\nu }\right)\left({\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}{\frac {1}{\nu }}L_{\alpha }\left({\frac {1}{\nu }}\right)\right)\,d\nu =\int _{0}^{\infty }{\frac {1}{\nu }}\left({\frac {1}{2}}e^{-|z|/\nu }\right){\mathfrak {N}}_{\alpha }(\nu )\,d\nu ,}
where {\displaystyle z\in {\mathsf {R}}}.
This is called the "lambda decomposition" (see Section 4 of [1]) since the LHS was named the "symmetric lambda distribution" in Lihn's former works. However, it has several more popular names, such as the "exponential power distribution" or the "generalized error/normal distribution", often referred to when {\displaystyle \alpha >1}. It is also the Weibull survival function in reliability engineering.
Lambda decomposition is the foundation of Lihn's framework of asset returns under the stable law. The LHS is the distribution of asset returns. On the RHS, the Laplace distribution represents the leptokurtic noise, and the stable count distribution represents the volatility.
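The lambda decomposition can be checked numerically in the analytic case α = 1/2, where the stable count density has a closed form. The sketch below compares both sides of the identity at a few points of z; the test points are arbitrary choices of the example.

```python
# Sketch: numerical check of the lambda decomposition at alpha = 1/2,
# where N_{1/2}(nu) = sqrt(nu)*exp(-nu/4)/(4*sqrt(pi)).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha = 0.5

def N_half(nu):
    return np.sqrt(nu) * np.exp(-nu / 4.0) / (4.0 * np.sqrt(np.pi))

def lhs(z):
    return 0.5 * np.exp(-abs(z) ** alpha) / gamma(1.0 / alpha + 1.0)

def rhs(z):
    # integral over nu of (1/nu) * (1/2) exp(-|z|/nu) * N_alpha(nu)
    integrand = lambda nu: (0.5 * np.exp(-abs(z) / nu) / nu) * N_half(nu)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for z in (0.3, 1.0, 4.0):
    print(z, lhs(z), rhs(z))
```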
Stable Vol distribution
A variant of the stable count distribution is called the stable vol distribution {\displaystyle V_{\alpha }(s)}. The Laplace transform of {\displaystyle e^{-|z|^{\alpha }}} can be re-expressed in terms of a Gaussian mixture of {\displaystyle V_{\alpha }(s)} (see Section 6 of [4]).
It is derived from the lambda decomposition above by a change of variable such that
{\displaystyle {\frac {1}{2}}{\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}e^{-|z|^{\alpha }}={\frac {1}{2}}{\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}e^{-(z^{2})^{\alpha /2}}=\int _{0}^{\infty }{\frac {1}{s}}\left({\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}(z/s)^{2}}\right)V_{\alpha }(s)\,ds,}
where
{\displaystyle {\begin{aligned}V_{\alpha }(s)&=\displaystyle {\frac {{\sqrt {2\pi }}\,\Gamma ({\frac {2}{\alpha }}+1)}{\Gamma ({\frac {1}{\alpha }}+1)}}\,{\mathfrak {N}}_{\frac {\alpha }{2}}(2s^{2}),\,\,0<\alpha \leq 2\\&=\displaystyle {\frac {\sqrt {2\pi }}{\Gamma ({\frac {1}{\alpha }}+1)}}\,W_{-{\frac {\alpha }{2}},0}\left(-{({\sqrt {2}}s)}^{\alpha }\right)\end{aligned}}}
This transformation is named the generalized Gauss transmutation since it generalizes the Gauss-Laplace transmutation, which is equivalent to {\displaystyle V_{1}(s)=2{\sqrt {2\pi }}\,{\mathfrak {N}}_{\frac {1}{2}}(2s^{2})=s\,e^{-s^{2}/2}}.
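In the same spirit, the Gaussian-mixture identity can be spot-checked at α = 1, where V_1(s) = s e^{−s²/2}. The following sketch is a minimal quadrature check; the test points are arbitrary.

```python
# Sketch: numerical check of the Gaussian-mixture identity at alpha = 1,
# where V_1(s) = s * exp(-s**2/2); both sides should equal 0.5 * exp(-|z|).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha = 1.0

def V1(s):
    return s * np.exp(-s ** 2 / 2.0)

def lhs(z):
    return 0.5 * np.exp(-abs(z) ** alpha) / gamma(1.0 / alpha + 1.0)

def rhs(z):
    integrand = lambda s: (np.exp(-0.5 * (z / s) ** 2) / (s * np.sqrt(2.0 * np.pi))) * V1(s)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for z in (0.5, 1.0, 3.0):
    print(z, lhs(z), rhs(z))
```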
Connection to Gamma and Poisson distributions
The shape parameter of the gamma and Poisson distributions is connected to the inverse of Lévy's stability parameter, {\displaystyle 1/\alpha }.
The upper regularized gamma function {\displaystyle Q(s,x)} can be expressed as an incomplete integral of {\displaystyle e^{-{u^{\alpha }}}} as
{\displaystyle Q({\frac {1}{\alpha }},z^{\alpha })={\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}\displaystyle \int _{z}^{\infty }e^{-{u^{\alpha }}}\,du.}
By replacing {\displaystyle e^{-{u^{\alpha }}}} with the decomposition and carrying out one integral, we have:
{\displaystyle Q({\frac {1}{\alpha }},z^{\alpha })=\displaystyle \int _{z}^{\infty }\,du\displaystyle \int _{0}^{\infty }{\frac {1}{\nu }}\left(e^{-u/\nu }\right)\,{\mathfrak {N}}_{\alpha }\left(\nu \right)\,d\nu =\displaystyle \int _{0}^{\infty }\left(e^{-z/\nu }\right)\,{\mathfrak {N}}_{\alpha }\left(\nu \right)\,d\nu .}
Reverting {\displaystyle ({\frac {1}{\alpha }},z^{\alpha })} back to {\displaystyle (s,x)}, we arrive at the decomposition of {\displaystyle Q(s,x)} in terms of a stable count:
{\displaystyle Q(s,x)=\displaystyle \int _{0}^{\infty }e^{\left(-{x^{s}}/{\nu }\right)}\,{\mathfrak {N}}_{{1}/{s}}\left(\nu \right)\,d\nu .\,\,(s>1)}
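This decomposition is easy to test numerically at s = 2 (α = 1/2), where both the regularized upper gamma function and the stable count density have simple forms. The sketch below uses scipy's gammaincc for Q(s, x); the test points are illustrative.

```python
# Sketch: numerical check of Q(s, x) = integral of exp(-x**s / nu) * N_{1/s}(nu) dnu
# at s = 2 (alpha = 1/2), using the closed-form quartic stable count density.
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc

def N_half(nu):
    return np.sqrt(nu) * np.exp(-nu / 4.0) / (4.0 * np.sqrt(np.pi))

s = 2.0
for x in (0.5, 1.0, 2.0):
    rhs, _ = quad(lambda nu: np.exp(-x ** s / nu) * N_half(nu), 0.0, np.inf)
    print(x, gammaincc(s, x), rhs)
```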
Differentiating {\displaystyle Q(s,x)} with respect to {\displaystyle x}, we arrive at the desired formula:
{\displaystyle {\begin{aligned}{\frac {1}{\Gamma (s)}}x^{s-1}e^{-x}&=\displaystyle \int _{0}^{\infty }{\frac {1}{\nu }}\left[s\,x^{s-1}e^{\left(-{x^{s}}/{\nu }\right)}\right]\,{\mathfrak {N}}_{{1}/{s}}\left(\nu \right)\,d\nu \\&=\displaystyle \int _{0}^{\infty }{\frac {1}{t}}\left[s\,{\left({\frac {x}{t}}\right)}^{s-1}e^{-{\left(x/t\right)}^{s}}\right]\,\left[{\mathfrak {N}}_{{1}/{s}}\left(t^{s}\right)\,s\,t^{s-1}\right]\,dt\,\,\,(\nu =t^{s})\\&=\displaystyle \int _{0}^{\infty }{\frac {1}{t}}\,{\text{Weibull}}\left({\frac {x}{t}};s\right)\,\left[{\mathfrak {N}}_{{1}/{s}}\left(t^{s}\right)\,s\,t^{s-1}\right]\,dt\end{aligned}}}
This is in the form of a product distribution. The term {\displaystyle \left[s\,{\left({\frac {x}{t}}\right)}^{s-1}e^{-{\left(x/t\right)}^{s}}\right]} on the RHS is associated with a Weibull distribution of shape {\displaystyle s}. Hence, this formula connects the stable count distribution to the probability density function of a gamma distribution and the probability mass function of a Poisson distribution (with {\displaystyle s\rightarrow s+1}). And the shape parameter {\displaystyle s} can be regarded as the inverse of Lévy's stability parameter, {\displaystyle 1/\alpha }.
Connection to Chi and Chi-squared distributions
The degrees of freedom {\displaystyle k} in the chi and chi-squared distributions can be shown to be related to {\displaystyle 2/\alpha }. Hence, the original idea of viewing {\displaystyle \lambda =2/\alpha } as an integer index in the lambda decomposition is justified here.
For the chi-squared distribution, it is straightforward since the chi-squared distribution is a special case of the gamma distribution, in that {\displaystyle \chi _{k}^{2}\sim {\text{Gamma}}\left({\frac {k}{2}},\theta =2\right)}. And from above, the shape parameter of a gamma distribution is {\displaystyle 1/\alpha }.
For the chi distribution, we begin with its CDF {\displaystyle P\left({\frac {k}{2}},{\frac {x^{2}}{2}}\right)}, where {\displaystyle P(s,x)=1-Q(s,x)}. Differentiating {\displaystyle P\left({\frac {k}{2}},{\frac {x^{2}}{2}}\right)} with respect to {\displaystyle x}, we obtain its density function as
{\displaystyle {\begin{aligned}\chi _{k}(x)={\frac {x^{k-1}e^{-x^{2}/2}}{2^{{\frac {k}{2}}-1}\Gamma \left({\frac {k}{2}}\right)}}&=\displaystyle \int _{0}^{\infty }{\frac {1}{\nu }}\left[2^{-{\frac {k}{2}}}\,k\,x^{k-1}e^{\left(-2^{-{\frac {k}{2}}}\,{x^{k}}/{\nu }\right)}\right]\,{\mathfrak {N}}_{\frac {2}{k}}\left(\nu \right)\,d\nu \\&=\displaystyle \int _{0}^{\infty }{\frac {1}{t}}\left[k\,{\left({\frac {x}{t}}\right)}^{k-1}e^{-{\left(x/t\right)}^{k}}\right]\,\left[{\mathfrak {N}}_{\frac {2}{k}}\left(2^{-{\frac {k}{2}}}t^{k}\right)\,2^{-{\frac {k}{2}}}\,k\,t^{k-1}\right]\,dt,\,\,\,(\nu =2^{-{\frac {k}{2}}}t^{k})\\&=\displaystyle \int _{0}^{\infty }{\frac {1}{t}}\,{\text{Weibull}}\left({\frac {x}{t}};k\right)\,\left[{\mathfrak {N}}_{\frac {2}{k}}\left(2^{-{\frac {k}{2}}}t^{k}\right)\,2^{-{\frac {k}{2}}}\,k\,t^{k-1}\right]\,dt\end{aligned}}}
This formula connects {\displaystyle 2/k} with {\displaystyle \alpha } through the {\displaystyle {\mathfrak {N}}_{\frac {2}{k}}\left(\cdot \right)} term.
Connection to generalized Gamma distributions
The generalized gamma distribution is a probability distribution with two shape parameters, and is the superset of the gamma distribution, the Weibull distribution, the exponential distribution, and the half-normal distribution. Its CDF is in the form of {\displaystyle P(s,x^{c})=1-Q(s,x^{c})}. (Note: We use {\displaystyle s} instead of {\displaystyle a} for consistency and to avoid confusion with {\displaystyle \alpha }.)
Differentiating {\displaystyle P(s,x^{c})} with respect to {\displaystyle x}, we arrive at the product-distribution formula:
{\displaystyle {\begin{aligned}{\text{GenGamma}}(x;s,c)&=\displaystyle \int _{0}^{\infty }{\frac {1}{t}}\,{\text{Weibull}}\left({\frac {x}{t}};sc\right)\,\left[{\mathfrak {N}}_{\frac {1}{s}}\left(t^{sc}\right)\,sc\,t^{sc-1}\right]\,dt\,\,(s\geq 1)\end{aligned}}}
where {\displaystyle {\text{GenGamma}}(x;s,c)} denotes the PDF of a generalized gamma distribution, whose CDF is parametrized as {\displaystyle P(s,x^{c})}. This formula connects {\displaystyle 1/s} with {\displaystyle \alpha } through the {\displaystyle {\mathfrak {N}}_{\frac {1}{s}}\left(\cdot \right)} term. The {\displaystyle sc} term is an exponent representing the second degree of freedom in the shape-parameter space.
This formula is singular for the case of a Weibull distribution, since {\displaystyle s} must be one for {\displaystyle {\text{GenGamma}}(x;1,c)={\text{Weibull}}(x;c)}, but for {\displaystyle {\mathfrak {N}}_{\frac {1}{s}}\left(\nu \right)} to exist, {\displaystyle s} must be greater than one. When {\displaystyle s\rightarrow 1}, {\displaystyle {\mathfrak {N}}_{\frac {1}{s}}\left(\nu \right)} is a delta function and this formula becomes trivial. The Weibull distribution has its own distinct decomposition, described next.
Connection to Weibull distribution
For a Weibull distribution whose CDF is {\displaystyle F(x;k,\lambda )=1-e^{-(x/\lambda )^{k}}\,\,(x>0)}, its shape parameter {\displaystyle k} is equivalent to Lévy's stability parameter {\displaystyle \alpha }. A similar product-distribution expression can be derived, in which the kernel is either a one-sided Laplace distribution {\displaystyle F(x;1,\sigma )} or a Rayleigh distribution {\displaystyle F(x;2,{\sqrt {2}}\sigma )}.
It begins with the complementary CDF, which comes from the lambda decomposition:
{\displaystyle 1-F(x;k,1)={\begin{cases}\displaystyle \int _{0}^{\infty }{\frac {1}{\nu }}\,(1-F(x;1,\nu ))\left[\Gamma \left({\frac {1}{k}}+1\right){\mathfrak {N}}_{k}(\nu )\right]\,d\nu ,&1\geq k>0;{\text{or }}\\\displaystyle \int _{0}^{\infty }{\frac {1}{s}}\,(1-F(x;2,{\sqrt {2}}s))\left[{\sqrt {\frac {2}{\pi }}}\,\Gamma \left({\frac {1}{k}}+1\right)V_{k}(s)\right]\,ds,&2\geq k>0.\end{cases}}}
Taking the derivative with respect to {\displaystyle x}, we obtain the product-distribution form of the Weibull PDF {\displaystyle {\text{Weibull}}(x;k)} as
{\displaystyle {\text{Weibull}}(x;k)={\begin{cases}\displaystyle \int _{0}^{\infty }{\frac {1}{\nu }}\,{\text{Laplace}}({\frac {x}{\nu }})\left[\Gamma \left({\frac {1}{k}}+1\right){\frac {1}{\nu }}{\mathfrak {N}}_{k}(\nu )\right]\,d\nu ,&1\geq k>0;{\text{or }}\\\displaystyle \int _{0}^{\infty }{\frac {1}{s}}\,{\text{Rayleigh}}({\frac {x}{s}})\left[{\sqrt {\frac {2}{\pi }}}\,\Gamma \left({\frac {1}{k}}+1\right){\frac {1}{s}}V_{k}(s)\right]\,ds,&2\geq k>0.\end{cases}}}
where {\displaystyle {\text{Laplace}}(x)=e^{-x}} and {\displaystyle {\text{Rayleigh}}(x)=xe^{-x^{2}/2}}.
It is clear that {\displaystyle k=\alpha } from the {\displaystyle {\mathfrak {N}}_{k}(\nu )} and {\displaystyle V_{k}(s)} terms.
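The Laplace-kernel branch of this decomposition can be verified numerically at k = 1/2 using the closed-form quartic stable count. The sketch below is a minimal check; the test points are arbitrary.

```python
# Sketch: check of the Laplace-kernel decomposition of the Weibull PDF at
# k = 1/2, using the closed-form quartic stable count N_{1/2}.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

k = 0.5

def N_half(nu):
    return np.sqrt(nu) * np.exp(-nu / 4.0) / (4.0 * np.sqrt(np.pi))

def weibull_pdf(x, k):
    return k * x ** (k - 1.0) * np.exp(-x ** k)

def mixture(x, k):
    # (1/nu) * Laplace(x/nu) * [Gamma(1/k + 1) * (1/nu) * N_k(nu)]
    integrand = lambda nu: (np.exp(-x / nu) / nu) * gamma(1.0 / k + 1.0) * N_half(nu) / nu
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for x in (0.5, 1.0, 4.0):
    print(x, weibull_pdf(x, k), mixture(x, k))
```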
Asymptotic properties
For the stable distribution family, it is essential to understand its asymptotic behavior. From [3], for small {\displaystyle \nu },
{\displaystyle {\begin{aligned}{\mathfrak {N}}_{\alpha }(\nu )&\rightarrow B(\alpha )\,\nu ^{\alpha },{\text{ for }}\nu \rightarrow 0{\text{ and }}B(\alpha )>0.\\\end{aligned}}}
This confirms {\displaystyle {\mathfrak {N}}_{\alpha }(0)=0}.
For large {\displaystyle \nu },
{\displaystyle {\begin{aligned}{\mathfrak {N}}_{\alpha }(\nu )&\rightarrow \nu ^{\frac {\alpha }{2(1-\alpha )}}e^{-A(\alpha )\,\nu ^{\frac {\alpha }{1-\alpha }}},{\text{ for }}\nu \rightarrow \infty {\text{ and }}A(\alpha )>0.\\\end{aligned}}}
This shows that the tail of {\displaystyle {\mathfrak {N}}_{\alpha }(\nu )} decays exponentially at infinity. The larger {\displaystyle \alpha } is, the stronger the decay.
This tail is in the form of a generalized gamma distribution, where in its {\displaystyle f(x;a,d,p)} parametrization, {\displaystyle p={\frac {\alpha }{1-\alpha }}}, {\displaystyle a=A(\alpha )^{-1/p}}, and {\displaystyle d=1+{\frac {p}{2}}}. Hence, it is equivalent to {\displaystyle {\text{GenGamma}}({\frac {x}{a}};s={\frac {1}{\alpha }}-{\frac {1}{2}},c=p)}, whose CDF is parametrized as {\displaystyle P\left(s,\left({\frac {x}{a}}\right)^{c}\right)}.
The n-th moment {\displaystyle m_{n}} of {\displaystyle {\mathfrak {N}}_{\alpha }(\nu )} is the {\displaystyle -(n+1)}-th moment of {\displaystyle L_{\alpha }(x)}. All positive moments are finite. This, in a way, solves the thorny issue of diverging moments in the stable distribution. (See Section 2.4 of [1])
{\displaystyle {\begin{aligned}m_{n}&=\int _{0}^{\infty }\nu ^{n}{\mathfrak {N}}_{\alpha }(\nu )d\nu ={\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}\int _{0}^{\infty }{\frac {1}{t^{n+1}}}L_{\alpha }(t)\,dt.\\\end{aligned}}}
The analytic solution of the moments is obtained through the Wright function:
{\displaystyle {\begin{aligned}m_{n}&={\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}\int _{0}^{\infty }\nu ^{n}W_{-\alpha ,0}(-\nu ^{\alpha })\,d\nu \\&={\frac {\Gamma ({\frac {n+1}{\alpha }})}{\Gamma (n+1)\Gamma ({\frac {1}{\alpha }})}},\,n\geq -1.\\\end{aligned}}}
where
{\displaystyle \int _{0}^{\infty }r^{\delta }W_{-\nu ,\mu }(-r)\,dr={\frac {\Gamma (\delta +1)}{\Gamma (\nu \delta +\nu +\mu )}},\,\delta >-1,0<\nu <1,\mu >0.}
(See (1.4.28) of [ 5] )
Thus, the mean of {\displaystyle {\mathfrak {N}}_{\alpha }(\nu )} is
{\displaystyle m_{1}={\frac {\Gamma ({\frac {2}{\alpha }})}{\Gamma ({\frac {1}{\alpha }})}}}
The variance is
{\displaystyle \sigma ^{2}={\frac {\Gamma ({\frac {3}{\alpha }})}{2\Gamma ({\frac {1}{\alpha }})}}-\left[{\frac {\Gamma ({\frac {2}{\alpha }})}{\Gamma ({\frac {1}{\alpha }})}}\right]^{2}}
And the lowest moment is {\displaystyle m_{-1}={\frac {1}{\Gamma ({\frac {1}{\alpha }}+1)}}}, by applying {\displaystyle \Gamma ({\frac {x}{y}})\to y\Gamma (x)} when {\displaystyle x\to 0}.
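The moment formula is straightforward to evaluate in code. The sketch below checks the mean and variance at α = 1/2 against the quartic values (6 and 24 for the standard distribution); the helper name is an illustrative choice.

```python
# Sketch: closed-form moments of the standard stable count distribution,
# m_n = Gamma((n+1)/alpha) / (Gamma(n+1) * Gamma(1/alpha)),
# checked against the quartic case alpha = 1/2 (mean 6, variance 24).
import numpy as np
from scipy.special import gamma

def stable_count_moment(n, alpha):
    return gamma((n + 1) / alpha) / (gamma(n + 1) * gamma(1.0 / alpha))

alpha = 0.5
m1 = stable_count_moment(1, alpha)
m2 = stable_count_moment(2, alpha)
print("mean    :", m1, "(expected 6)")
print("variance:", m2 - m1 ** 2, "(expected 24)")
```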
The n-th moment of the stable vol distribution {\displaystyle V_{\alpha }(s)} is
{\displaystyle {\begin{aligned}m_{n}(V_{\alpha })&=2^{-{\frac {n}{2}}}{\sqrt {\pi }}\,{\frac {\Gamma ({\frac {n+1}{\alpha }})}{\Gamma ({\frac {n+1}{2}})\Gamma ({\frac {1}{\alpha }})}},\,n\geq -1.\end{aligned}}}
Moment generating function
The MGF can be expressed by a Fox-Wright function or Fox H-function:
{\displaystyle {\begin{aligned}M_{\alpha }(s)&=\sum _{n=0}^{\infty }{\frac {m_{n}\,s^{n}}{n!}}={\frac {1}{\Gamma ({\frac {1}{\alpha }})}}\sum _{n=0}^{\infty }{\frac {\Gamma ({\frac {n+1}{\alpha }})\,s^{n}}{\Gamma (n+1)^{2}}}\\&={\frac {1}{\Gamma ({\frac {1}{\alpha }})}}{}_{1}\Psi _{1}\left[({\frac {1}{\alpha }},{\frac {1}{\alpha }});(1,1);s\right],\,\,{\text{or}}\\&={\frac {1}{\Gamma ({\frac {1}{\alpha }})}}H_{1,2}^{1,1}\left[-s{\bigl |}{\begin{matrix}(1-{\frac {1}{\alpha }},{\frac {1}{\alpha }})\\(0,1);(0,1)\end{matrix}}\right]\\\end{aligned}}}
As a verification, at {\displaystyle \alpha ={\frac {1}{2}}}, {\displaystyle M_{\frac {1}{2}}(s)=(1-4s)^{-{\frac {3}{2}}}} (see below) can be Taylor-expanded to {\displaystyle {}_{1}\Psi _{1}\left[(2,2);(1,1);s\right]=\sum _{n=0}^{\infty }{\frac {\Gamma (2n+2)\,s^{n}}{\Gamma (n+1)^{2}}}} via {\displaystyle \Gamma ({\frac {1}{2}}-n)={\sqrt {\pi }}{\frac {(-4)^{n}n!}{(2n)!}}}.
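As a quick numerical illustration of this verification, the truncated series can be compared with the closed form for small s. The truncation length and the requirement s < 1/4 (radius of convergence at α = 1/2) are assumptions of the sketch.

```python
# Sketch: truncated-series MGF of the standard stable count at alpha = 1/2,
# compared with the closed form (1 - 4*s)**(-3/2); valid for s < 1/4,
# and n_terms is a truncation assumption.
import numpy as np
from scipy.special import gamma

def mgf_series(s, alpha, n_terms=60):
    n = np.arange(n_terms)
    return np.sum(gamma((n + 1) / alpha) * s ** n / gamma(n + 1) ** 2) / gamma(1.0 / alpha)

s = 0.05
print(mgf_series(s, 0.5), (1.0 - 4.0 * s) ** (-1.5))
```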
Known analytical case – quartic stable count
When {\displaystyle \alpha ={\frac {1}{2}}}, {\displaystyle L_{1/2}(x)} is the Lévy distribution, which is an inverse gamma distribution. Thus {\displaystyle {\mathfrak {N}}_{1/2}(\nu ;\nu _{0},\theta )} is a shifted gamma distribution of shape 3/2 and scale {\displaystyle 4\theta },
{\displaystyle {\mathfrak {N}}_{\frac {1}{2}}(\nu ;\nu _{0},\theta )={\frac {1}{4{\sqrt {\pi }}\theta ^{3/2}}}(\nu -\nu _{0})^{1/2}e^{-(\nu -\nu _{0})/4\theta },}
where {\displaystyle \nu >\nu _{0}}, {\displaystyle \theta >0}.
Its mean is {\displaystyle \nu _{0}+6\theta } and its standard deviation is {\displaystyle {\sqrt {24}}\theta }. This is called the "quartic stable count distribution". The word "quartic" comes from Lihn's former work on the lambda distribution,[6] where {\displaystyle \lambda =2/\alpha =4}. At this setting, many facets of the stable count distribution have elegant analytical solutions.
The p-th central moments are {\displaystyle {\frac {2\Gamma (p+3/2)}{\Gamma (3/2)}}4^{p}\theta ^{p}}. The CDF is {\textstyle {\frac {2}{\sqrt {\pi }}}\gamma \left({\frac {3}{2}},{\frac {\nu -\nu _{0}}{4\theta }}\right)}, where {\displaystyle \gamma (s,x)} is the lower incomplete gamma function. And the MGF is {\displaystyle M_{\frac {1}{2}}(s)=e^{s\nu _{0}}(1-4s\theta )^{-{\frac {3}{2}}}}. (See Section 3 of [1])
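Since the quartic stable count is a shifted gamma law, it can be handled directly with a standard gamma implementation. The sketch below expresses it via scipy.stats.gamma with shape 3/2, location ν₀ and scale 4θ; the numbers ν₀ = 10.4 and θ = 1.6 are the VIX-fit values quoted later in the article, used here purely as an example.

```python
# Sketch: the quartic stable count as a shifted gamma law, expressed with
# scipy.stats.gamma (shape 3/2, location nu0, scale 4*theta).
import numpy as np
from scipy.stats import gamma as gamma_dist

nu0, theta = 10.4, 1.6               # VIX-fit values quoted later in the article
vix_model = gamma_dist(a=1.5, loc=nu0, scale=4.0 * theta)

print("mean:", vix_model.mean(), "(expected nu0 + 6*theta =", nu0 + 6 * theta, ")")
print("std :", vix_model.std(),  "(expected sqrt(24)*theta =", np.sqrt(24) * theta, ")")
print("P(VIX < 12):", vix_model.cdf(12.0))
```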
Special case when α → 1
As {\displaystyle \alpha } becomes larger, the peak of the distribution becomes sharper. A special case of {\displaystyle {\mathfrak {N}}_{\alpha }(\nu )} is when {\displaystyle \alpha \rightarrow 1}. The distribution behaves like a Dirac delta function,
{\displaystyle {\mathfrak {N}}_{\alpha \to 1}(\nu )\to \delta (\nu -1),}
where {\displaystyle \delta (x)={\begin{cases}\infty ,&{\text{if }}x=0\\0,&{\text{if }}x\neq 0\end{cases}}}, and {\displaystyle \int _{0_{-}}^{0_{+}}\delta (x)dx=1}.
Likewise, the stable vol distribution at {\displaystyle \alpha \to 2} also becomes a delta function,
{\displaystyle V_{\alpha \to 2}(s)\to \delta (s-{\frac {1}{\sqrt {2}}}).}
Series representation
Based on the series representation of the one-sided stable distribution, we have:
{\displaystyle {\begin{aligned}{\mathfrak {N}}_{\alpha }(x)&={\frac {1}{\pi \Gamma ({\frac {1}{\alpha }}+1)}}\sum _{n=1}^{\infty }{\frac {-\sin(n(\alpha +1)\pi )}{n!}}{x}^{\alpha n}\Gamma (\alpha n+1)\\&={\frac {1}{\pi \Gamma ({\frac {1}{\alpha }}+1)}}\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}\sin(n\alpha \pi )}{n!}}{x}^{\alpha n}\Gamma (\alpha n+1)\\\end{aligned}}}
.
This series representation has two interpretations:
First, a similar form of this series was first given in Pollard (1948),[7] and in "Relation to Mittag-Leffler function", it is stated that
{\displaystyle {\mathfrak {N}}_{\alpha }(x)={\frac {\alpha ^{2}x^{\alpha }}{\Gamma \left({\frac {1}{\alpha }}\right)}}H_{\alpha }(x^{\alpha }),}
where {\displaystyle H_{\alpha }(k)} is the inverse Laplace transform of the Mittag-Leffler function {\displaystyle E_{\alpha }(-x)}.
Secondly, this series is a special case of the Wright function {\displaystyle W_{\lambda ,\mu }(z)} (see Section 1.4 of [5]):
{\displaystyle {\begin{aligned}{\mathfrak {N}}_{\alpha }(x)&={\frac {1}{\pi \Gamma ({\frac {1}{\alpha }}+1)}}\sum _{n=1}^{\infty }{\frac {(-1)^{n}{x}^{\alpha n}}{n!}}\,\sin((\alpha n+1)\pi )\Gamma (\alpha n+1)\\&={\frac {1}{\Gamma \left({\frac {1}{\alpha }}+1\right)}}W_{-\alpha ,0}(-x^{\alpha }),\,{\text{where}}\,\,W_{\lambda ,\mu }(z)=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!\,\Gamma (\lambda n+\mu )}},\lambda >-1.\\\end{aligned}}}
The proof is obtained by the reflection formula of the gamma function, {\displaystyle \sin((\alpha n+1)\pi )\Gamma (\alpha n+1)=\pi /\Gamma (-\alpha n)}, which admits the mapping {\displaystyle \lambda =-\alpha ,\mu =0,z=-x^{\alpha }} in {\displaystyle W_{\lambda ,\mu }(z)}. The Wright representation leads to analytical solutions for many statistical properties of the stable count distribution and establishes another connection to fractional calculus.
The stable count distribution can represent the daily distribution of VIX quite well. It is hypothesized that VIX is distributed like {\displaystyle {\mathfrak {N}}_{\frac {1}{2}}(\nu ;\nu _{0},\theta )} with {\displaystyle \nu _{0}=10.4} and {\displaystyle \theta =1.6} (see Section 7 of [1]). Thus the stable count distribution is the first-order marginal distribution of a volatility process. In this context, {\displaystyle \nu _{0}} is called the "floor volatility". In practice, VIX rarely drops below 10. This phenomenon justifies the concept of a "floor volatility". A sample of the fit is shown below:
[Figure: VIX daily distribution and fit to the stable count distribution]
One form of mean-reverting SDE for {\displaystyle {\mathfrak {N}}_{\frac {1}{2}}(\nu ;\nu _{0},\theta )} is based on a modified Cox–Ingersoll–Ross (CIR) model. Assume {\displaystyle S_{t}} is the volatility process; then we have
{\displaystyle dS_{t}={\frac {\sigma ^{2}}{8\theta }}(6\theta +\nu _{0}-S_{t})\,dt+\sigma {\sqrt {S_{t}-\nu _{0}}}\,dW,}
where {\displaystyle \sigma } is the so-called "vol of vol". The "vol of vol" for VIX is called VVIX, which has a typical value of about 85.[8]
This SDE is analytically tractable and satisfies the Feller condition, thus {\displaystyle S_{t}} would never go below {\displaystyle \nu _{0}}. But there is a subtle issue between theory and practice: there has been about a 0.6% probability that VIX did go below {\displaystyle \nu _{0}}. This is called "spillover". To address it, one can replace the square root term with {\displaystyle {\sqrt {\max(S_{t}-\nu _{0},\delta \nu _{0})}}}, where {\displaystyle \delta \nu _{0}\approx 0.01\,\nu _{0}} provides a small leakage channel for {\displaystyle S_{t}} to drift slightly below {\displaystyle \nu _{0}}.
An extremely low VIX reading indicates a very complacent market. Thus the spillover condition, {\displaystyle S_{t}<\nu _{0}}, carries a certain significance: when it occurs, it usually indicates the calm before the storm in the business cycle.
Generation of Random Variables
As the modified CIR model above shows, it takes another input parameter {\displaystyle \sigma } to simulate sequences of stable count random variables. The mean-reverting stochastic process takes the form of
{\displaystyle dS_{t}=\sigma ^{2}\mu _{\alpha }\left({\frac {S_{t}}{\theta }}\right)\,dt+\sigma {\sqrt {S_{t}}}\,dW,}
which should produce {\displaystyle \{S_{t}\}} that is distributed like {\displaystyle {\mathfrak {N}}_{\alpha }(\nu ;\theta )} as {\displaystyle t\rightarrow \infty }. And {\displaystyle \sigma } is a user-specified preference for how fast {\displaystyle S_{t}} should change.
By solving the Fokker-Planck equation, the solution for {\displaystyle \mu _{\alpha }(x)} in terms of {\displaystyle {\mathfrak {N}}_{\alpha }(x)} is
{\displaystyle {\begin{array}{lcl}\mu _{\alpha }(x)&=&\displaystyle {\frac {1}{2}}{\frac {\left(x{d \over dx}+1\right){\mathfrak {N}}_{\alpha }(x)}{{\mathfrak {N}}_{\alpha }(x)}}\\&=&\displaystyle {\frac {1}{2}}\left[x{d \over dx}\left(\log {\mathfrak {N}}_{\alpha }(x)\right)+1\right]\end{array}}}
It can also be written as a ratio of two Wright functions,
{\displaystyle {\begin{array}{lcl}\mu _{\alpha }(x)&=&\displaystyle -{\frac {1}{2}}{\frac {W_{-\alpha ,-1}(-x^{\alpha })}{\Gamma ({\frac {1}{\alpha }}+1)\,{\mathfrak {N}}_{\alpha }(x)}}\\&=&\displaystyle -{\frac {1}{2}}{\frac {W_{-\alpha ,-1}(-x^{\alpha })}{W_{-\alpha ,0}(-x^{\alpha })}}\end{array}}}
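The drift μ_α(x) can be evaluated from this Wright-function ratio with truncated series. The sketch below is an assumption-laden check at α = 1/2 against the linear drift (6 − x)/8 noted just below; the truncation length is a tuning choice.

```python
# Sketch: mu_alpha(x) as a ratio of truncated Wright series, checked at
# alpha = 1/2 against the linear drift (6 - x)/8 of the modified CIR model.
import numpy as np
from scipy.special import rgamma, factorial

def wright_series(lam, mu, z, n_terms=80):
    n = np.arange(n_terms)
    # rgamma = 1/Gamma, which vanishes at the poles of Gamma.
    return np.sum(z ** n / factorial(n) * rgamma(lam * n + mu))

def drift_mu(x, alpha, n_terms=80):
    z = -(x ** alpha)
    return -0.5 * wright_series(-alpha, -1.0, z, n_terms) / wright_series(-alpha, 0.0, z, n_terms)

for x in (1.0, 3.0, 6.0):
    print(x, drift_mu(x, 0.5), (6.0 - x) / 8.0)
```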
When {\displaystyle \alpha =1/2}, this process is reduced to the modified CIR model, where {\displaystyle \mu _{1/2}(x)={\frac {1}{8}}(6-x)}. This is the only special case where {\displaystyle \mu _{\alpha }(x)} is a straight line.
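A minimal simulation sketch of this generation scheme at α = 1/2 is shown below, using an Euler-Maruyama discretization of the SDE with the linear drift. Here σ, θ, the time step, and the path length are user-chosen assumptions, and the long-run sample mean and variance are compared with the quartic values 6θ and 24θ².

```python
# Sketch: Euler-Maruyama simulation of dS = sigma^2 * mu_alpha(S/theta) dt
# + sigma*sqrt(S) dW at alpha = 1/2, where mu_{1/2}(x) = (6 - x)/8.
# sigma, theta, dt and the path length are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.0, 1.0
dt, n_steps, burn_in = 0.01, 400_000, 50_000

S = np.empty(n_steps)
S[0] = 6.0 * theta                    # start near the stationary mean
for i in range(1, n_steps):
    drift = sigma ** 2 * (6.0 - S[i - 1] / theta) / 8.0
    diffusion = sigma * np.sqrt(max(S[i - 1], 0.0)) * np.sqrt(dt) * rng.standard_normal()
    S[i] = max(S[i - 1] + drift * dt + diffusion, 1e-12)

samples = S[burn_in:]
print("sample mean:", samples.mean(), " theory:", 6.0 * theta)
print("sample var :", samples.var(),  " theory:", 24.0 * theta ** 2)
```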
Likewise, if the asymptotic distribution is {\displaystyle V_{\alpha }(s)} as {\displaystyle t\rightarrow \infty }, the {\displaystyle \mu _{\alpha }(x)} solution, denoted as {\displaystyle \mu (x;V_{\alpha })} below, is
{\displaystyle {\begin{array}{lcl}\mu (x;V_{\alpha })&=&\displaystyle -{\frac {W_{-{\frac {\alpha }{2}},-1}(-{({\sqrt {2}}x)}^{\alpha })}{W_{-{\frac {\alpha }{2}},0}(-{({\sqrt {2}}x)}^{\alpha })}}-{\frac {1}{2}}\end{array}}}
When {\displaystyle \alpha =1}, it is reduced to a quadratic polynomial: {\displaystyle \mu (x;V_{1})=1-{\frac {x^{2}}{2}}}.
Stable Extension of the CIR Model
By relaxing the rigid relation between the {\displaystyle \sigma ^{2}} term and the {\displaystyle \sigma } term above, the stable extension of the CIR model can be constructed as
{\displaystyle dr_{t}=a\,\left[{\frac {8b}{6}}\,\mu _{\alpha }\left({\frac {6}{b}}r_{t}\right)\right]\,dt+\sigma {\sqrt {r_{t}}}\,dW,}
which is reduced to the original CIR model at {\displaystyle \alpha =1/2}: {\displaystyle dr_{t}=a\left(b-r_{t}\right)dt+\sigma {\sqrt {r_{t}}}\,dW}.
Hence, the parameter {\displaystyle a} controls the mean-reverting speed, the location parameter {\displaystyle b} sets where the mean is, {\displaystyle \sigma } is the volatility parameter, and {\displaystyle \alpha } is the shape parameter for the stable law.
By solving the Fokker-Planck equation, the solution for the PDF {\displaystyle p(x)} at {\displaystyle r_{\infty }} is
{\displaystyle {\begin{array}{lcl}p(x)&\propto &\displaystyle \exp \left[\int ^{x}{\frac {dx}{x}}\left(2D\,\mu _{\alpha }\left({\frac {6}{b}}x\right)-1\right)\right],{\text{ where }}D={\frac {4ab}{3\sigma ^{2}}}\\&=&\displaystyle {\mathfrak {N}}_{\alpha }\left({\frac {6}{b}}x\right)^{D}\,x^{D-1}\end{array}}}
To make sense of this solution, consider the behavior asymptotically for large {\displaystyle x}: the tail of {\displaystyle p(x)} is still in the form of a generalized gamma distribution, where in its {\displaystyle f(x;a',d,p)} parametrization, {\displaystyle p={\frac {\alpha }{1-\alpha }}}, {\displaystyle a'={\frac {b}{6}}(D\,A(\alpha ))^{-1/p}}, and {\displaystyle d=D\left(1+{\frac {p}{2}}\right)}.
It is reduced to the original CIR model at {\displaystyle \alpha =1/2}, where {\displaystyle p(x)\propto x^{d-1}e^{-x/a'}} with {\displaystyle d={\frac {2ab}{\sigma ^{2}}}} and {\displaystyle A(\alpha )={\frac {1}{4}}}; hence {\displaystyle {\frac {1}{a'}}={\frac {6}{b}}\left({\frac {D}{4}}\right)={\frac {2a}{\sigma ^{2}}}}.
Fractional calculus
Relation to Mittag-Leffler function
From Section 4 of [9], the inverse Laplace transform {\displaystyle H_{\alpha }(k)} of the Mittag-Leffler function {\displaystyle E_{\alpha }(-x)} is ({\displaystyle k>0})
{\displaystyle H_{\alpha }(k)={\mathcal {L}}^{-1}\{E_{\alpha }(-x)\}(k)={\frac {2}{\pi }}\int _{0}^{\infty }E_{2\alpha }(-t^{2})\cos(kt)\,dt.}
On the other hand, the following relation was given by Pollard (1948):[7]
{\displaystyle H_{\alpha }(k)={\frac {1}{\alpha }}{\frac {1}{k^{1+1/\alpha }}}L_{\alpha }\left({\frac {1}{k^{1/\alpha }}}\right).}
Thus, by setting {\displaystyle k=\nu ^{\alpha }}, we obtain the relation between the stable count distribution and the Mittag-Leffler function:
{\displaystyle {\mathfrak {N}}_{\alpha }(\nu )={\frac {\alpha ^{2}\nu ^{\alpha }}{\Gamma \left({\frac {1}{\alpha }}\right)}}H_{\alpha }(\nu ^{\alpha }).}
This relation can be verified quickly at {\displaystyle \alpha ={\frac {1}{2}}}, where {\displaystyle H_{\frac {1}{2}}(k)={\frac {1}{\sqrt {\pi }}}\,e^{-k^{2}/4}} and {\displaystyle k^{2}=\nu }. This leads to the well-known quartic stable count result:
{\displaystyle {\mathfrak {N}}_{\frac {1}{2}}(\nu )={\frac {\nu ^{1/2}}{4\,\Gamma (2)}}\times {\frac {1}{\sqrt {\pi }}}\,e^{-\nu /4}={\frac {1}{4\,{\sqrt {\pi }}}}\nu ^{1/2}\,e^{-\nu /4}.}
Relation to time-fractional Fokker-Planck equation
The ordinary Fokker-Planck equation (FPE) is {\displaystyle {\frac {\partial P_{1}(x,t)}{\partial t}}=K_{1}\,{\tilde {L}}_{FP}P_{1}(x,t)}, where {\displaystyle {\tilde {L}}_{FP}={\frac {\partial }{\partial x}}{\frac {F(x)}{T}}+{\frac {\partial ^{2}}{\partial x^{2}}}} is the Fokker-Planck space operator, {\displaystyle K_{1}} is the diffusion coefficient, {\displaystyle T} is the temperature, and {\displaystyle F(x)} is the external field. The time-fractional FPE introduces the additional fractional derivative {\displaystyle \,_{0}D_{t}^{1-\alpha }} such that {\displaystyle {\frac {\partial P_{\alpha }(x,t)}{\partial t}}=K_{\alpha }\,_{0}D_{t}^{1-\alpha }{\tilde {L}}_{FP}P_{\alpha }(x,t)}, where {\displaystyle K_{\alpha }} is the fractional diffusion coefficient.
Letting {\displaystyle k=s/t^{\alpha }} in {\displaystyle H_{\alpha }(k)}, we obtain the kernel for the time-fractional FPE (Eq. (16) of [10]):
{\displaystyle n(s,t)={\frac {1}{\alpha }}{\frac {t}{s^{1+1/\alpha }}}L_{\alpha }\left({\frac {t}{s^{1/\alpha }}}\right)}
from which the fractional density {\displaystyle P_{\alpha }(x,t)} can be calculated from an ordinary solution {\displaystyle P_{1}(x,t)} via
{\displaystyle P_{\alpha }(x,t)=\int _{0}^{\infty }n\left({\frac {s}{K}},t\right)\,P_{1}(x,s)\,ds,{\text{ where }}K={\frac {K_{\alpha }}{K_{1}}}.}
Since {\displaystyle n({\frac {s}{K}},t)\,ds=\Gamma \left({\frac {1}{\alpha }}+1\right){\frac {1}{\nu }}\,{\mathfrak {N}}_{\alpha }(\nu ;\theta =K^{1/\alpha })\,d\nu } via the change of variable {\displaystyle \nu t=s^{1/\alpha }}, the above integral becomes a product distribution with {\displaystyle {\mathfrak {N}}_{\alpha }(\nu )}, similar to the "lambda decomposition" concept, with a scaling of time {\displaystyle t\Rightarrow (\nu t)^{\alpha }}:
{\displaystyle P_{\alpha }(x,t)=\Gamma \left({\frac {1}{\alpha }}+1\right)\int _{0}^{\infty }{\frac {1}{\nu }}\,{\mathfrak {N}}_{\alpha }(\nu ;\theta =K^{1/\alpha })\,P_{1}(x,(\nu t)^{\alpha })\,d\nu .}
Here {\displaystyle {\mathfrak {N}}_{\alpha }(\nu ;\theta =K^{1/\alpha })} is interpreted as the distribution of impurity, expressed in the unit of {\displaystyle K^{1/\alpha }}, that causes the anomalous diffusion.
1. Lihn, Stephen (2017). "A Theory of Asset Return and Volatility Under Stable Law and Stable Lambda Distribution". SSRN 3046732.
2. Lévy, Paul (1925). Calcul des probabilités.
3. Penson, K. A.; Górska, K. (2010). "Exact and Explicit Probability Densities for One-Sided Lévy Stable Distributions". Physical Review Letters. 105 (21): 210604. arXiv:1007.0193. Bibcode:2010PhRvL.105u0604P. doi:10.1103/PhysRevLett.105.210604. PMID 21231282. S2CID 27497684.
4. Lihn, Stephen (2020). "Stable Count Distribution for the Volatility Indices and Space-Time Generalized Stable Characteristic Function". SSRN 3659383.
5. Mathai, A.M.; Haubold, H.J. (2017). Fractional and Multivariable Calculus. Springer Optimization and Its Applications. Vol. 122. Cham: Springer International Publishing. doi:10.1007/978-3-319-59993-9. ISBN 9783319599922.
6. Lihn, Stephen H. T. (2017). "From Volatility Smile to Risk Neutral Probability and Closed Form Solution of Local Volatility Function". SSRN 2906522.
7. Pollard, Harry (1948). "The completely monotonic character of the Mittag-Leffler function E_a(−x)". Bulletin of the American Mathematical Society. 54 (12): 1115–1117. doi:10.1090/S0002-9904-1948-09132-7. ISSN 0002-9904.
8. "DOUBLE THE FUN WITH CBOE's VVIX Index" (PDF). www.cboe.com. Retrieved 2019-08-09.
9. Saxena, R. K.; Mathai, A. M.; Haubold, H. J. (2009). "Mittag-Leffler Functions and Their Applications". arXiv:0909.0230 [math.CA].
10. Barkai, E. (2001). "Fractional Fokker-Planck equation, solution, and application". Physical Review E. 63 (4): 046118. Bibcode:2001PhRvE..63d6118B. doi:10.1103/PhysRevE.63.046118. ISSN 1063-651X. PMID 11308923. S2CID 18112355.
R Package 'stabledist' by Diethelm Wuertz, Martin Maechler and Rmetrics core team members. Computes stable density, probability, quantiles, and random numbers. Updated Sept. 12, 2016.