Statistical distribution of complex random variables
In probability theory, the family of complex normal distributions, denoted CN or N_C, characterizes complex random variables whose real and imaginary parts are jointly normal.[1] The complex normal family has three parameters: the location parameter μ, the covariance matrix Γ, and the relation matrix C. The standard complex normal is the univariate distribution with μ = 0, Γ = 1, and C = 0.
An important subclass of the complex normal family is the circularly-symmetric (central) complex normal, which corresponds to the case of zero relation matrix and zero mean: μ = 0 and C = 0.[2] This case is used extensively in signal processing, where it is sometimes referred to simply as the complex normal.
Complex standard normal random variable
The standard complex normal random variable or standard complex Gaussian random variable is a complex random variable Z whose real and imaginary parts are independent normally distributed random variables with mean zero and variance 1/2.[3]: p. 494 [4]: p. 501 Formally,
Z ∼ CN(0, 1)  ⟺  ℜ(Z) and ℑ(Z) independent, with ℜ(Z) ∼ N(0, 1/2) and ℑ(Z) ∼ N(0, 1/2)    (Eq.1)
where Z ∼ CN(0, 1) denotes that Z is a standard complex normal random variable.
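Eq.1 translates directly into code. The following NumPy sketch (sample size and variable names are illustrative) draws standard complex normal samples and checks that the real and imaginary parts each have variance 1/2, so the total variance E|Z|² is 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Real and imaginary parts: independent N(0, 1/2), so z ~ CN(0, 1)
z = rng.normal(0.0, np.sqrt(0.5), n) + 1j * rng.normal(0.0, np.sqrt(0.5), n)

var_re = np.var(z.real)               # ≈ 1/2
var_im = np.var(z.imag)               # ≈ 1/2
total_var = np.mean(np.abs(z) ** 2)   # E|Z|^2 ≈ 1
```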
Complex normal random variable
Suppose X and Y are real random variables such that (X, Y)^T is a 2-dimensional normal random vector. Then the complex random variable Z = X + iY is called a complex normal random variable or complex Gaussian random variable.[3]: p. 500
Z complex normal random variable  ⟺  (ℜ(Z), ℑ(Z))^T real normal random vector    (Eq.2)
Complex standard normal random vector
An n-dimensional complex random vector Z = (Z₁, …, Z_n)^T is a complex standard normal random vector or complex standard Gaussian random vector if its components are independent and all of them are standard complex normal random variables as defined above.[3]: p. 502 [4]: p. 501
That Z is a standard complex normal random vector is denoted Z ∼ CN(0, I_n).
Z ∼ CN(0, I_n)  ⟺  Z₁, …, Z_n independent and Z_i ∼ CN(0, 1) for 1 ≤ i ≤ n    (Eq.3)
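By Eq.3 a standard complex normal vector can be sampled componentwise. A minimal NumPy sketch (dimensions are arbitrary) verifies the defining moments E[Z Z^H] ≈ I_n and E[Z Z^T] ≈ 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n, draws = 4, 100_000

# Each component independent CN(0, 1); one sample vector per row
Z = (rng.standard_normal((draws, n)) + 1j * rng.standard_normal((draws, n))) / np.sqrt(2)

gamma_hat = Z.T @ Z.conj() / draws   # sample E[Z Z^H], should be near I_n
c_hat = Z.T @ Z / draws              # sample E[Z Z^T], should be near 0
```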
Complex normal random vector
If X = (X₁, …, X_n)^T and Y = (Y₁, …, Y_n)^T are random vectors in ℝⁿ such that [X, Y] is a normal random vector with 2n components, then the complex random vector Z = X + iY is called a complex normal random vector or a complex Gaussian random vector.
Z complex normal random vector  ⟺  (ℜ(Z)^T, ℑ(Z)^T)^T = (ℜ(Z₁), …, ℜ(Z_n), ℑ(Z₁), …, ℑ(Z_n))^T real normal random vector    (Eq.4)
Mean, covariance, and relation
The complex Gaussian distribution can be described with three parameters:[5]

\mu = \operatorname{E}[\mathbf{Z}], \quad \Gamma = \operatorname{E}[(\mathbf{Z}-\mu)(\mathbf{Z}-\mu)^{\mathrm{H}}], \quad C = \operatorname{E}[(\mathbf{Z}-\mu)(\mathbf{Z}-\mu)^{\mathrm{T}}],
where Z^T denotes the matrix transpose of Z and Z^H denotes the conjugate transpose.[3]: p. 504 [4]: p. 500
Here the location parameter μ is an n-dimensional complex vector; the covariance matrix Γ is Hermitian and non-negative definite; and the relation matrix or pseudo-covariance matrix C is symmetric. The complex normal random vector Z can now be denoted as

Z ∼ CN(μ, Γ, C).
Moreover, the matrices Γ and C are such that the matrix

P = \overline{\Gamma} - C^{\mathrm{H}} \Gamma^{-1} C

is also non-negative definite, where Γ̄ denotes the complex conjugate of Γ.[5]
Relationships between covariance matrices
As for any complex random vector, the matrices Γ and C can be related to the covariance matrices of X = ℜ(Z) and Y = ℑ(Z) via the expressions
\begin{aligned}
&V_{XX} \equiv \operatorname{E}[(\mathbf{X}-\mu_X)(\mathbf{X}-\mu_X)^{\mathrm{T}}] = \tfrac{1}{2}\operatorname{Re}[\Gamma + C], \quad
V_{XY} \equiv \operatorname{E}[(\mathbf{X}-\mu_X)(\mathbf{Y}-\mu_Y)^{\mathrm{T}}] = \tfrac{1}{2}\operatorname{Im}[-\Gamma + C], \\
&V_{YX} \equiv \operatorname{E}[(\mathbf{Y}-\mu_Y)(\mathbf{X}-\mu_X)^{\mathrm{T}}] = \tfrac{1}{2}\operatorname{Im}[\Gamma + C], \quad
V_{YY} \equiv \operatorname{E}[(\mathbf{Y}-\mu_Y)(\mathbf{Y}-\mu_Y)^{\mathrm{T}}] = \tfrac{1}{2}\operatorname{Re}[\Gamma - C],
\end{aligned}
and conversely

\begin{aligned}
&\Gamma = V_{XX} + V_{YY} + i(V_{YX} - V_{XY}), \\
&C = V_{XX} - V_{YY} + i(V_{YX} + V_{XY}).
\end{aligned}
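These conversion formulas can be checked numerically. The sketch below builds an arbitrary valid covariance of the stacked real vector [X; Y], forms Γ and C with the second set of formulas, and recovers the original blocks with the first:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Any symmetric positive semidefinite 2n x 2n covariance of [X; Y]
M = rng.standard_normal((2 * n, 2 * n))
S = M @ M.T
Vxx, Vxy = S[:n, :n], S[:n, n:]
Vyx, Vyy = S[n:, :n], S[n:, n:]

# Gamma and C from the real covariance blocks
Gamma = Vxx + Vyy + 1j * (Vyx - Vxy)
C = Vxx - Vyy + 1j * (Vyx + Vxy)

# Forward formulas recover the blocks exactly
Vxx_back = 0.5 * np.real(Gamma + C)
Vxy_back = 0.5 * np.imag(-Gamma + C)
Vyx_back = 0.5 * np.imag(Gamma + C)
Vyy_back = 0.5 * np.real(Gamma - C)
```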
The probability density function for the complex normal distribution can be computed as
\begin{aligned}
f(z) &= \frac{1}{\pi^n \sqrt{\det(\Gamma)\det(P)}}\,
\exp\left\{ -\frac{1}{2}
\begin{pmatrix} (\overline{z}-\overline{\mu})^{\intercal}, & (z-\mu)^{\intercal} \end{pmatrix}
\begin{pmatrix} \Gamma & C \\ \overline{C} & \overline{\Gamma} \end{pmatrix}^{-1}
\begin{pmatrix} z-\mu \\ \overline{z}-\overline{\mu} \end{pmatrix} \right\} \\
&= \frac{\sqrt{\det\left(\overline{P^{-1}} - R^{*} P^{-1} R\right)\det(P^{-1})}}{\pi^n}\,
e^{-(z-\mu)^{*}\overline{P^{-1}}(z-\mu) + \operatorname{Re}\left((z-\mu)^{\intercal} R^{\intercal} \overline{P^{-1}}(z-\mu)\right)},
\end{aligned}
where R = C^H Γ^{-1} and P = Γ̄ − RC.
Characteristic function
The characteristic function of the complex normal distribution is given by[5]

\varphi(w) = \exp\left\{ i\operatorname{Re}(\overline{w}'\mu) - \tfrac{1}{4}\left( \overline{w}'\Gamma w + \operatorname{Re}(\overline{w}' C \overline{w}) \right) \right\},
where the argument w is an n-dimensional complex vector.
If Z is a complex normal n-vector, A an m×n matrix, and b a constant m-vector, then the linear transform AZ + b is also complex normal:
Z \sim \mathcal{CN}(\mu, \Gamma, C) \quad \Rightarrow \quad AZ + b \sim \mathcal{CN}(A\mu + b,\, A\Gamma A^{\mathrm{H}},\, ACA^{\mathrm{T}})
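The transformation rule can be checked by simulation. Assuming a standard CN(0, I_n, 0) input, so that the target covariance is AΓA^H = AA^H and the relation matrix stays zero, with arbitrary illustrative A and b:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, draws = 3, 2, 200_000

A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# Z ~ CN(0, I_n, 0): one sample per row; then W = A Z + b
Z = (rng.standard_normal((draws, n)) + 1j * rng.standard_normal((draws, n))) / np.sqrt(2)
W = Z @ A.T + b

mu_hat = W.mean(axis=0)                 # should be near A*0 + b = b
Wc = W - mu_hat
gamma_hat = Wc.T @ Wc.conj() / draws    # should be near A Gamma A^H = A A^H
c_hat = Wc.T @ Wc / draws               # should be near A C A^T = 0
```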
If Z is a complex normal n-vector, then

2\left[ (\mathbf{Z}-\mu)^{\mathrm{H}} \overline{P^{-1}} (\mathbf{Z}-\mu) - \operatorname{Re}\left( (\mathbf{Z}-\mu)^{\mathrm{T}} R^{\mathrm{T}} \overline{P^{-1}} (\mathbf{Z}-\mu) \right) \right] \sim \chi^2(2n)
Central limit theorem. If Z₁, …, Z_T are independent and identically distributed complex random variables, then

\sqrt{T}\left( \tfrac{1}{T}\textstyle\sum_{t=1}^{T} Z_t - \operatorname{E}[Z_t] \right) \xrightarrow{d} \mathcal{CN}(0, \Gamma, C),
where Γ = E[ZZ^H] and C = E[ZZ^T].
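As an illustration of the complex CLT (the choice of input distribution here is purely for demonstration), take Z_t uniform on the unit circle, for which E[Z] = 0, Γ = E[|Z|²] = 1 and C = E[Z²] = 0; the scaled sample mean should then look approximately standard circular complex normal:

```python
import numpy as np

rng = np.random.default_rng(4)
trials, T = 20_000, 100

# Z_t = e^{i theta}, theta uniform: E[Z] = 0, Gamma = 1, C = 0
theta = rng.uniform(-np.pi, np.pi, (trials, T))
Z = np.exp(1j * theta)

# sqrt(T) * (sample mean - 0): approximately CN(0, 1, 0)
S = np.sqrt(T) * Z.mean(axis=1)

var_re = np.var(S.real)    # ≈ 1/2
var_im = np.var(S.imag)    # ≈ 1/2
pseudo = np.mean(S ** 2)   # ≈ C = 0
```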
Circularly-symmetric central case
A complex random vector Z is called circularly symmetric if for every deterministic φ ∈ [−π, π) the distribution of e^{iφ}Z equals the distribution of Z.[4]: pp. 500–501
Central complex normal random vectors that are circularly symmetric are of particular interest because they are fully specified by the covariance matrix Γ.
The circularly-symmetric (central) complex normal distribution corresponds to the case of zero mean and zero relation matrix, i.e. μ = 0 and C = 0.[3]: p. 507 [7] This is usually denoted

Z ∼ CN(0, Γ).
Distribution of real and imaginary parts
If Z = X + iY is circularly-symmetric (central) complex normal, then the vector [X, Y] is multivariate normal with covariance structure
\begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} \sim \mathcal{N}\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \tfrac{1}{2}\begin{bmatrix} \operatorname{Re}\,\Gamma & -\operatorname{Im}\,\Gamma \\ \operatorname{Im}\,\Gamma & \operatorname{Re}\,\Gamma \end{bmatrix} \right)
where Γ = E[ZZ^H].
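This real representation gives a direct way to sample CN(0, Γ): draw the stacked real vector [X; Y] from the 2n-dimensional normal with the block covariance above and form Z = X + iY. A sketch with an arbitrary Hermitian positive definite Γ (values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, draws = 2, 200_000

# An arbitrary Hermitian positive definite Gamma
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Gamma = A @ A.conj().T + n * np.eye(n)

# Covariance of [X; Y] from the block structure above
S = 0.5 * np.block([[Gamma.real, -Gamma.imag],
                    [Gamma.imag,  Gamma.real]])
XY = rng.multivariate_normal(np.zeros(2 * n), S, size=draws)
Z = XY[:, :n] + 1j * XY[:, n:]

gamma_hat = Z.T @ Z.conj() / draws   # should be near Gamma
c_hat = Z.T @ Z / draws              # should be near 0 (circular symmetry)
```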
Probability density function
For a nonsingular covariance matrix Γ, the distribution can also be simplified as[3]: p. 508

f_{\mathbf{Z}}(\mathbf{z}) = \frac{1}{\pi^n \det(\Gamma)}\, e^{-(\mathbf{z}-\mu)^{\mathrm{H}} \Gamma^{-1} (\mathbf{z}-\mu)}.
Therefore, if the non-zero mean μ and covariance matrix Γ are unknown, a suitable log-likelihood function for a single observation vector z would be

\ln(L(\mu, \Gamma)) = -\ln(\det(\Gamma)) - \overline{(z-\mu)}'\,\Gamma^{-1}(z-\mu) - n\ln(\pi).
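For the scalar circular case (n = 1, Γ = γ) the log-likelihood above is simply the logarithm of the density; a quick check with arbitrary illustrative values:

```python
import numpy as np

# Scalar circularly-symmetric case with illustrative values
mu, gamma = 1.0 + 2.0j, 3.0
z = 0.5 - 1.0j

# Density f(z) = exp(-|z - mu|^2 / gamma) / (pi * gamma)
pdf = np.exp(-abs(z - mu) ** 2 / gamma) / (np.pi * gamma)

# Log-likelihood from the formula above (n = 1)
loglik = -np.log(gamma) - abs(z - mu) ** 2 / gamma - np.log(np.pi)
```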
The standard complex normal (defined in Eq.1) corresponds to the distribution of a scalar random variable with μ = 0, C = 0 and Γ = 1. Thus, the standard complex normal distribution has density

f_Z(z) = \tfrac{1}{\pi} e^{-\overline{z}z} = \tfrac{1}{\pi} e^{-|z|^2}.
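As a sanity check, this density integrates to one over the complex plane; numerically, writing z = x + iy and summing over a truncated grid:

```python
import numpy as np

# f(z) = e^{-|z|^2} / pi as a function of x = Re z, y = Im z
x = np.linspace(-5.0, 5.0, 801)
y = np.linspace(-5.0, 5.0, 801)
X, Y = np.meshgrid(x, y)
f = np.exp(-(X ** 2 + Y ** 2)) / np.pi

# Riemann sum; the tail beyond |z| = 5 is negligible (~e^{-25})
dx, dy = x[1] - x[0], y[1] - y[0]
total = f.sum() * dx * dy    # ≈ 1
```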
The above expression demonstrates why the case C = 0, μ = 0 is called "circularly-symmetric": the density function depends only on the magnitude of z, not on its argument. As such, the magnitude |z| of a standard complex normal random variable has the Rayleigh distribution and the squared magnitude |z|² has the exponential distribution, whereas the argument is distributed uniformly on [−π, π].
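These three marginal facts are easy to confirm by simulation (sample size arbitrary): |z| is Rayleigh with mean √π/2, |z|² is exponential with mean 1, and arg z is uniform with variance π²/3:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000

# Standard complex normal samples
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

mag = np.abs(z)        # Rayleigh(sigma = 1/sqrt(2)): mean = sqrt(pi)/2
mag2 = mag ** 2        # Exponential(1): mean = 1
arg = np.angle(z)      # uniform on (-pi, pi]: variance = pi^2 / 3
```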
If {Z₁, …, Z_k} are independent and identically distributed n-dimensional circular complex normal random vectors with μ = 0, then the random squared norm

Q = \sum_{j=1}^{k} \mathbf{Z}_j^{\mathrm{H}} \mathbf{Z}_j = \sum_{j=1}^{k} \|\mathbf{Z}_j\|^2
has the generalized chi-squared distribution and the random matrix

W = \sum_{j=1}^{k} \mathbf{Z}_j \mathbf{Z}_j^{\mathrm{H}}

has the complex Wishart distribution with k degrees of freedom. This distribution can be described by the density function
f(w) = \frac{\det(\Gamma^{-1})^{k}\det(w)^{k-n}}{\pi^{n(n-1)/2}\prod_{j=1}^{k}(k-j)!}\; e^{-\operatorname{tr}(\Gamma^{-1} w)}
where k ≥ n, and w is an n × n nonnegative-definite matrix.
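A small simulation illustrates the Wishart construction for Γ = I_n (sizes are illustrative): each realized W is Hermitian and nonnegative definite, and the mean over many realizations approaches E[W] = kΓ:

```python
import numpy as np

rng = np.random.default_rng(7)
trials, k, n = 5_000, 10, 2

# trials independent sets of k iid CN(0, I_n) vectors
Z = (rng.standard_normal((trials, k, n))
     + 1j * rng.standard_normal((trials, k, n))) / np.sqrt(2)

# W_t = sum_j z_j z_j^H for each trial t
W = np.einsum('tja,tjb->tab', Z, Z.conj())

W0 = W[0]
eigs = np.linalg.eigvalsh(W0)   # real, nonnegative up to round-off
W_mean = W.mean(axis=0)         # should be near k * I_n
```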