These divergences were introduced by Alfréd Rényi[1] in the same paper where he introduced the well-known Rényi entropy. He proved that these divergences decrease in Markov processes. f-divergences were studied further independently by Csiszár (1963), Morimoto (1963) and Ali & Silvey (1966) and are sometimes known as Csiszár f-divergences, Csiszár–Morimoto divergences, or Ali–Silvey distances.
Let $P$ and $Q$ be two probability distributions over a space $\Omega$, such that $P \ll Q$, that is, $P$ is absolutely continuous with respect to $Q$. Then, for a convex function $f : [0, +\infty) \to (-\infty, +\infty]$ such that $f(x)$ is finite for all $x > 0$, $f(1) = 0$, and $f(0) = \lim_{t \to 0^+} f(t)$ (which could be infinite), the f-divergence of $P$ from $Q$ is defined as

$$D_f(P \parallel Q) = \int_\Omega f\!\left(\frac{dP}{dQ}\right) \, dQ.$$
We call $f$ the generator of $D_f$.
In concrete applications, there is usually a reference distribution $\mu$ on $\Omega$ (for example, when $\Omega = \mathbb{R}^n$, the reference distribution is the Lebesgue measure), such that $P, Q \ll \mu$; then we can use the Radon–Nikodym theorem to take their probability densities $p$ and $q$, giving

$$D_f(P \parallel Q) = \int_\Omega f\!\left(\frac{p(x)}{q(x)}\right) q(x) \, d\mu(x).$$
When there is no such reference distribution ready at hand, we can simply define $\mu = P + Q$, and proceed as above. This is a useful technique in more abstract proofs.
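As a concrete illustration of the density formula above, here is a minimal numerical sketch for a finite sample space (the function name f_divergence and the example distributions are illustrative choices, not taken from the article or from any library):

import numpy as np

# D_f(P || Q) = sum_i q_i * f(p_i / q_i): the density formula above, specialized
# to a finite sample space with the counting measure as reference measure mu.
def f_divergence(p, q, f):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(q * f(p / q)))

# Generators of some standard f-divergences; each is convex with f(1) = 0.
f_kl   = lambda t: t * np.log(t)        # Kullback–Leibler divergence
f_tv   = lambda t: 0.5 * np.abs(t - 1)  # total variation distance
f_chi2 = lambda t: (t - 1) ** 2         # Pearson chi-squared divergence

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(f_divergence(p, q, f_kl), f_divergence(p, q, f_tv), f_divergence(p, q, f_chi2))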
Monotonicity (data processing inequality): if $\kappa$ is an arbitrary transition probability that transforms the measures $P$ and $Q$ into $P_\kappa$ and $Q_\kappa$, then $D_f(P \parallel Q) \ge D_f(P_\kappa \parallel Q_\kappa)$. The equality here holds if and only if the transition is induced from a sufficient statistic with respect to $\{P, Q\}$.
Joint convexity: for any $0 \le \lambda \le 1$,

$$D_f\big(\lambda P_1 + (1-\lambda) P_2 \parallel \lambda Q_1 + (1-\lambda) Q_2\big) \le \lambda\, D_f(P_1 \parallel Q_1) + (1-\lambda)\, D_f(P_2 \parallel Q_2).$$
This follows from the convexity of the mapping $(p, q) \mapsto q f(p/q)$ on $\mathbb{R}_{>0}^2$.
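Spelled out with densities $p_1, p_2, q_1, q_2$ with respect to a common reference measure $\mu$, the argument is the following pointwise estimate (a standard sketch, not quoted from the article):

$$D_f\big(\lambda P_1 + (1-\lambda) P_2 \parallel \lambda Q_1 + (1-\lambda) Q_2\big) = \int_\Omega \big(\lambda q_1 + (1-\lambda) q_2\big)\, f\!\left(\frac{\lambda p_1 + (1-\lambda) p_2}{\lambda q_1 + (1-\lambda) q_2}\right) d\mu \le \int_\Omega \left[ \lambda\, q_1 f\!\left(\frac{p_1}{q_1}\right) + (1-\lambda)\, q_2 f\!\left(\frac{p_2}{q_2}\right) \right] d\mu,$$

where the inequality applies the joint convexity of $(p, q) \mapsto q f(p/q)$ under the integral; the right-hand side is $\lambda D_f(P_1 \parallel Q_1) + (1-\lambda) D_f(P_2 \parallel Q_2)$.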
Reversal by convex inversion: for any function $f$, its convex inversion is defined as $g(t) = t f(1/t)$. When $f$ satisfies the defining features of an f-divergence generator ($f(x)$ is finite for all $x > 0$, $f(1) = 0$, and $f(0) = \lim_{t \to 0^+} f(t)$), then $g$ satisfies the same features, and thus defines an f-divergence $D_g$. This is the "reverse" of $D_f$, in the sense that $D_g(P \parallel Q) = D_f(Q \parallel P)$ for all $P, Q$ that are absolutely continuous with respect to each other.
In this way, every f-divergence $D_f$ can be turned symmetric by replacing its generator $f$ with $f + g$, since $D_{f+g}(P \parallel Q) = D_f(P \parallel Q) + D_f(Q \parallel P)$. For example, performing this symmetrization turns the KL-divergence into the Jeffreys divergence.
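A worked example of this reversal and symmetrization for the KL generator (a standard computation, included here only as an illustration):

$$f(x) = x \ln x \quad\Longrightarrow\quad g(x) = x f(1/x) = -\ln x,$$

which is the generator of the reverse KL-divergence, and

$$(f + g)(x) = x \ln x - \ln x = (x - 1)\ln x, \qquad D_{f+g}(P \parallel Q) = \int (p - q) \ln\frac{p}{q}\, d\mu = D_{\mathrm{KL}}(P \parallel Q) + D_{\mathrm{KL}}(Q \parallel P),$$

which is exactly the Jeffreys divergence.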
In particular, the monotonicity implies that if a Markov process has a positive equilibrium probability distribution $P^*$, then $D_f(P(t) \parallel P^*)$ is a monotonic (non-increasing) function of time, where the probability distribution $P(t)$ is a solution of the Kolmogorov forward equations (or Master equation), used to describe the time evolution of the probability distribution in the Markov process. This means that all f-divergences $D_f(P(t) \parallel P^*)$ are Lyapunov functions of the Kolmogorov forward equations. The converse statement is also true: if $H(P)$ is a Lyapunov function for all Markov chains with positive equilibrium $P^*$ and is of the trace-form ($H(P) = \sum_i f(P_i, P_i^*)$), then $H(P) = D_f(P(t) \parallel P^*)$ for some convex function f.[3][4] For example, Bregman divergences in general do not have such a property and can increase in Markov processes.[5]
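The monotonicity statement is easy to check numerically. The following sketch (illustrative code, not from the article; the 3-state chain and the choice of the KL generator are arbitrary) iterates a discrete-time chain with a strictly positive stationary distribution and verifies that the f-divergence to equilibrium never increases:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical row-stochastic transition matrix K with strictly positive entries,
# hence a strictly positive stationary distribution pi (pi @ K = pi).
K = rng.random((3, 3))
K /= K.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of K for eigenvalue 1, normalized.
w, V = np.linalg.eig(K.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

def f_div(p, q, f):
    """D_f(p || q) = sum_i q_i * f(p_i / q_i) for strictly positive q."""
    return float(np.sum(q * f(p / q)))

f_kl = lambda t: t * np.log(t)      # KL generator, used here as one example of f
p = np.array([0.8, 0.15, 0.05])     # arbitrary initial distribution

values = []
for _ in range(20):
    values.append(f_div(p, pi, f_kl))
    p = p @ K                       # one discrete-time forward step of the chain

# Monotonicity: D_f(P(t) || pi) never increases along the chain.
assert all(a >= b - 1e-12 for a, b in zip(values, values[1:]))
print(values[:5])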
Let $f^*$ be the convex conjugate of $f$. Let $\operatorname{effdom}(f^*)$ be the effective domain of $f^*$, that is, $\operatorname{effdom}(f^*) = \{y : f^*(y) < \infty\}$. Then we have two variational representations of $D_f$, which we describe below.
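For reference, the basic variational representation invoked in the examples below is the standard Fenchel-conjugate bound (stated here informally, omitting regularity conditions):

$$D_f(P \parallel Q) = \sup_{g : \Omega \to \operatorname{effdom}(f^*)} \; \operatorname{E}_P[g(X)] - \operatorname{E}_Q\!\big[f^*(g(X))\big].$$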
Using this theorem on the total variation distance, with generator $f(x) = \tfrac{1}{2}|x - 1|$, whose convex conjugate satisfies $f^*(y) = y$ for $y \in [-\tfrac{1}{2}, \tfrac{1}{2}]$, we obtain

$$\mathrm{TV}(P; Q) = \sup_{|g| \le 1/2} \operatorname{E}_P[g(X)] - \operatorname{E}_Q[g(X)].$$
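On a finite sample space this supremum is attained at $g = \tfrac{1}{2}\operatorname{sign}(p - q)$, which gives a quick numerical check (an illustrative sketch, not from the article):

import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

g_opt = 0.5 * np.sign(p - q)                      # optimal witness function
variational = np.sum(p * g_opt) - np.sum(q * g_opt)
direct = 0.5 * np.sum(np.abs(p - q))              # TV(P, Q) computed directly
assert np.isclose(variational, direct)
print(variational, direct)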
For the $\chi^2$-divergence, defined by $f(x) = (x - 1)^2$, whose convex conjugate is $f^*(y) = y + y^2/4$, we obtain

$$\chi^2(P; Q) = \sup_g \operatorname{E}_P[g(X)] - \operatorname{E}_Q\!\left[g(X) + \tfrac{1}{4} g(X)^2\right].$$
Since the variational term $\operatorname{E}_P[g] - \operatorname{E}_Q\!\left[g + \tfrac{1}{4} g^2\right]$ is not affine-invariant in $g$, even though the domain over which $g$ varies is affine-invariant, we can use up the affine-invariance to obtain a leaner expression:

$$\chi^2(P; Q) = \sup_g \frac{\big(\operatorname{E}_P[g(X)] - \operatorname{E}_Q[g(X)]\big)^2}{\operatorname{Var}_Q[g(X)]}.$$
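The step of "using up the affine-invariance" can be sketched as follows (a standard calculation, not quoted from the article): replacing $g$ by $a g + b$ in the previous bound gives, for any fixed $g$,

$$\begin{aligned} \operatorname{E}_P[a g + b] - \operatorname{E}_Q\!\left[(a g + b) + \tfrac{1}{4}(a g + b)^2\right] &= a\big(\operatorname{E}_P[g] - \operatorname{E}_Q[g]\big) - \tfrac{1}{4}\operatorname{E}_Q\!\big[(a g + b)^2\big] \\ &\le a\big(\operatorname{E}_P[g] - \operatorname{E}_Q[g]\big) - \tfrac{a^2}{4}\operatorname{Var}_Q[g] && \text{(optimal } b = -a\,\operatorname{E}_Q[g]\text{)} \\ &\le \frac{\big(\operatorname{E}_P[g] - \operatorname{E}_Q[g]\big)^2}{\operatorname{Var}_Q[g]} && \text{(optimal } a = 2\big(\operatorname{E}_P[g] - \operatorname{E}_Q[g]\big)/\operatorname{Var}_Q[g]\text{)}, \end{aligned}$$

and taking the supremum over $g$, $a$, $b$ (the bounds being attained at the optimal $a$, $b$) yields the leaner expression.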
For the $\alpha$-divergence with $\alpha \in (-\infty, 0) \cup (0, 1)$, we have the generator $f_\alpha(x) = \frac{x^\alpha - \alpha x - (1-\alpha)}{\alpha(\alpha - 1)}$, with range $x \in [0, \infty)$. Its convex conjugate is $f_\alpha^*(y) = \frac{1}{\alpha}\big(x(y)^\alpha - 1\big)$ with range $y \in \big(-\infty, \tfrac{1}{1-\alpha}\big)$, where $x(y) = \big((\alpha - 1)y + 1\big)^{1/(\alpha - 1)}$.
Applying this theorem yields, after substitution with $h = \big((\alpha - 1)g + 1\big)^{1/(\alpha - 1)}$,

$$D_\alpha(P \parallel Q) = \frac{1}{\alpha(1-\alpha)} - \inf_{h : \Omega \to (0, \infty)} \left[ \frac{\operatorname{E}_P\!\big[h(X)^{\alpha-1}\big]}{1-\alpha} + \frac{\operatorname{E}_Q\!\big[h(X)^{\alpha}\big]}{\alpha} \right],$$
or, releasing the constraint on $h$,

$$D_\alpha(P \parallel Q) = \frac{1}{\alpha(1-\alpha)} - \inf_{h : \Omega \to \mathbb{R}} \left[ \frac{\operatorname{E}_P\!\big[|h(X)|^{\alpha-1}\big]}{1-\alpha} + \frac{\operatorname{E}_Q\!\big[|h(X)|^{\alpha}\big]}{\alpha} \right].$$
Setting $\alpha = -1$ yields the variational representation of the $\chi^2$-divergence obtained above.
The domain over which $h$ varies is not affine-invariant in general, unlike the $\chi^2$-divergence case. The $\chi^2$-divergence is special, since in that case we can remove the $|\cdot|$ from $|h|$.
For general $\alpha$, the domain over which $h$ varies is merely scale-invariant. Similar to above, we can replace $h$ by $a h$, and take the minimum over $a > 0$ to obtain

$$D_\alpha(P \parallel Q) = \frac{1}{\alpha(1-\alpha)} - \inf_{h : \Omega \to (0, \infty)} \frac{\operatorname{E}_P\!\big[h(X)^{\alpha-1}\big]^{\alpha}\, \operatorname{E}_Q\!\big[h(X)^{\alpha}\big]^{1-\alpha}}{\alpha(1-\alpha)}.$$
Setting $\alpha = 1/2$, and performing another substitution by $g = \sqrt{h}$, yields two variational representations of the squared Hellinger distance:

$$H^2(P; Q) = 2 - \inf_{g : \Omega \to (0, \infty)} \left( \operatorname{E}_P\!\left[\frac{1}{g(X)}\right] + \operatorname{E}_Q\big[g(X)\big] \right),$$

$$H^2(P; Q) = 2 - 2 \inf_{g : \Omega \to (0, \infty)} \sqrt{ \operatorname{E}_P\!\left[\frac{1}{g(X)}\right] \operatorname{E}_Q\big[g(X)\big] }.$$
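As a consistency check of these representations (under the normalization $H^2(P; Q) = \int (\sqrt{p} - \sqrt{q})^2 \, d\mu$ used here), the choice $g = \sqrt{p/q}$ attains both infima:

$$\operatorname{E}_P\!\left[\frac{1}{g}\right] = \int p \sqrt{\frac{q}{p}}\, d\mu = \int \sqrt{pq}\, d\mu, \qquad \operatorname{E}_Q[g] = \int q \sqrt{\frac{p}{q}}\, d\mu = \int \sqrt{pq}\, d\mu,$$

so both expressions reduce to $2 - 2\int \sqrt{pq}\, d\mu = \int (\sqrt{p} - \sqrt{q})^2\, d\mu$.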
Applying this theorem to the KL-divergence, defined by $f(x) = x \ln x$ (with convex conjugate $f^*(y) = e^{y-1}$), yields

$$D_{\mathrm{KL}}(P; Q) = \sup_g \operatorname{E}_P[g(X)] - \operatorname{E}_Q\!\left[e^{g(X) - 1}\right].$$
This is strictly less efficient than the Donsker–Varadhan representation

$$D_{\mathrm{KL}}(P; Q) = \sup_g \operatorname{E}_P[g(X)] - \ln \operatorname{E}_Q\!\left[e^{g(X)}\right].$$
This defect is fixed by the next theorem.
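A small numerical illustration of the gap between the two objectives (an illustrative sketch, not from the article): for any fixed test function $g$ one has $\ln \operatorname{E}_Q[e^{g}] \le \operatorname{E}_Q[e^{g-1}]$, so the Donsker–Varadhan objective is never smaller than the plain conjugate objective, while both remain lower bounds on the KL-divergence.

import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

kl = np.sum(p * np.log(p / q))      # exact KL divergence

g = np.log(p / q)                   # a natural witness (in fact optimal for Donsker–Varadhan)
plain = np.sum(p * g) - np.sum(q * np.exp(g - 1))
dv    = np.sum(p * g) - np.log(np.sum(q * np.exp(g)))

# Both are lower bounds on the KL divergence, and the DV bound dominates.
assert plain <= dv <= kl + 1e-12
print(plain, dv, kl)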
The following table lists many of the common divergences between probability distributions and the possible generating functions to which they correspond. Notably, except for total variation distance, all others are special cases of the $\alpha$-divergence, or linear sums of $\alpha$-divergences.
For each f-divergence $D_f$, its generating function is not uniquely defined, but only up to an additive term $c\,(x - 1)$, where $c$ is any real constant. That is, for any $f$ that generates an f-divergence, we have $D_f = D_{f + c\,(x-1)}$. This freedom is not only convenient, but actually necessary.
Let $f_\alpha$ be the generator of the $\alpha$-divergence; then $f_\alpha$ and $f_{1-\alpha}$ are convex inversions of each other, so $D_\alpha(P \parallel Q) = D_{1-\alpha}(Q \parallel P)$. In particular, this shows that the squared Hellinger distance ($\alpha = 1/2$) and the Jensen–Shannon divergence are symmetric.
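Taking the generator $f_\alpha(x) = \frac{x^\alpha - \alpha x - (1-\alpha)}{\alpha(\alpha - 1)}$ as above, this duality is a direct computation (included here only as an illustration):

$$x\, f_\alpha\!\left(\frac{1}{x}\right) = \frac{x^{1-\alpha} - \alpha - (1-\alpha)x}{\alpha(\alpha - 1)} = \frac{x^{1-\alpha} - (1-\alpha)x - \alpha}{(1-\alpha)\big((1-\alpha) - 1\big)} = f_{1-\alpha}(x),$$

so the convex inversion of $f_\alpha$ is $f_{1-\alpha}$, and hence $D_\alpha(P \parallel Q) = D_{1-\alpha}(Q \parallel P)$.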
In the literature, the $\alpha$-divergences are sometimes parametrized as

$$f_\alpha(x) = \frac{4}{1-\alpha^2}\left(1 - x^{\frac{1+\alpha}{2}}\right), \qquad \alpha \neq \pm 1,$$

which is equivalent to the parametrization in this page by substituting $\alpha \leftarrow \frac{1+\alpha}{2}$.
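A short check of this equivalence (an illustrative computation, not quoted from the article): writing $a = \frac{1+\alpha}{2}$, so that $\frac{4}{1-\alpha^2} = \frac{1}{a(1-a)}$, the two generators differ only by an affine term:

$$\frac{4}{1-\alpha^2}\left(1 - x^{\frac{1+\alpha}{2}}\right) = \frac{1 - x^{a}}{a(1-a)}, \qquad \frac{x^{a} - a x - (1-a)}{a(a-1)} - \frac{1 - x^{a}}{a(1-a)} = \frac{x - 1}{1 - a},$$

and a term of the form $c\,(x-1)$ does not change the divergence, by the non-uniqueness remark above.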
A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines the official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. For a large class of rational players the expected profit rate has the same general form as the ƒ-divergence.[8]
Rényi, Alfréd (1961). "On measures of entropy and information" (PDF). The 4th Berkeley Symposium on Mathematics, Statistics and Probability, 1960. Berkeley, CA: University of California Press. pp. 547–561. Eq. (4.20).
Jiao, Jiantao; Courtade, Thomas; No, Albert; Venkat, Kartik; Weissman, Tsachy (December 2014). "Information Measures: the Curious Case of the Binary Alphabet". IEEE Transactions on Information Theory. 60 (12): 7616–7626. arXiv:1404.6810. doi:10.1109/TIT.2014.2360184. ISSN 0018-9448. S2CID 13108908.
Sriperumbudur, Bharath K.; Fukumizu, Kenji; Gretton, Arthur; Schölkopf, Bernhard; Lanckriet, Gert R. G. (2009). "On integral probability metrics, φ-divergences and binary classification". arXiv:0901.2698 [cs.IT].
Csiszár, I. (1963). "Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten". Magyar Tud. Akad. Mat. Kutató Int. Közl. 8: 85–108.
Ali, S. M.; Silvey, S. D. (1966). "A general class of coefficients of divergence of one distribution from another". Journal of the Royal Statistical Society, Series B. 28 (1): 131–142. JSTOR 2984279. MR 0196777.
Csiszár, I. (1967). "Information-type measures of difference of probability distributions and indirect observation". Studia Scientiarum Mathematicarum Hungarica. 2: 229–318.