Natarajan dimension
In the theory of probably approximately correct (PAC) machine learning, the Natarajan dimension characterizes the complexity of learning a set of functions, generalizing the Vapnik–Chervonenkis dimension from Boolean-valued functions to multi-class functions. Originally introduced as the generalized dimension by Natarajan,[1] it was subsequently renamed the Natarajan dimension by Haussler and Long.[2]
Definition
Let $H$ be a set of functions from a set $X$ to a set $Y$. $H$ shatters a set $C \subseteq X$ if there exist two functions $f_0, f_1 : C \to Y$ such that
- For every $x \in C$, $f_0(x) \neq f_1(x)$.
- For every $B \subseteq C$, there exists a function $h \in H$ such that $h(x) = f_0(x)$ for all $x \in B$ and $h(x) = f_1(x)$ for all $x \in C \setminus B$.

The Natarajan dimension of $H$ is the maximal cardinality of a set shattered by $H$.
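For a finite hypothesis class over a finite domain, the definition can be checked directly by enumeration. The following Python sketch is an illustration only (it is not taken from the cited sources, and the function names and the toy class at the end are hypothetical): it tests by brute force whether a set of points is Natarajan-shattered by a class of functions given as dictionaries, and returns the size of the largest shattered subset.

```python
from itertools import combinations, product

def natarajan_shatters(hypotheses, points):
    """Return True if the finite class `hypotheses` Natarajan-shatters `points`.

    `hypotheses` is an iterable of dicts mapping each domain point to a label;
    `points` is a sequence of domain points (the candidate shattered set C).
    """
    points = list(points)
    n = len(points)
    hyps = [tuple(h[x] for x in points) for h in hypotheses]   # restrictions of H to C
    labels = sorted({y for h in hyps for y in h})               # labels occurring on C

    # Enumerate candidate witness pairs (f0, f1) that disagree on every point of C.
    for f0 in product(labels, repeat=n):
        for f1 in product(labels, repeat=n):
            if any(a == b for a, b in zip(f0, f1)):
                continue
            # Every subset B of C must be realized: some h agrees with f0 on B
            # and with f1 on C \ B.
            if all(
                any(all(h[i] == (f0[i] if i in B else f1[i]) for i in range(n))
                    for h in hyps)
                for r in range(n + 1)
                for B in map(set, combinations(range(n), r))
            ):
                return True
    return False

def natarajan_dimension(hypotheses, domain):
    """Size of the largest subset of `domain` that is Natarajan-shattered."""
    hypotheses = list(hypotheses)
    for r in range(len(domain), 0, -1):
        if any(natarajan_shatters(hypotheses, C) for C in combinations(domain, r)):
            return r
    return 0

# Toy example (hypothetical): the class of all functions from {0, 1} to {0, 1, 2}.
# This class Natarajan-shatters {0, 1}, so the printed dimension is 2.
domain = [0, 1]
all_functions = [{0: a, 1: b} for a in range(3) for b in range(3)]
print(natarajan_dimension(all_functions, domain))  # -> 2
```

Since the search runs over all witness pairs and all subsets of the candidate set, this is exponential in the number of points; it is meant only to make the definition concrete, not to be an efficient algorithm.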
It is easy to see that if $|Y| = 2$, the Natarajan dimension coincides with the Vapnik–Chervonenkis dimension: with only two labels, the requirement $f_0(x) \neq f_1(x)$ forces $f_1$ to be the complement of $f_0$, so as $B$ ranges over all subsets of $C$ the class must realize every possible binary labeling of $C$, which is exactly VC shattering.
Shalev-Shwartz and Ben-David[3] present comprehensive material on multi-class learning and the Natarajan dimension, including uniform convergence and learnability results.
References
1. Natarajan, Balas Kausik (1989). "On learning sets and functions". Machine Learning. 4: 67–97. doi:10.1007/BF00114804.
2. Haussler, David; Long, Philip (1995). "A generalization of Sauer's lemma". Journal of Combinatorial Theory, Series A. 71: 219–240.
3. Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.