The elastic net was proposed by Zou and Hastie [1] as a new regularisation and variable selection method. Real-world data and a simulation study show that the elastic net often outperforms the LASSO, while enjoying a similar sparsity of representation.
Interpretation
The elastic net penalty contains a LASSO (least absolute shrinkage and selection operator) part, which is defined as

\[ \lVert \beta \rVert_{1} = \sum_{j=1}^{p} \lvert \beta_{j} \rvert . \]
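For comparison with the elastic net estimates given below, the LASSO estimates minimise the residual sum of squares plus this penalty (standard notation, with \( \lambda \) denoting the LASSO tuning parameter):

\[ \hat{\beta}_{\mathrm{LASSO}} = \underset{\beta}{\operatorname{argmin}} \left( \lVert y - X\beta \rVert^{2} + \lambda \lVert \beta \rVert_{1} \right) . \]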
Use of this penalty function has several limitations.[1] For example, in the "large p, small n" case (high-dimensional data with few examples), the LASSO selects at most n variables before it saturates. Also, if there is a group of highly correlated variables, the LASSO tends to select one variable from the group and ignore the others. To overcome these limitations, the elastic net adds a quadratic part, \( \lVert \beta \rVert^{2} \), to the penalty, which when used alone gives ridge regression. The estimates from the elastic net method are defined by

\[ \hat{\beta} = \underset{\beta}{\operatorname{argmin}} \left( \lVert y - X\beta \rVert^{2} + \lambda_{2} \lVert \beta \rVert^{2} + \lambda_{1} \lVert \beta \rVert_{1} \right) . \]
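The combined penalty can also be written as a convex combination of the ridge and LASSO penalties, which makes the role of the two tuning parameters explicit (the mixing parameter \( \alpha \) is introduced here for exposition and does not appear in the text above):

\[ \lambda_{2} \lVert \beta \rVert^{2} + \lambda_{1} \lVert \beta \rVert_{1} = (\lambda_{1} + \lambda_{2}) \left[ \alpha \lVert \beta \rVert^{2} + (1 - \alpha) \lVert \beta \rVert_{1} \right], \qquad \alpha = \frac{\lambda_{2}}{\lambda_{1} + \lambda_{2}} . \]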
The quadratic penalty term makes the loss function strictly convex, and it therefore has a unique minimum. The elastic net method includes the LASSO and ridge regression as special cases: setting \( \lambda_{2} = 0 \) recovers the LASSO, and setting \( \lambda_{1} = 0 \) recovers ridge regression.
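As an illustration of the behaviour described above, the following minimal sketch fits the LASSO and the elastic net with scikit-learn on simulated data containing a block of highly correlated predictors. The simulated data, hyperparameter values and variable names are illustrative assumptions rather than part of the original text, and scikit-learn uses its own (alpha, l1_ratio) parameterisation of the penalty rather than the \( (\lambda_{1}, \lambda_{2}) \) notation used here.

```python
# A minimal sketch (illustrative, not from the original text): comparing the
# LASSO and the elastic net on data with a group of highly correlated predictors.
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)

n, p = 50, 100                      # "large p, small n" setting
z = rng.normal(size=n)              # latent signal shared by a correlated group
X = rng.normal(size=(n, p))
X[:, :5] = z[:, None] + 0.01 * rng.normal(size=(n, 5))  # 5 highly correlated columns
y = z + 0.1 * rng.normal(size=n)    # response driven by the shared signal

# Hyperparameters are illustrative; scikit-learn's objective is
#   (1/(2n)) * ||y - Xb||^2 + alpha * l1_ratio * ||b||_1
#       + 0.5 * alpha * (1 - l1_ratio) * ||b||^2,
# an equivalent reparameterisation of the (lambda_1, lambda_2) penalty above.
lasso = Lasso(alpha=0.05).fit(X, y)
enet = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y)

# The LASSO tends to keep only one column from the correlated group,
# whereas the elastic net tends to spread weight across the whole group.
print("LASSO nonzeros in group:      ", np.count_nonzero(lasso.coef_[:5]))
print("Elastic net nonzeros in group:", np.count_nonzero(enet.coef_[:5]))
```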
References

1. Zou, Hui; Hastie, Trevor (2005). "Regularization and Variable Selection via the Elastic Net". Journal of the Royal Statistical Society, Series B, 67 (2): 301–320.