Given a training set $\{x_i, y_i\}_{i=1}^N$ with input data $x_i \in \mathbb{R}^n$ and corresponding binary class labels $y_i \in \{-1, +1\}$, the SVM[2] classifier, according to Vapnik's original formulation, satisfies the following conditions:

$$\begin{cases} w^T \phi(x_i) + b \ge +1, & \text{if } y_i = +1, \\ w^T \phi(x_i) + b \le -1, & \text{if } y_i = -1, \end{cases}$$

which is equivalent to

$$y_i \left[ w^T \phi(x_i) + b \right] \ge 1, \qquad i = 1, \ldots, N,$$

where $\phi(x)$ is the nonlinear map from the original space to the high- or infinite-dimensional feature space.
By substituting $w$ by its expression $w = \sum_{i=1}^N \alpha_i y_i \phi(x_i)$ in the Lagrangian formed from the margin objective $\tfrac{1}{2} w^T w$ and the constraints above, we obtain the following quadratic programming (QP) problem:

$$\max_{\alpha}\; Q(\alpha) = -\frac{1}{2} \sum_{i,j=1}^N \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j) + \sum_{i=1}^N \alpha_i,$$

where $K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$ is called the kernel function. Solving this QP problem subject to the constraints $\alpha_i \ge 0$ and $\sum_{i=1}^N \alpha_i y_i = 0$, we obtain the maximum-margin hyperplane in the high-dimensional space and hence the classifier in the original space.
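As an illustration of how this dual problem can be solved numerically, the following minimal Python sketch assembles the QP for an RBF kernel and passes it to the cvxopt QP solver. The function name svm_dual_qp, the RBF bandwidth gamma and the soft-margin box constraint $0 \le \alpha_i \le c$ (added so the sketch also copes with non-separable data) are assumptions made for the example, not part of the formulation above.

    import numpy as np
    from cvxopt import matrix, solvers

    def svm_dual_qp(X, y, c=1.0, gamma=1.0):
        """Sketch: solve the SVM dual QP with an RBF kernel via cvxopt.
        X: (N, n) array of inputs, y: (N,) array of +/-1 labels."""
        N = X.shape[0]
        # RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
        sq = np.sum(X**2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
        P = matrix(np.outer(y, y) * K)                    # elementwise product of y y^T and K
        q = matrix(-np.ones(N))                           # minimizing q^T alpha maximizes sum_i alpha_i
        G = matrix(np.vstack([-np.eye(N), np.eye(N)]))    # box constraint 0 <= alpha_i <= c
        h = matrix(np.hstack([np.zeros(N), c * np.ones(N)]))
        A = matrix(y.reshape(1, -1).astype(float))        # equality constraint sum_i alpha_i y_i = 0
        b = matrix(0.0)
        sol = solvers.qp(P, q, G, h, A, b)
        return np.array(sol['x']).ravel()                 # Lagrange multipliers alpha_i

On the solution, the nonzero $\alpha_i$ identify the support vectors, and the resulting classifier is $\operatorname{sign}\left( \sum_i \alpha_i y_i K(x, x_i) + b \right)$, with the bias $b$ recovered from any support vector.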
The least-squares version of the SVM classifier is obtained by reformulating the minimization problem as

$$\min_{w, b, e}\; J_2(w, b, e) = \frac{\mu}{2} w^T w + \frac{\zeta}{2} \sum_{i=1}^N e_i^2,$$

subject to the equality constraints

$$y_i \left[ w^T \phi(x_i) + b \right] = 1 - e_i, \qquad i = 1, \ldots, N.$$
The least-squares SVM (LS-SVM) classifier formulation above implicitly corresponds to a regression interpretation with binary targets $y_i = \pm 1$.

Using $y_i^2 = 1$, we have

$$\sum_{i=1}^N e_i^2 = \sum_{i=1}^N (y_i e_i)^2 = \sum_{i=1}^N \left( y_i - \left[ w^T \phi(x_i) + b \right] \right)^2,$$

with $y_i e_i = y_i - \left[ w^T \phi(x_i) + b \right]$. Notice that this error would also make sense for least-squares data fitting, so that the same end result also holds for the regression case.
Hence the LS-SVM classifier formulation is equivalent to

$$J_2(w, b) = \mu E_W + \zeta E_D$$

with

$$E_W = \frac{1}{2} w^T w, \qquad E_D = \frac{1}{2} \sum_{i=1}^N e_i^2 = \frac{1}{2} \sum_{i=1}^N \left( y_i - \left[ w^T \phi(x_i) + b \right] \right)^2.$$
Both $\mu$ and $\zeta$ should be considered as hyperparameters that tune the amount of regularization versus the sum-squared error. The solution depends only on the ratio $\gamma = \zeta / \mu$, so the original formulation uses only $\gamma$ as a tuning parameter. We use both $\mu$ and $\zeta$ as parameters in order to provide a Bayesian interpretation of the LS-SVM.
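For concreteness, here is a small numerical sketch of this cost for the special case of a linear feature map $\phi(x) = x$, so that everything can be written explicitly; the function name ls_svm_cost and the linear map are assumptions made only for the example.

    import numpy as np

    def ls_svm_cost(w, b, X, y, mu, zeta):
        """Primal LS-SVM cost J_2 = mu*E_W + zeta*E_D, with phi(x) = x for illustration."""
        e = y - (X @ w + b)                  # residuals e_i = y_i - (w^T phi(x_i) + b)
        E_W = 0.5 * float(w @ w)             # regularization term
        E_D = 0.5 * float(np.sum(e**2))      # sum-of-squares error term
        return mu * E_W + zeta * E_D

Scaling $\mu$ and $\zeta$ by a common factor only rescales the cost, which is why the minimizer depends on the ratio $\gamma = \zeta / \mu$ alone.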
The solution of the LS-SVM regressor will be obtained after we construct the Lagrangian function:

$$L_2(w, b, e; \alpha) = J_2(w, b, e) - \sum_{i=1}^N \alpha_i \left\{ \left[ w^T \phi(x_i) + b \right] + e_i - y_i \right\},$$

where $\alpha_i \in \mathbb{R}$ are the Lagrange multipliers. The conditions for optimality are

$$\frac{\partial L_2}{\partial w} = 0 \;\Rightarrow\; w = \frac{1}{\mu} \sum_{i=1}^N \alpha_i \phi(x_i), \qquad \frac{\partial L_2}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^N \alpha_i = 0,$$

$$\frac{\partial L_2}{\partial e_i} = 0 \;\Rightarrow\; \alpha_i = \zeta e_i, \qquad \frac{\partial L_2}{\partial \alpha_i} = 0 \;\Rightarrow\; y_i = w^T \phi(x_i) + b + e_i, \qquad i = 1, \ldots, N.$$
For the kernel function $K(\cdot, \cdot)$ one typically has the following choices: the linear kernel $K(x, x_i) = x_i^T x$; the polynomial kernel of degree $d$, $K(x, x_i) = \left( 1 + x_i^T x / c \right)^d$; the radial basis function (RBF) kernel $K(x, x_i) = \exp\left( -\| x - x_i \|^2 / \sigma^2 \right)$; and the MLP kernel $K(x, x_i) = \tanh\left( k \, x_i^T x + \theta \right)$, where $d$, $c$, $\sigma$, $k$ and $\theta$ are constants. Notice that the Mercer condition holds for all $c, \sigma \in \mathbb{R}^+$ values in the polynomial and RBF case, but not for all possible choices of $k$ and $\theta$ in the MLP case. The scale parameters $c$, $\sigma$ and $k$ determine the scaling of the inputs in the polynomial, RBF and MLP kernel function. This scaling is related to the bandwidth of the kernel in statistics, where it is shown that the bandwidth is an important parameter of the generalization behavior of a kernel method.
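Eliminating $w$ and $e$ from the optimality conditions above yields, after rescaling the Lagrange multipliers so that only $\gamma = \zeta / \mu$ appears, a linear system in $(b, \alpha)$ instead of a QP. The following minimal numpy sketch solves that system for an RBF kernel; the function names and the choice of kernel are assumptions made for the example.

    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        """RBF kernel matrix K_ij = exp(-||a_i - b_j||^2 / sigma^2)."""
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
        return np.exp(-d2 / sigma**2)

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        """Solve the LS-SVM linear system for the bias b and multipliers alpha."""
        N = len(y)
        Omega = rbf_kernel(X, X, sigma)
        # Block system  [[0, 1_N^T], [1_N, Omega + I/gamma]] [b; alpha] = [0; y]
        A = np.zeros((N + 1, N + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = Omega + np.eye(N) / gamma
        rhs = np.concatenate(([0.0], np.asarray(y, dtype=float)))
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]

    def lssvm_predict(X_new, X_train, b, alpha, sigma=1.0):
        """Decision values sum_i alpha_i K(x, x_i) + b; take the sign for class labels."""
        return rbf_kernel(X_new, X_train, sigma) @ alpha + b

New points are then evaluated as $y(x) = \sum_{i=1}^N \alpha_i K(x, x_i) + b$, and the classifier takes the sign of this value.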
A Bayesian interpretation of the SVM has been proposed by Smola et al. They showed that the use of different kernels in SVM can be regarded as defining different prior probability distributions on the functional space, as $P[f] \propto \exp\left( -\beta \| \hat{P} f \|^2 \right)$. Here $\beta > 0$ is a constant and $\hat{P}$ is the regularization operator corresponding to the selected kernel.
A general Bayesian evidence framework was developed by MacKay,[3][4][5] who applied it to regression, feedforward neural networks and classification networks. Given a data set $D$, a model $\mathcal{M}$ with parameter vector $w$ and a so-called hyperparameter or regularization parameter $\lambda$, Bayesian inference is constructed on three levels of inference:
At the first level, for a given value of $\lambda$, the posterior distribution of $w$ is inferred by Bayes' rule

$$p(w \mid D, \lambda, \mathcal{M}) \propto p(D \mid w, \lambda, \mathcal{M}) \, p(w \mid \lambda, \mathcal{M}).$$

The second level of inference determines the value of $\lambda$ by maximizing

$$p(\lambda \mid D, \mathcal{M}) \propto p(D \mid \lambda, \mathcal{M}) \, p(\lambda \mid \mathcal{M}).$$

The third level of inference in the evidence framework ranks different models by examining their posterior probabilities

$$p(\mathcal{M} \mid D) \propto p(D \mid \mathcal{M}) \, p(\mathcal{M}).$$
We can see that the Bayesian evidence framework is a unified theory for learning the model and for model selection.
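One link between the levels, not stated explicitly above, is that the quantity maximized at the second level, $p(D \mid \lambda, \mathcal{M})$, is exactly the normalizing constant (the "evidence") of the first-level posterior:

$$p(D \mid \lambda, \mathcal{M}) = \int p(D \mid w, \lambda, \mathcal{M}) \, p(w \mid \lambda, \mathcal{M}) \, dw,$$

and likewise $p(D \mid \mathcal{M})$ at the third level is the normalizing constant of the second.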
Kwok used the Bayesian evidence framework to interpret the formulation of the SVM and to perform model selection; he also applied the framework to support vector regression.
Now, given the data points $D = \{(x_i, y_i)\}_{i=1}^N$ and the hyperparameters $\mu$ and $\zeta$ of the model $\mathcal{M}$, the model parameters $w$ and $b$ are estimated by maximizing the posterior $p(w, b \mid D, \log\mu, \log\zeta, \mathcal{M})$. Applying Bayes' rule, we obtain

$$p(w, b \mid D, \log\mu, \log\zeta, \mathcal{M}) = \frac{p(D \mid w, b, \log\mu, \log\zeta, \mathcal{M}) \, p(w, b \mid \log\mu, \log\zeta, \mathcal{M})}{p(D \mid \log\mu, \log\zeta, \mathcal{M})},$$

where $p(D \mid \log\mu, \log\zeta, \mathcal{M})$ is a normalizing constant such that the integral over all possible $w$ and $b$ equals 1.
We assume that $w$ and $b$ are independent of the hyperparameter $\zeta$ and are conditionally independent of each other, i.e., we assume

$$p(w, b \mid \log\mu, \log\zeta, \mathcal{M}) = p(w \mid \log\mu, \mathcal{M}) \, p(b \mid \log\sigma_b, \mathcal{M}).$$

When $\sigma_b \to \infty$, the distribution of $b$ approaches a uniform distribution. Furthermore, we assume that $w$ and $b$ have Gaussian distributions, so the a priori distribution of $w$ and $b$ with $\sigma_b \to \infty$ becomes

$$p(w, b \mid \log\mu) = \left( \frac{\mu}{2\pi} \right)^{n_f/2} \exp\left( -\frac{\mu}{2} w^T w \right) \frac{1}{\sqrt{2\pi}\,\sigma_b} \exp\left( -\frac{b^2}{2\sigma_b^2} \right) \propto \left( \frac{\mu}{2\pi} \right)^{n_f/2} \exp\left( -\frac{\mu}{2} w^T w \right).$$

Here $n_f$ is the dimension of the feature space, equal to the dimension of $w$.
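Taking the negative logarithm of this prior makes its role as the regularization term explicit (a spelled-out step, using $E_W = \tfrac{1}{2} w^T w$ from above):

$$-\ln p(w, b \mid \log\mu) = \frac{\mu}{2} w^T w + \text{const} = \mu E_W + \text{const}.$$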
The probability $p(D \mid w, b, \log\zeta, \mathcal{M})$ is assumed to depend only on $w$, $b$, $\zeta$ and $\mathcal{M}$. We assume that the data points are independently and identically distributed (i.i.d.), so that

$$p(D \mid w, b, \log\zeta, \mathcal{M}) = \prod_{i=1}^N p(x_i, y_i \mid w, b, \log\zeta, \mathcal{M}).$$

In order to obtain the least-squares cost function, it is assumed that the probability of a data point is proportional to

$$p(x_i, y_i \mid w, b, \log\zeta, \mathcal{M}) \propto p(e_i \mid w, b, \log\zeta, \mathcal{M}).$$

A Gaussian distribution is taken for the errors $e_i = y_i - \left( w^T \phi(x_i) + b \right)$:

$$p(e_i \mid w, b, \log\zeta, \mathcal{M}) = \sqrt{\frac{\zeta}{2\pi}} \exp\left( -\frac{\zeta e_i^2}{2} \right).$$

It is assumed that $w$ and $b$ are determined in such a way that the class centers $\hat{m}_-$ and $\hat{m}_+$ are mapped onto the targets -1 and +1, respectively. The projections $w^T \phi(x) + b$ of the class elements $\phi(x)$ follow a multivariate Gaussian distribution with variance $1/\zeta$.
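Spelling out the negative log-likelihood under these assumptions shows where the sum-of-squares term $E_D$ comes from:

$$-\ln p(D \mid w, b, \log\zeta, \mathcal{M}) = -\sum_{i=1}^N \ln p(e_i \mid w, b, \log\zeta, \mathcal{M}) + \text{const} = \frac{\zeta}{2} \sum_{i=1}^N e_i^2 + \text{const} = \zeta E_D + \text{const}.$$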
Combining the preceding expressions, and neglecting all constants, Bayes' rule becomes

$$p(w, b \mid D, \log\mu, \log\zeta, \mathcal{M}) \propto \exp\left( -\frac{\mu}{2} w^T w - \frac{\zeta}{2} \sum_{i=1}^N e_i^2 \right) = \exp\left( -J_2(w, b) \right).$$

The maximum posterior density estimates $w_{MP}$ and $b_{MP}$ are then obtained by minimizing the negative logarithm of this expression, so we arrive again at the LS-SVM cost function $J_2(w, b) = \mu E_W + \zeta E_D$ derived above.
Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B., Vandewalle, J., Least Squares Support Vector Machines, World Scientific Pub. Co., Singapore, 2002. ISBN 981-238-151-1.
Suykens, J. A. K., Vandewalle, J., "Least squares support vector machine classifiers", Neural Processing Letters, vol. 9, no. 3, Jun. 1999, pp. 293–300.
Vapnik, V., The Nature of Statistical Learning Theory, Springer-Verlag, 1995. ISBN 0-387-98780-0.
MacKay, D. J. C., "Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks", Network: Computation in Neural Systems, vol. 6, 1995, pp. 469–505.
www.esat.kuleuven.be/sista/lssvmlab/ "Least squares support vector machine Lab (LS-SVMlab) toolbox contains Matlab/C implementations for a number of LS-SVM algorithms".
www.kernel-machines.org "Support Vector Machines and Kernel based methods (Smola & Schölkopf)".
www.gaussianprocess.org "Gaussian Processes: Data modeling using Gaussian Process priors over functions for regression and classification (MacKay, Williams)".
www.support-vector.net "Support Vector Machines and kernel based methods (Cristianini)".
dlib: Contains a least-squares SVM implementation for large-scale datasets.