
Least-squares support vector machine

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

Least-squares support-vector machines (LS-SVMs) are least-squares versions of support-vector machines (SVMs), a family of supervised learning methods used in statistics and statistical modeling for classification and regression analysis. In this version the solution is found by solving a set of linear equations instead of the convex quadratic programming (QP) problem required for classical SVMs. Least-squares SVM classifiers were proposed by Johan Suykens and Joos Vandewalle. LS-SVMs are a class of kernel-based learning methods.

From support-vector machine to least-squares support-vector machine

Given a training set { x i , y i } i = 1 N {\displaystyle \{x_{i},y_{i}\}_{i=1}^{N}} with input data x i R n {\displaystyle x_{i}\in \mathbb {R} ^{n}} and corresponding binary class labels y i { 1 , + 1 } {\displaystyle y_{i}\in \{-1,+1\}} , the SVM classifier, according to Vapnik's original formulation, satisfies the following conditions:

[Figure: The spiral data; y i = + 1 {\displaystyle y_{i}=+1} for blue data points, y i = 1 {\displaystyle y_{i}=-1} for red data points.]
{ w T ϕ ( x i ) + b 1 , if  y i = + 1 , w T ϕ ( x i ) + b 1 , if  y i = 1 , {\displaystyle {\begin{cases}w^{T}\phi (x_{i})+b\geq 1,&{\text{if }}\quad y_{i}=+1,\\w^{T}\phi (x_{i})+b\leq -1,&{\text{if }}\quad y_{i}=-1,\end{cases}}}

which is equivalent to

{\displaystyle y_{i}\left[w^{T}\phi (x_{i})+b\right]\geq 1,\quad i=1,\ldots ,N,}

where ϕ ( x ) {\displaystyle \phi (x)} is the nonlinear map from original space to the high- or infinite-dimensional space.
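
As a concrete illustration (not part of the original formulation), the combined condition can be checked directly on a data set once a feature map is fixed. The sketch below assumes the identity feature map, i.e. a linear kernel; the helper name satisfies_margin is hypothetical.

import numpy as np

def satisfies_margin(w, b, X, y):
    """Check Vapnik's hard-margin conditions y_i (w^T phi(x_i) + b) >= 1.

    Here phi is taken to be the identity map (linear kernel); for a nonlinear
    phi the same check would be applied to the mapped features.
    """
    margins = y * (X @ w + b)
    return bool(np.all(margins >= 1.0))

# Toy example: two points on either side of the hyperplane w = (1, 0), b = 0.
X = np.array([[2.0, 0.0], [-2.0, 1.0]])
y = np.array([+1.0, -1.0])
print(satisfies_margin(np.array([1.0, 0.0]), 0.0, X, y))  # True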

[Figure: Inseparable data.]

In case such a separating hyperplane does not exist, we introduce so-called slack variables ξ i {\displaystyle \xi _{i}} such that

{\displaystyle {\begin{cases}y_{i}\left[w^{T}\phi (x_{i})+b\right]\geq 1-\xi _{i},&i=1,\ldots ,N,\\\xi _{i}\geq 0,&i=1,\ldots ,N.\end{cases}}}
[Figure: The result of the SVM classifier.]

According to the structural risk minimization principle, the risk bound is minimized by the following minimization problem:

min J 1 ( w , ξ ) = 1 2 w T w + c i = 1 N ξ i , {\displaystyle \min J_{1}(w,\xi )={\frac {1}{2}}w^{T}w+c\sum \limits _{i=1}^{N}\xi _{i},}
{\displaystyle {\text{Subject to }}{\begin{cases}y_{i}\left[w^{T}\phi (x_{i})+b\right]\geq 1-\xi _{i},&i=1,\ldots ,N,\\\xi _{i}\geq 0,&i=1,\ldots ,N.\end{cases}}}

To solve this problem, we could construct the Lagrangian function:

{\displaystyle L_{1}(w,b,\xi ,\alpha ,\beta )={\frac {1}{2}}w^{T}w+c\sum \limits _{i=1}^{N}\xi _{i}-\sum \limits _{i=1}^{N}\alpha _{i}\left\{y_{i}\left[w^{T}\phi (x_{i})+b\right]-1+\xi _{i}\right\}-\sum \limits _{i=1}^{N}\beta _{i}\xi _{i},}

where α i 0 ,   β i 0   ( i = 1 , , N ) {\displaystyle \alpha _{i}\geq 0,\ \beta _{i}\geq 0\ (i=1,\ldots ,N)} are the Lagrangian multipliers. The optimal point will be in the saddle point of the Lagrangian function, and then we obtain

{ L 1 w = 0 w = i = 1 N α i y i ϕ ( x i ) , L 1 b = 0 i = 1 N α i y i = 0 , L 1 ξ i = 0 α i + β i = c , i = 1 , , N 0 α i c , i = 1 , , N . {\displaystyle {\begin{cases}{\frac {\partial L_{1}}{\partial w}}=0\quad \to \quad w=\sum \limits _{i=1}^{N}\alpha _{i}y_{i}\phi (x_{i}),\\{\frac {\partial L_{1}}{\partial b}}=0\quad \to \quad \sum \limits _{i=1}^{N}\alpha _{i}y_{i}=0,\\{\frac {\partial L_{1}}{\partial \xi _{i}}}=0\quad \to \quad \alpha _{i}+\beta _{i}=c,\;i=1,\ldots ,N\quad \to \quad 0\leq \alpha _{i}\leq c,\;i=1,\ldots ,N.\end{cases}}} (1)

By substituting the expression for w {\displaystyle w} into the Lagrangian and using the remaining optimality conditions to eliminate w {\displaystyle w} , ξ {\displaystyle \xi } and β {\displaystyle \beta } , we obtain the following dual quadratic programming problem:

max Q 1 ( α ) = 1 2 i , j = 1 N α i α j y i y j K ( x i , x j ) + i = 1 N α i , {\displaystyle \max Q_{1}(\alpha )=-{\frac {1}{2}}\sum \limits _{i,j=1}^{N}{\alpha _{i}\alpha _{j}y_{i}y_{j}K(x_{i},x_{j})}+\sum \limits _{i=1}^{N}\alpha _{i},}

where K ( x i , x j ) = ϕ ( x i ) , ϕ ( x j ) {\displaystyle K(x_{i},x_{j})=\left\langle \phi (x_{i}),\phi (x_{j})\right\rangle } is called the kernel function. Solving this QP problem subject to constraints in (1), we will get the hyperplane in the high-dimensional space and hence the classifier in the original space.
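
As a sketch of how this dual problem is assembled in practice (assuming an RBF kernel; the function names are illustrative), the objective can be written in matrix form as −½ α^T Q α + 1^T α with Q_ij = y_i y_j K(x_i, x_j). Maximizing it subject to 0 ≤ α_i ≤ c and Σ_i α_i y_i = 0 would still require a quadratic-programming solver, which is not shown here.

import numpy as np

def rbf_kernel_matrix(X, sigma=1.0):
    """K_ij = exp(-||x_i - x_j||^2 / sigma^2) for rows of X."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / sigma ** 2)

def dual_objective(alpha, X, y, sigma=1.0):
    """Q1(alpha) = -1/2 sum_ij alpha_i alpha_j y_i y_j K(x_i, x_j) + sum_i alpha_i."""
    K = rbf_kernel_matrix(X, sigma)
    Q = (y[:, None] * y[None, :]) * K
    return -0.5 * alpha @ Q @ alpha + alpha.sum()

# The maximization is carried out subject to 0 <= alpha_i <= c and sum_i alpha_i y_i = 0.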

Least-squares SVM formulation

The least-squares version of the SVM classifier is obtained by reformulating the minimization problem as

min J 2 ( w , b , e ) = μ 2 w T w + ζ 2 i = 1 N e i 2 , {\displaystyle \min J_{2}(w,b,e)={\frac {\mu }{2}}w^{T}w+{\frac {\zeta }{2}}\sum \limits _{i=1}^{N}e_{i}^{2},}

subject to the equality constraints

{\displaystyle y_{i}\left[w^{T}\phi (x_{i})+b\right]=1-e_{i},\quad i=1,\ldots ,N.}

The least-squares SVM (LS-SVM) classifier formulation above implicitly corresponds to a regression interpretation with binary targets y i = ± 1 {\displaystyle y_{i}=\pm 1} .

Using y i 2 = 1 {\displaystyle y_{i}^{2}=1} , we have

{\displaystyle \sum \limits _{i=1}^{N}e_{i}^{2}=\sum \limits _{i=1}^{N}(y_{i}e_{i})^{2}=\sum \limits _{i=1}^{N}\left(y_{i}-(w^{T}\phi (x_{i})+b)\right)^{2},}

with e i = y i ( w T ϕ ( x i ) + b ) {\displaystyle e_{i}=y_{i}-(w^{T}\phi (x_{i})+b)} . Notice that this error would also make sense for least-squares data fitting, so that the same end result holds for the regression case.
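
This identity can be verified numerically for arbitrary model outputs f_i = w^T φ(x_i) + b and labels ±1 (a small illustrative check, not from the original article):

import numpy as np

rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=5)     # binary targets
f = rng.normal(size=5)                  # model outputs w^T phi(x_i) + b

e_constraint = 1.0 - y * f              # e_i from the equality constraints
e_regression = y - f                    # regression residual y_i - f_i

# Because y_i^2 = 1, the two give the same sum of squared errors.
assert np.allclose(np.sum(e_constraint ** 2), np.sum(e_regression ** 2))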

Hence the LS-SVM classifier formulation is equivalent to

J 2 ( w , b , e ) = μ E W + ζ E D {\displaystyle J_{2}(w,b,e)=\mu E_{W}+\zeta E_{D}}

with E W = 1 2 w T w {\displaystyle E_{W}={\frac {1}{2}}w^{T}w} and E D = 1 2 i = 1 N e i 2 = 1 2 i = 1 N ( y i ( w T ϕ ( x i ) + b ) ) 2 . {\displaystyle E_{D}={\frac {1}{2}}\sum \limits _{i=1}^{N}e_{i}^{2}={\frac {1}{2}}\sum \limits _{i=1}^{N}\left(y_{i}-(w^{T}\phi (x_{i})+b)\right)^{2}.}

[Figure: The result of the LS-SVM classifier.]

Both μ {\displaystyle \mu } and ζ {\displaystyle \zeta } should be considered as hyperparameters that tune the amount of regularization versus the sum of squared errors. The solution depends only on the ratio γ = ζ / μ {\displaystyle \gamma =\zeta /\mu } , so the original formulation uses only γ {\displaystyle \gamma } as a tuning parameter. We use both μ {\displaystyle \mu } and ζ {\displaystyle \zeta } as parameters in order to provide a Bayesian interpretation of the LS-SVM.

The solution of the LS-SVM regressor is obtained by constructing the Lagrangian function:

{\displaystyle L_{2}(w,b,e,\alpha )=J_{2}(w,e)-\sum \limits _{i=1}^{N}\alpha _{i}\left\{\left[w^{T}\phi (x_{i})+b\right]+e_{i}-y_{i}\right\}={\frac {1}{2}}w^{T}w+{\frac {\gamma }{2}}\sum \limits _{i=1}^{N}e_{i}^{2}-\sum \limits _{i=1}^{N}\alpha _{i}\left\{\left[w^{T}\phi (x_{i})+b\right]+e_{i}-y_{i}\right\},}

where α i R {\displaystyle \alpha _{i}\in \mathbb {R} } are the Lagrange multipliers. The conditions for optimality are

{ L 2 w = 0 w = i = 1 N α i ϕ ( x i ) , L 2 b = 0 i = 1 N α i = 0 , L 2 e i = 0 α i = γ e i , i = 1 , , N , L 2 α i = 0 y i = w T ϕ ( x i ) + b + e i , i = 1 , , N . {\displaystyle {\begin{cases}{\frac {\partial L_{2}}{\partial w}}=0\quad \to \quad w=\sum \limits _{i=1}^{N}\alpha _{i}\phi (x_{i}),\\{\frac {\partial L_{2}}{\partial b}}=0\quad \to \quad \sum \limits _{i=1}^{N}\alpha _{i}=0,\\{\frac {\partial L_{2}}{\partial e_{i}}}=0\quad \to \quad \alpha _{i}=\gamma e_{i},\;i=1,\ldots ,N,\\{\frac {\partial L_{2}}{\partial \alpha _{i}}}=0\quad \to \quad y_{i}=w^{T}\phi (x_{i})+b+e_{i},\,i=1,\ldots ,N.\end{cases}}}

Elimination of w {\displaystyle w} and e {\displaystyle e} will yield a linear system instead of a quadratic programming problem:

{\displaystyle \left[{\begin{matrix}0&1_{N}^{T}\\1_{N}&\Omega +\gamma ^{-1}I_{N}\end{matrix}}\right]\left[{\begin{matrix}b\\\alpha \end{matrix}}\right]=\left[{\begin{matrix}0\\Y\end{matrix}}\right],}

with Y = [ y 1 , … , y N ] T {\displaystyle Y=[y_{1},\ldots ,y_{N}]^{T}} , 1 N = [ 1 , … , 1 ] T {\displaystyle 1_{N}=[1,\ldots ,1]^{T}} and α = [ α 1 , … , α N ] T {\displaystyle \alpha =[\alpha _{1},\ldots ,\alpha _{N}]^{T}} . Here, I N {\displaystyle I_{N}} is an N × N {\displaystyle N\times N} identity matrix, and Ω R N × N {\displaystyle \Omega \in \mathbb {R} ^{N\times N}} is the kernel matrix defined by Ω i j = ϕ ( x i ) T ϕ ( x j ) = K ( x i , x j ) {\displaystyle \Omega _{ij}=\phi (x_{i})^{T}\phi (x_{j})=K(x_{i},x_{j})} .
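
A minimal sketch of this construction in NumPy, assuming an RBF kernel with bandwidth σ and regularization parameter γ; the function names are illustrative and not those of the LS-SVMlab toolbox. Training solves the (N+1)×(N+1) linear system above, and prediction uses the classifier sign(Σ_i α_i K(x, x_i) + b) that follows from w = Σ_i α_i φ(x_i).

import numpy as np

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """Solve [[0, 1_N^T], [1_N, Omega + I/gamma]] [b; alpha] = [0; Y] for alpha and b."""
    N = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Omega = np.exp(-sq_dists / sigma ** 2)            # kernel matrix K(x_i, x_j)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0                                    # 1_N^T
    A[1:, 0] = 1.0                                    # 1_N
    A[1:, 1:] = Omega + np.eye(N) / gamma             # Omega + gamma^{-1} I_N
    rhs = np.concatenate(([0.0], y))                  # [0; Y] with y_i in {-1, +1}
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                            # alpha, b

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    """Classify test points with sign(sum_i alpha_i K(x, x_i) + b)."""
    sq_dists = np.sum((X_test[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / sigma ** 2)
    return np.sign(K @ alpha + b)

In practice the hyperparameters γ and σ are typically selected by cross-validation or by the Bayesian evidence procedure described in the section below.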

Kernel function K

For the kernel function K(•, •) one typically has the following choices:

  • Linear kernel : K ( x , x i ) = x i T x , {\displaystyle K(x,x_{i})=x_{i}^{T}x,}
  • Polynomial kernel of degree d {\displaystyle d} : K ( x , x i ) = ( 1 + x i T x / c ) d , {\displaystyle K(x,x_{i})=\left({1+x_{i}^{T}x/c}\right)^{d},}
  • Radial basis function RBF kernel : K ( x , x i ) = exp ( x x i 2 / σ 2 ) , {\displaystyle K(x,x_{i})=\exp \left({-\left\|{x-x_{i}}\right\|^{2}/\sigma ^{2}}\right),}
  • MLP kernel : K ( x , x i ) = tanh ( k x i T x + θ ) , {\displaystyle K(x,x_{i})=\tanh \left({k\,x_{i}^{T}x+\theta }\right),}

where d {\displaystyle d} , c {\displaystyle c} , σ {\displaystyle \sigma } , k {\displaystyle k} and θ {\displaystyle \theta } are constants. Notice that the Mercer condition holds for all c , σ R + {\displaystyle c,\sigma \in \mathbb {R} ^{+}} and d N {\displaystyle d\in \mathbb {N} } values in the polynomial and RBF case, but not for all possible choices of k {\displaystyle k} and θ {\displaystyle \theta } in the MLP case. The scale parameters c {\displaystyle c} , σ {\displaystyle \sigma } and k {\displaystyle k} determine the scaling of the inputs in the polynomial, RBF and MLP kernel functions. This scaling is related to the bandwidth of the kernel in statistics, where the bandwidth is known to be an important parameter governing the generalization behavior of a kernel method.
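
For reference, the four kernels listed above can be written as plain functions of an input pair (a small sketch; the parameter names follow the constants above, and x and xi are assumed to be one-dimensional NumPy arrays):

import numpy as np

def linear_kernel(x, xi):
    return xi @ x

def polynomial_kernel(x, xi, c=1.0, d=3):
    return (1.0 + (xi @ x) / c) ** d

def rbf_kernel(x, xi, sigma=1.0):
    return np.exp(-np.sum((x - xi) ** 2) / sigma ** 2)

def mlp_kernel(x, xi, k=1.0, theta=0.0):
    # Satisfies the Mercer condition only for some choices of k and theta.
    return np.tanh(k * (xi @ x) + theta)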

Bayesian interpretation for LS-SVM

A Bayesian interpretation of the SVM has been proposed by Smola et al. They showed that the use of different kernels in SVM can be regarded as defining different prior probability distributions on the functional space, as P [ f ] exp ( β P ^ f 2 ) {\displaystyle P[f]\propto \exp \left({-\beta \left\|{{\hat {P}}f}\right\|^{2}}\right)} . Here β > 0 {\displaystyle \beta >0} is a constant and P ^ {\displaystyle {\hat {P}}} is the regularization operator corresponding to the selected kernel.

A general Bayesian evidence framework was developed by MacKay, who applied it to regression, feed-forward neural networks and classification networks. Given a data set D {\displaystyle D} , a model M {\displaystyle \mathbb {M} } with parameter vector w {\displaystyle w} and a so-called hyperparameter or regularization parameter λ {\displaystyle \lambda } , Bayesian inference is carried out at three levels:

  • At the first level, for a given value of λ {\displaystyle \lambda } , the posterior distribution of w {\displaystyle w} is inferred by Bayes' rule
p ( w | D , λ , M ) p ( D | w , M ) p ( w | λ , M ) . {\displaystyle p(w|D,\lambda ,\mathbb {M} )\propto p(D|w,\mathbb {M} )p(w|\lambda ,\mathbb {M} ).}
  • The second level of inference determines the value of λ {\displaystyle \lambda } by maximizing
p ( λ | D , M ) p ( D | λ , M ) p ( λ | M ) . {\displaystyle p(\lambda |D,\mathbb {M} )\propto p(D|\lambda ,\mathbb {M} )p(\lambda |\mathbb {M} ).}
  • The third level of inference in the evidence framework ranks different models by examining their posterior probabilities
p ( M | D ) p ( D | M ) p ( M ) . {\displaystyle p(\mathbb {M} |D)\propto p(D|\mathbb {M} )p(\mathbb {M} ).}

The Bayesian evidence framework thus provides a unified theory for learning the model parameters and for model selection. Kwok used the Bayesian evidence framework to interpret the formulation of the SVM and its model selection, and also applied it to support vector regression.

Now, given the data points { x i , y i } i = 1 N {\displaystyle \{x_{i},y_{i}\}_{i=1}^{N}} and the hyperparameters μ {\displaystyle \mu } and ζ {\displaystyle \zeta } of the model M {\displaystyle \mathbb {M} } , the model parameters w {\displaystyle w} and b {\displaystyle b} are estimated by maximizing the posterior p ( w , b | D , log μ , log ζ , M ) {\displaystyle p(w,b|D,\log \mu ,\log \zeta ,\mathbb {M} )} . Applying Bayes’ rule, we obtain

p ( w , b | D , log μ , log ζ , M ) = p ( D | w , b , log μ , log ζ , M ) p ( w , b | log μ , log ζ , M ) p ( D | log μ , log ζ , M ) , {\displaystyle p(w,b|D,\log \mu ,\log \zeta ,\mathbb {M} )={\frac {p(D|w,b,\log \mu ,\log \zeta ,\mathbb {M} )p(w,b|\log \mu ,\log \zeta ,\mathbb {M} )}{p(D|\log \mu ,\log \zeta ,\mathbb {M} )}},}

where p ( D | log μ , log ζ , M ) {\displaystyle p(D|\log \mu ,\log \zeta ,\mathbb {M} )} is a normalizing constant such that the integral over all possible w {\displaystyle w} and b {\displaystyle b} equals 1. We assume that w {\displaystyle w} and b {\displaystyle b} are independent of the hyperparameter ζ {\displaystyle \zeta } , and that they are conditionally independent of each other, i.e., we assume

p ( w , b | log μ , log ζ , M ) = p ( w | log μ , M ) p ( b | log σ b , M ) . {\displaystyle p(w,b|\log \mu ,\log \zeta ,\mathbb {M} )=p(w|\log \mu ,\mathbb {M} )p(b|\log \sigma _{b},\mathbb {M} ).}

When σ b {\displaystyle \sigma _{b}\to \infty } , the distribution of b {\displaystyle b} approaches a uniform distribution. Furthermore, we assume that w {\displaystyle w} and b {\displaystyle b} have Gaussian distributions, so that the a priori distribution of w {\displaystyle w} and b {\displaystyle b} with σ b {\displaystyle \sigma _{b}\to \infty } is

{\displaystyle {\begin{aligned}p(w,b|\log \mu ,\log \zeta ,\mathbb {M} )&=\left({\frac {\mu }{2\pi }}\right)^{\frac {n_{f}}{2}}\exp \left({-{\frac {\mu }{2}}w^{T}w}\right){\frac {1}{\sqrt {2\pi \sigma _{b}}}}\exp \left({-{\frac {b^{2}}{2\sigma _{b}}}}\right)\\&\propto \left({\frac {\mu }{2\pi }}\right)^{\frac {n_{f}}{2}}\exp \left({-{\frac {\mu }{2}}w^{T}w}\right).\end{aligned}}}

Here n f {\displaystyle n_{f}} is the dimensionality of the feature space, same as the dimensionality of w {\displaystyle w} .

The likelihood p ( D | w , b , log μ , log ζ , M ) {\displaystyle p(D|w,b,\log \mu ,\log \zeta ,\mathbb {M} )} is assumed to depend only on w , b , ζ {\displaystyle w,b,\zeta } and M {\displaystyle \mathbb {M} } . We assume that the data points are independent and identically distributed (i.i.d.), so that:

p ( D | w , b , log ζ , M ) = i = 1 N p ( x i , y i | w , b , log ζ , M ) . {\displaystyle p(D|w,b,\log \zeta ,\mathbb {M} )=\prod \limits _{i=1}^{N}{p(x_{i},y_{i}|w,b,\log \zeta ,\mathbb {M} )}.}

In order to obtain the least-squares cost function, it is assumed that the probability of a data point is proportional to:

p ( x i , y i | w , b , log ζ , M ) p ( e i | w , b , log ζ , M ) . {\displaystyle p(x_{i},y_{i}|w,b,\log \zeta ,\mathbb {M} )\propto p(e_{i}|w,b,\log \zeta ,\mathbb {M} ).}

A Gaussian distribution is taken for the errors e i = y i ( w T ϕ ( x i ) + b ) {\displaystyle e_{i}=y_{i}-(w^{T}\phi (x_{i})+b)} as:

p ( e i | w , b , log ζ , M ) = ζ 2 π exp ( ζ e i 2 2 ) . {\displaystyle p(e_{i}|w,b,\log \zeta ,\mathbb {M} )={\sqrt {\frac {\zeta }{2\pi }}}\exp \left({-{\frac {\zeta e_{i}^{2}}{2}}}\right).}

It is assumed that w {\displaystyle w} and b {\displaystyle b} are determined in such a way that the class centers m ^ {\displaystyle {\hat {m}}_{-}} and m ^ + {\displaystyle {\hat {m}}_{+}} are mapped onto the targets −1 and +1, respectively. The projections w T ϕ ( x ) + b {\displaystyle w^{T}\phi (x)+b} of the class elements ϕ ( x ) {\displaystyle \phi (x)} follow a Gaussian distribution with variance 1 / ζ {\displaystyle 1/\zeta } .

Combining the preceding expressions, and neglecting all constants, Bayes’ rule becomes

p ( w , b | D , log μ , log ζ , M ) exp ( μ 2 w T w ζ 2 i = 1 N e i 2 ) = exp ( J 2 ( w , b ) ) . {\displaystyle p(w,b|D,\log \mu ,\log \zeta ,\mathbb {M} )\propto \exp(-{\frac {\mu }{2}}w^{T}w-{\frac {\zeta }{2}}\sum \limits _{i=1}^{N}{e_{i}^{2}})=\exp(-J_{2}(w,b)).}

The maximum posterior density estimates w M P {\displaystyle w_{MP}} and b M P {\displaystyle b_{MP}} are then obtained by minimizing the negative logarithm of this posterior, which is exactly the LS-SVM cost function J 2 ( w , b ) {\displaystyle J_{2}(w,b)} given above.
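
A minimal sketch of this correspondence, under the simplifying assumption of an identity feature map (so that w^T φ(x_i) = w^T x_i); the function name is illustrative:

import numpy as np

def neg_log_posterior(w, b, X, y, mu, zeta):
    """-log p(w, b | D, ...) up to an additive constant, with phi taken as the identity map."""
    e = y - (X @ w + b)                                        # errors e_i = y_i - (w^T x_i + b)
    return 0.5 * mu * (w @ w) + 0.5 * zeta * np.sum(e ** 2)    # equals J_2(w, b)

Minimizing this function over w and b for fixed μ and ζ reproduces the primal LS-SVM problem; in the kernel setting the minimization is carried out implicitly through the dual linear system derived earlier.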

References

  1. Suykens, J. A. K.; Vandewalle, J. (1999) "Least squares support vector machine classifiers", Neural Processing Letters, 9 (3), 293–300.
  2. Vapnik, V. The nature of statistical learning theory. Springer-Verlag, New York, 1995.
  3. MacKay, D. J. C. Bayesian Interpolation. Neural Computation, 4(3): 415–447, May 1992.
  4. MacKay, D. J. C. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3): 448–472, May 1992.
  5. MacKay, D. J. C. The evidence framework applied to classification networks. Neural Computation, 4(5): 720–736, Sep. 1992.

Bibliography

  • J. A. K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, J. Vandewalle, Least Squares Support Vector Machines, World Scientific Pub. Co., Singapore, 2002. ISBN 981-238-151-1
  • Suykens J. A. K., Vandewalle J., Least squares support vector machine classifiers, Neural Processing Letters, vol. 9, no. 3, Jun. 1999, pp. 293–300.
  • Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995. ISBN 0-387-98780-0
  • MacKay, D. J. C., Probable networks and plausible predictions—A review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems, vol. 6, 1995, pp. 469–505.

External links

  • www.esat.kuleuven.be/sista/lssvmlab/ "Least squares support vector machine Lab (LS-SVMlab) toolbox contains Matlab/C implementations for a number of LS-SVM algorithms".
  • www.kernel-machines.org "Support Vector Machines and Kernel based methods (Smola & Schölkopf)".
  • www.gaussianprocess.org "Gaussian Processes: Data modeling using Gaussian Process priors over functions for regression and classification (MacKay, Williams)".
  • www.support-vector.net "Support Vector Machines and kernel based methods (Cristianini)".
  • dlib: Contains a least-squares SVM implementation for large-scale datasets.