Optimization Online


Exploiting separability in large-scale linear support vector machine training

Kristian Woodsend (K.Woodsend***at***ed.ac.uk)
Jacek Gondzio (J.Gondzio***at***ed.ac.uk)

Abstract: Linear support vector machine training can be represented as a large quadratic program. We present an efficient and numerically stable algorithm for this problem using interior point methods, which requires only O(n) operations per iteration. By exploiting the separability of the Hessian, we provide a unified approach, from an optimization perspective, to 1-norm classification, 2-norm classification, universum classification, ordinal regression and epsilon-insensitive regression. Our approach has the added advantage of obtaining the hyperplane weights and bias directly from the solver. Numerical experiments indicate that, in contrast to existing methods, the algorithm is largely unaffected by noisy data, and show that training times for our implementation are consistent and highly competitive. We discuss the effect of using multiple correctors, and of monitoring the angle of the normal to the hyperplane to determine termination.
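The separability the abstract refers to can be illustrated with a toy check: in the SVM dual, the dense n-by-n Hessian Q = (YX)(YX)^T can be avoided by introducing the k auxiliary variables w = X^T Y z, after which the quadratic term is a simple sum of squares (a diagonal, i.e. separable, Hessian). The sketch below uses hypothetical data and variable names; it verifies the algebraic equivalence, and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5                     # n data points, k features (k << n)
X = rng.standard_normal((n, k))   # data matrix (hypothetical)
y = rng.choice([-1.0, 1.0], n)    # class labels
z = rng.uniform(0, 1, n)          # an arbitrary dual point

# Dense formulation: Hessian Q = (Y X)(Y X)^T is n-by-n and non-separable.
YX = y[:, None] * X
Q = YX @ YX.T
dense_obj = z.sum() - 0.5 * z @ Q @ z

# Reformulation: introduce w = X^T Y z (k extra variables, one linear
# constraint block). The quadratic term becomes 0.5 * w^T w, so the
# Hessian of the enlarged QP is diagonal -- this is what lets an
# interior point method work in O(n) operations per iteration.
w = YX.T @ z
sep_obj = z.sum() - 0.5 * w @ w

assert np.isclose(dense_obj, sep_obj)
```

Both expressions agree for any z, since z^T (YX)(YX)^T z = ||X^T Y z||^2; the reformulated QP trades a dense n-by-n quadratic for a diagonal one plus k linear constraints.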

Keywords: Support Vector Machines, Interior Point Method, Separable QP, Large-scale

Category 1: Applications -- Science and Engineering (Data-Mining)

Category 2: Nonlinear Optimization (Quadratic Programming)

Citation: Technical Report MS-07-002, School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh, The King's Buildings, Edinburgh EH9 3JZ, UK. August 7, 2007; revised April 9, 2009.

Download: [PDF]

Entry Submitted: 08/08/2007
Entry Accepted: 08/08/2007
Entry Last Modified: 04/20/2009
