Incremental Accelerated Gradient Methods for SVM Classification: Study of the Constrained Approach

We investigate constrained first-order techniques for training Support Vector Machines (SVMs) for online classification tasks. The methods exploit the structure of the SVM training problem and combine ideas from incremental gradient methods, gradient acceleration, and successive inexpensive updates of the Lagrange multipliers. Both primal and dual formulations are studied and compared. Experiments show that the constrained incremental algorithms working in the dual space achieve the best trade-off between prediction accuracy and training time. We also compare against an unconstrained large-scale learning algorithm (the Pegasos stochastic gradient method) to show that, owing to the very special structure of the training problem, the constrained approach remains competitive for large-scale learning.
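For context, the unconstrained baseline mentioned above, Pegasos, applies a stochastic sub-gradient step to the regularized hinge-loss objective lam/2 * ||w||^2 + mean_i max(0, 1 - y_i <w, x_i>). The following is a minimal sketch of that baseline (not the constrained dual algorithm studied in the paper); the function name and parameters are illustrative.

import numpy as np

def pegasos(X, y, lam=0.01, n_iters=1000, seed=0):
    """Pegasos stochastic sub-gradient solver for a linear SVM.

    X: (n, d) array of examples; y: (n,) array of labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)            # draw one random training example
        eta = 1.0 / (lam * t)          # step-size schedule eta_t = 1/(lam*t)
        if y[i] * X[i].dot(w) < 1:     # margin violated: hinge term is active
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                          # only the regularizer contributes
            w = (1 - eta * lam) * w
        # optional projection onto the ball of radius 1/sqrt(lam),
        # as in the original Pegasos analysis
        norm = np.linalg.norm(w)
        if norm > 1.0 / np.sqrt(lam):
            w *= (1.0 / np.sqrt(lam)) / norm
    return w

Each iteration touches a single example, which is what makes the method attractive at large scale; the constrained methods studied in the paper instead maintain feasibility with respect to the SVM constraints at every step.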

Citation

Accepted in Computational Management Science, 2013. Available online, DOI: 10.1007/s10287-013-0186-2.