A First-Order Smoothing Technique for a Class of Large-Scale Linear Programs
Abstract: We study a class of linear programming (LP) problems motivated by large-scale machine learning applications. After reformulating the LP as a convex nonsmooth problem, we apply Nesterov's primal-dual smoothing technique. It turns out that the iteration complexity of the smoothing technique depends on a parameter $\theta$ that arises because we need to bound the originally unbounded primal feasible set. We design a strategy that dynamically updates $\theta$ to speed up the convergence. The application of our algorithm to two machine learning problems demonstrates several advantages of the smoothing technique over existing methods.
Keywords: smoothing technique, large-scale linear programming, nonsmooth optimization, machine learning
Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization)
Category 2: Linear, Cone and Semidefinite Programming (Linear Programming)
Category 3: Applications -- Science and Engineering (Data-Mining)
Citation: Argonne Preprint ANL/MCS-P1971-1011, November 2011.
Entry Submitted: 11/07/2011