Optimization Online
An Inexact Successive Quadratic Approximation Method for Convex L-1 Regularized Optimization

Richard Byrd(richard***at***cs.colorado.edu)
Jorge Nocedal(nocedal***at***eecs.northwestern.edu)
Figen Oztoprak(figen***at***sabanciuniv.edu)

Abstract: We study a Newton-like method for the minimization of an objective function $\phi$ that is the sum of a smooth convex function and an $\ell_1$ regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model $q_k$ of the objective function $\phi$. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization, and report numerical results on two test sets originating in machine learning.
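
The abstract's main ingredients — the piecewise quadratic model $q_k$, the soft-thresholding operator, and a residual-based inexactness test — can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical rendering of such an iteration: the unit-step ISTA residual as the optimality measure, the factor eta in the inner stopping rule, and the full-step update are illustrative assumptions, not the paper's exact conditions or globalization strategy.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (component-wise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def optimality_residual(x, grad, mu):
    # Unit-step ISTA residual F(x) = x - S_mu(x - grad); F(x) = 0 iff x
    # satisfies the optimality conditions of min f(x) + mu*||x||_1.
    return x - soft_threshold(x - grad, mu)

def inexact_prox_newton(grad_f, hess_f, mu, x0, eta=0.5, tol=1e-6,
                        max_outer=50, max_inner=200):
    # Outer loop: form the piecewise quadratic model q_k and minimize it
    # inexactly with ISTA; the inner loop stops once the model's own
    # residual drops below eta times the outer residual (a hypothetical
    # stand-in for the paper's inexactness conditions).
    x = x0.copy()
    for _ in range(max_outer):
        g = grad_f(x)
        r_outer = np.linalg.norm(optimality_residual(x, g, mu))
        if r_outer <= tol:
            break
        H = hess_f(x)
        L = np.linalg.norm(H, 2)        # inner ISTA step size 1/L
        z = x.copy()
        for _ in range(max_inner):
            gq = g + H @ (z - x)        # gradient of the smooth part of q_k
            if np.linalg.norm(optimality_residual(z, gq, mu)) <= eta * r_outer:
                break
            z = soft_threshold(z - gq / L, mu / L)
        x = z                           # full step; globalization omitted
    return x

Here soft_threshold is the proximal map of the $\ell_1$ term and the inner ISTA loop plays the role of the inner minimization discussed in the abstract; the paper's actual conditions additionally guarantee global convergence and control the local rate.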

Keywords: convex optimization, inexact proximal Newton method

Category 1: Convex and Nonsmooth Optimization

Category 2: Nonlinear Optimization

Citation: Report 05, Optimization Center, Northwestern University

Download: [PDF]

Entry Submitted: 09/09/2013
Entry Accepted: 09/09/2013
Entry Last Modified: 09/09/2013
