Optimization Online

"Efficient" Subgradient Methods for General Convex Optimization

James Renegar (renegar***at***cornell.edu)

Abstract: A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly feasible point is known. A feasible sequence of iterates is generated, converging to within user-specified error of optimality. Feasibility is maintained with a line search at each iteration, avoiding the need for orthogonal projections onto the feasible region (an operation that limits the practicality of traditional subgradient methods). Lipschitz continuity is not required, yet the algorithm is shown to possess a convergence rate analogous to the rates for traditional methods, albeit with error measured relatively rather than absolutely. The algorithm is derived using an elementary framework that can be used to design other such algorithms.
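
The following Python sketch is not the paper's algorithm (which obtains its relative-error guarantee through a more careful construction); it only illustrates, under stated assumptions, the general idea of maintaining feasibility by a line search toward a strictly feasible point rather than by orthogonal projection. All names (f, subgrad_f, g, e) and the 1/sqrt(k) step size are illustrative choices, not taken from the paper. It assumes the feasible region is given as {x : g(x) <= 0} for a convex function g, with g(e) < 0 at the anchor point e.

import numpy as np

def feasible_point_on_segment(x, e, g, tol=1e-10):
    # Bisection along the segment from the strictly feasible anchor e
    # to the candidate x. Since g is convex and g(e) < 0, the feasible
    # parameters {t : g(e + t*(x - e)) <= 0} form an interval [0, t*],
    # so bisection locates the last feasible point on the segment.
    if g(x) <= 0:
        return x
    lo, hi = 0.0, 1.0  # t = 0 gives e (feasible), t = 1 gives x (infeasible)
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        if g(e + t * (x - e)) <= 0:
            lo = t
        else:
            hi = t
    return e + lo * (x - e)

def projection_free_subgradient(f, subgrad_f, g, e, x0, steps=500):
    # Standard normalized subgradient steps with a diminishing step
    # size; infeasible iterates are pulled back toward e by the line
    # search above, so no orthogonal projection is ever computed.
    x, best = x0.copy(), x0.copy()
    for k in range(1, steps + 1):
        s = subgrad_f(x)
        x = x - s / (np.sqrt(k) * max(np.linalg.norm(s), 1e-12))
        x = feasible_point_on_segment(x, e, g)
        if f(x) < f(best):
            best = x.copy()
    return best

# Toy instance: minimize the l1 distance to c over the unit ball,
# using the origin as the strictly feasible anchor point.
c = np.array([2.0, 1.0])
x_best = projection_free_subgradient(
    f=lambda x: np.abs(x - c).sum(),
    subgrad_f=lambda x: np.sign(x - c),
    g=lambda x: np.linalg.norm(x) - 1.0,
    e=np.zeros(2),
    x0=np.zeros(2),
)
print(x_best, np.linalg.norm(x_best))  # lands near the boundary, norm ~ 1

Note that the line search here touches only the single constraint function g, whereas an orthogonal projection onto a general convex set can itself require solving an optimization problem; this is the practical gap the abstract alludes to.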

Keywords: subgradient method, convex optimization

Category 1: Convex and Nonsmooth Optimization (Convex Optimization)

Citation: arXiv:1605.08712

Entry Submitted: 06/20/2016
Entry Accepted: 06/20/2016
Entry Last Modified: 06/20/2016
