Optimization Online


Subgradient methods for huge-scale optimization problems

Yurii Nesterov (Yurii.Nesterov***at***uclouvain.be)

Abstract: We consider a new class of huge-scale problems: problems with {\em sparse subgradients}. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations, whose total cost depends only {\em logarithmically} on the dimension. This technique is based on a recursive update of the results of matrix/vector products and of the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions. We show that the updating technique can be efficiently coupled with the simplest subgradient methods: the unconstrained minimization method of B. Polyak and the constrained minimization scheme of N. Shor. Similar results can be obtained for a new nonsmooth random variant of a coordinate descent scheme. We also present promising results of preliminary computational experiments.
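The core idea of the recursive update can be illustrated with a minimal sketch (not the paper's actual code; function and variable names are illustrative): when a subgradient step changes only one coordinate of x, the product y = Ax can be refreshed by touching only the nonzeros of the corresponding column of A, so the per-iteration cost depends on the column sparsity rather than on the full dimension.

```python
# Hedged sketch of the recursive-update idea for sparse subgradient steps.
# Assumption: a step modifies a single coordinate x_j by delta; the matrix
# is stored by columns as lists of (row_index, value) pairs for its nonzeros.

def update_product(y, column_j, delta):
    """In-place update of y = A x after the step x_j += delta.

    column_j: nonzeros of column j of A, as (row_index, value) pairs.
    Cost is O(nnz(column_j)), independent of the full dimension.
    """
    for i, a_ij in column_j:
        y[i] += delta * a_ij
    return y

# Toy usage: A is the 3x3 identity, so column 1 has a single nonzero.
y = [1.0, 2.0, 3.0]
column_1 = [(1, 1.0)]
update_product(y, column_1, 0.5)  # step x_1 += 0.5
# y is now [1.0, 2.5, 3.0]
```

With a matrix that has few nonzero diagonals, every column has O(1) nonzeros, which is the regime where the paper reports logarithmic total cost per iteration (the logarithmic factor comes from maintaining max-type function values in a recursive tree structure, not shown here).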

Keywords: Nonsmooth optimization, convex optimization, complexity bounds, subgradient methods, huge-scale problems

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization )

Citation: CORE Discussion Paper 2012/02 (January 2012)

Download: [PDF]

Entry Submitted: 01/30/2012
Entry Accepted: 02/03/2012
Entry Last Modified: 02/06/2012
