Optimization Online





 

Coordinate descent algorithms

Stephen Wright (swright***at***cs.wisc.edu)

Abstract: Coordinate descent algorithms solve optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This paper describes the fundamentals of the coordinate descent approach, together with variants and extensions and their convergence properties, mostly with reference to convex objectives. We pay particular attention to a certain problem structure that arises frequently in machine learning applications, showing that efficient implementations of accelerated coordinate descent algorithms are possible for problems of this type. We also present some parallel variants and discuss their convergence properties under several models of parallel execution.
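The basic approach described in the abstract — successive exact minimization along single coordinate directions — can be illustrated with a minimal sketch (not taken from the paper) for a strongly convex quadratic f(x) = ½ xᵀAx − bᵀx with A symmetric positive definite, where each coordinate step has a closed form:

```python
import random

def coordinate_descent(A, b, iters=2000, seed=0):
    """Randomized coordinate descent on f(x) = 0.5 x^T A x - b^T x,
    with A symmetric positive definite. Each iteration exactly
    minimizes f along one randomly chosen coordinate direction."""
    n = len(b)
    x = [0.0] * n
    rng = random.Random(seed)
    for _ in range(iters):
        i = rng.randrange(n)
        # Setting df/dx_i = 0 gives the exact 1-D minimizer:
        # x_i = (b_i - sum_{j != i} A_ij x_j) / A_ii.
        residual = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = residual / A[i][i]
    return x

# Example: the minimizer satisfies A x = b, so for this small SPD
# system the iterates approach x = (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = coordinate_descent(A, b)
```

This toy version ignores the paper's main concerns — structured problems from machine learning, acceleration, and parallel execution — but shows the core update that those variants build on.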

Keywords: coordinate descent, randomized algorithms, parallel numerical computing

Category 1: Convex and Nonsmooth Optimization (Convex Optimization )

Category 2: Optimization Software and Modeling Systems (Parallel Algorithms )

Citation: Technical Report, November 2014. Revised February 2015.

Download: [PDF]

Entry Submitted: 12/01/2014
Entry Accepted: 12/01/2014
Entry Last Modified: 02/15/2015
