Optimization Online

Non-smooth Non-convex Bregman Minimization: Unification and New Algorithms

Peter Ochs(ochs***at***cs.uni-freiburg.de)
Jalal Fadili(Jalal.Fadili***at***ensicaen.fr)
Thomas Brox(brox***at***cs.uni-freiburg.de)

Abstract: We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function include, for example, Gradient Descent, Forward–Backward Splitting, and ProxDescent, without the common requirement of a "Lipschitz continuous gradient". In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) that replace the Euclidean distance. The algorithm has a wide range of applications, including many linear and non-linear inverse problems in image processing and machine learning.
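
As a concrete illustration of the scheme sketched in the abstract, the following is a minimal Python sketch of its simplest Euclidean instance: a linear model function whose Bregman (here Euclidean) proximal point is a gradient step, combined with an Armijo-like backtracking line search. This recovers the Gradient Descent instance mentioned above; all names and parameter values (model_line_search, tau, gamma, delta) are illustrative choices, not taken from the paper.

```python
import numpy as np

def model_line_search(f, grad_f, x0, tau=1.0, gamma=1e-4, delta=0.5,
                      max_iter=5000, tol=1e-8):
    """Sketch of the model-function scheme, specialized to a linear model
    and the Euclidean distance (i.e. Gradient Descent with an Armijo-like
    line search). Parameter names and defaults are illustrative."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        # Euclidean proximal point of the convex model
        #   f(x) + <g, y - x> + (1/(2*tau)) * ||y - x||^2,
        # which has the closed form y = x - tau * g.
        y = x - tau * g
        d = y - x                      # descent direction
        if np.linalg.norm(d) < tol:    # approximate stationarity
            break
        # Decrease predicted by the model (negative by construction).
        pred = g @ d + (d @ d) / (2.0 * tau)
        # Armijo-like backtracking along d.
        eta, fx = 1.0, f(x)
        while f(x + eta * d) > fx + gamma * eta * pred and eta > 1e-12:
            eta *= delta
        x = x + eta * d
    return x

# Usage example: the Rosenbrock function, whose gradient is not globally
# Lipschitz continuous, so the line search (not a global step-size bound)
# controls the step.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad_f = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(model_line_search(f, grad_f, np.array([-1.2, 1.0])))
```

Replacing the quadratic term by a general Bregman distance, or the linear model by another convex model function, yields the other instances discussed in the paper (e.g. Forward–Backward Splitting, ProxDescent); only the model subproblem changes.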

Keywords: Bregman minimization, non-convex non-smooth optimization, model functions, convergence, Forward--Backward Splitting, ProxDescent

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation: arXiv:1707.02278

Download: [PDF]

Entry Submitted: 07/10/2017
Entry Accepted: 07/10/2017
Entry Last Modified: 07/10/2017
