Optimization Online


A general double-proximal gradient algorithm for d.c. programming

Sebastian Banert(sebastian.banert***at***univie.ac.at)
Radu Ioan Bot(radu.bot***at***univie.ac.at)

Abstract: The possibilities for exploiting the special structure of d.c. programs, which consist in minimizing a difference of convex functions, have so far been more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that the convex part, the concave part, or both are evaluated via one of their subgradients. In this paper we propose an algorithm in which both the convex and the concave part are evaluated via their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part may be the composition of a concave function with a linear operator, which are, however, evaluated separately. For this algorithm we show that every cluster point is a solution of the optimization problem. Furthermore, we show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka--\L ojasiewicz property. In the last part, we apply the algorithm to an image processing model.
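The iteration idea described in the abstract — a proximal step on the convex nonsmooth part, a gradient step on the smooth part, and a proximal (rather than subgradient) evaluation of the concave part — can be sketched on a one-dimensional toy problem. Everything below (the toy objective, the step sizes gamma and nu, and the exact update order) is an illustrative assumption for intuition only, not the authors' algorithm from the paper.

```python
# Toy d.c. problem:  min_x  lam*|x| + 0.5*(x - a)**2 - mu*|x|,  with lam > mu.
# The convex nonsmooth part lam*|x| is handled by its prox (soft thresholding),
# the smooth part 0.5*(x - a)**2 by its gradient, and the concave part -mu*|x|
# through a proximal step on the conjugate of mu*|.| (a projection onto
# [-mu, mu]).  For this toy objective the minimizer is soft(a, lam - mu).

def soft(z, t):
    """Proximal map of t*|.| (soft thresholding)."""
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def proj(z, m):
    """Proximal map of the conjugate of m*|.|: projection onto [-m, m]."""
    return max(-m, min(m, z))

a, lam, mu = 2.0, 1.0, 0.5   # problem data (lam > mu)
gamma, nu = 0.5, 0.5         # step sizes, assumed small enough for this toy

x, y = 0.0, 0.0              # primal iterate x, dual iterate y for the concave part
for _ in range(200):
    # dual step: proximal evaluation attached to the concave part
    y = proj(y + nu * x, mu)
    # primal step: forward (gradient) step on the smooth part,
    # backward (prox) step on lam*|x|
    x = soft(x - gamma * ((x - a) - y), gamma * lam)

print(x)   # converges to a - (lam - mu) = 1.5
```

With these data the iterates settle at x = a - (lam - mu) = 1.5, matching the closed-form minimizer; replacing the projection step by a plain subgradient choice would recover a DCA-style update instead.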

Keywords: d.c. programming, Toland dual, proximal-gradient algorithm, Kurdyka--\L ojasiewicz property, convergence analysis

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization )



Entry Submitted: 10/20/2016
Entry Accepted: 10/20/2016
Entry Last Modified: 10/20/2016

