Optimization Online


Gradient-type penalty method with inertial effects for solving constrained convex optimization problems with smooth data

Radu Ioan Bot(radu.bot***at***univie.ac.at)
Ernö Robert Csetnek(ernoe.robert.csetnek***at***univie.ac.at)
Nimit Nimana(nimitn***at***hotmail.com)

Abstract: We consider the problem of minimizing a smooth convex objective function over the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm that combines the gradient method with a penalization technique. Moreover, we insert into our algorithm an inertial term, which is able to take advantage of the history of the iterates. We show weak convergence of the generated sequence of iterates to an optimal solution of the optimization problem, provided a condition expressed via the Fenchel conjugate of the constraint function is fulfilled. We also prove convergence of the objective function values to the optimal objective value. The convergence analysis carried out in this paper relies on the celebrated Opial Lemma and generalized Fejér monotonicity techniques. We illustrate the functionality of the method via a numerical experiment addressing image classification via support vector machines.
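To give a feel for the type of scheme the abstract describes, below is a minimal sketch of an inertial gradient-penalty iteration: each step extrapolates along the previous two iterates (the inertial term) and then takes a gradient step on the objective plus an increasingly weighted gradient of the penalty function. The step sizes, penalty parameters, and inertial coefficient used here are illustrative choices, not the ones analyzed in the paper, and the scheme itself is a generic sketch rather than the authors' exact algorithm.

```python
import numpy as np

def inertial_penalty_gradient(grad_f, grad_g, x0, alpha=0.3, n_iter=2000):
    """Generic inertial gradient-penalty sketch (illustrative parameters):
        y_n     = x_n + alpha * (x_n - x_{n-1})                 # inertial step
        x_{n+1} = y_n - lam_n * (grad_f(y_n) + beta_n * grad_g(y_n))
    with a vanishing step size lam_n and a growing penalty weight beta_n."""
    x_prev = x0.copy()
    x = x0.copy()
    for n in range(1, n_iter + 1):
        lam = 0.5 / n**0.6    # vanishing step size (illustrative choice)
        beta = n**0.4         # growing penalty parameter (illustrative choice)
        y = x + alpha * (x - x_prev)
        x_prev, x = x, y - lam * (grad_f(y) + beta * grad_g(y))
    return x

# Toy instance: minimize f(x) = 0.5*||x - a||^2 over argmin g,
# where g(x) = 0.5*x[1]^2, so the constraint set is {x : x[1] = 0}
# and the solution is (a[0], 0).
a = np.array([2.0, 3.0])
grad_f = lambda x: x - a
grad_g = lambda x: np.array([0.0, x[1]])
x_star = inertial_penalty_gradient(grad_f, grad_g, np.zeros(2))
```

On this toy instance the first coordinate approaches a[0] = 2, while the growing penalty weight slowly drives the second coordinate toward the constraint set {x[1] = 0}; the slow decay of the infeasibility reflects the interplay between the vanishing step sizes and the growing penalty parameters that the paper's conditions govern.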

Keywords: gradient method, penalization, Fenchel conjugate, inertial algorithm

Category 1: Convex and Nonsmooth Optimization

Category 2: Complementarity and Variational Inequalities


Download: [PDF]

Entry Submitted: 09/02/2016
Entry Accepted: 09/02/2016
Entry Last Modified: 09/02/2016
