Optimization Online


An analysis of the superiorization method via the principle of concentration of measure

Yair Censor(yair***at***math.haifa.ac.il)
Eliahu Levy(eliahu***at***math.technion.ac.il)

Abstract: The superiorization methodology is intended to work with the input data of constrained minimization problems, i.e., a target function and a constraints set. However, it is based on a way of thinking antipodal to that underlying constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to reduce (not necessarily minimize) target function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. A guarantee that the local target function reduction steps properly accumulate to a global reduction of the target function value is still missing, in spite of an ever-growing body of publications that supply evidence of the success of the superiorization method on various problems. We propose an analysis based on the principle of concentration of measure that attempts to address this guarantee question.
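The general scheme described in the abstract — a feasibility-seeking algorithm interleaved with bounded, summable perturbations in a nonascending direction of the target function — can be illustrated by a minimal sketch. This is not the paper's algorithm; it is a generic superiorized version of sequential projections onto halfspaces, with the target function f(x) = ||x||^2 and step sizes alpha^k chosen purely for illustration.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {y : a^T y <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def superiorized_feasibility(x, halfspaces, f_grad, steps=100, alpha=0.5):
    """Feasibility-seeking sweeps perturbed toward lower target values.

    Illustrative only: perturbations move against the gradient of the
    target function (a nonascending direction) with summable step
    sizes alpha**k, so the feasibility-seeking behavior is retained.
    """
    for k in range(steps):
        # Perturbation step: bounded move in a nonascending direction of f.
        g = f_grad(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x - (alpha ** k) * g / norm
        # Feasibility-seeking step: one sweep of sequential projections.
        for (a, b) in halfspaces:
            x = project_halfspace(x, a, b)
    return x

# Two halfspaces with nonempty intersection (both contain the origin).
halfspaces = [(np.array([1.0, 1.0]), 4.0),
              (np.array([-1.0, 2.0]), 2.0)]
x0 = np.array([5.0, 5.0])
x = superiorized_feasibility(x0, halfspaces, f_grad=lambda x: 2.0 * x)
```

Because the perturbation step sizes are summable, the perturbed iterates still converge to a feasible point (this is the perturbation-resilience property the abstract alludes to), while the target function value at the output is lower than at the start.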

Keywords: Superiorization, perturbation resilience, feasibility-seeking algorithm, target function reduction, concentration of measure, superiorization matrix, linear superiorization, Hilbert-Schmidt norm, random matrix.

Category 1: Nonlinear Optimization (Constrained Nonlinear Optimization)

Category 2: Nonlinear Optimization

Category 3: Convex and Nonsmooth Optimization

Citation: Preprint, November 22, 2018. Revised: June 15, 2019.

Download: [PDF]

Entry Submitted: 09/28/2019
Entry Accepted: 09/28/2019
Entry Last Modified: 09/28/2019
