An analysis of the superiorization method via the principle of concentration of measure
Abstract: The superiorization methodology is intended to work with the input data of constrained minimization problems, i.e., a target function and a constraints set. However, it is based on a way of thinking antipodal to that which underlies constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to reduce (not necessarily minimize) target function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. A guarantee that the local target function reduction steps properly accumulate to a global target function value reduction is still missing, in spite of an ever-growing body of publications that supply evidence of the success of the superiorization method on various problems. We propose an analysis, based on the principle of concentration of measure, that attempts to alleviate the guarantee question of the superiorization method.
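To make the abstract's description concrete, the following is a minimal sketch of a basic superiorized iteration: a feasibility-seeking step (cyclic projections onto halfspace constraints) interleaved with summable perturbations along a nonascent direction of the target function. The constraint set, the target phi(x) = ||x||^2, and all parameter choices (step factor alpha, iteration count) are illustrative assumptions, not taken from the paper.

```python
# Illustrative superiorization sketch in R^2 (not the paper's algorithm):
# perturb the iterate by a shrinking step along a nonascent direction of
# the target function, then apply a feasibility-seeking sweep.
import math

# Constraint set C = {x : x1 >= 1, x2 >= 1}, written as a_i . x <= b_i.
halfspaces = [((-1.0, 0.0), -1.0),   # -x1 <= -1  <=>  x1 >= 1
              ((0.0, -1.0), -1.0)]   # -x2 <= -1  <=>  x2 >= 1

def project(x, a, b):
    """Orthogonal projection onto the halfspace a . x <= b."""
    viol = a[0] * x[0] + a[1] * x[1] - b
    if viol <= 0.0:
        return x
    nrm2 = a[0] ** 2 + a[1] ** 2
    return (x[0] - viol * a[0] / nrm2, x[1] - viol * a[1] / nrm2)

def feasibility_step(x):
    """One sweep of cyclic projections: the basic feasibility-seeking step."""
    for a, b in halfspaces:
        x = project(x, a, b)
    return x

def phi(x):
    """Target function whose values we try to reduce (assumed: ||x||^2)."""
    return x[0] ** 2 + x[1] ** 2

def nonascent(x):
    """Normalized negative gradient of phi: a nonascent direction."""
    g = (2 * x[0], 2 * x[1])
    n = math.hypot(g[0], g[1])
    return (0.0, 0.0) if n == 0.0 else (-g[0] / n, -g[1] / n)

def superiorize(x, iters=200, alpha=0.995):
    """Feasibility-seeking sweeps with summable target-reducing perturbations."""
    beta = 1.0
    for _ in range(iters):
        v = nonascent(x)
        x = (x[0] + beta * v[0], x[1] + beta * v[1])  # local reduction step
        x = feasibility_step(x)                       # feasibility-seeking step
        beta *= alpha                                 # beta_k summable
    return x

def plain(x, iters=200):
    """The same feasibility-seeking algorithm without perturbations."""
    for _ in range(iters):
        x = feasibility_step(x)
    return x

x0 = (5.0, 0.0)            # infeasible starting point
xs = superiorize(x0)       # superiorized run
xp = plain(x0)             # unperturbed run
```

Both runs end at points of the constraint set, but the superiorized run typically reaches a feasible point with a markedly smaller target value; the open question the paper addresses is precisely whether such local reductions are guaranteed to accumulate globally.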
Keywords: Superiorization, perturbation resilience, feasibility-seeking algorithm, target function reduction, concentration of measure, superiorization matrix, linear superiorization, Hilbert-Schmidt norm, random matrix.
Category 1: Nonlinear Optimization (Constrained Nonlinear Optimization)
Category 2: Nonlinear Optimization
Category 3: Convex and Nonsmooth Optimization
Citation: Preprint, November 22, 2018. Revised: June 15, 2019.
Entry Submitted: 09/28/2019