Weak subgradient algorithm for solving nonsmooth nonconvex unconstrained optimization problems

This paper presents a weak subgradient based method for solving nonconvex unconstrained optimization problems. At every iteration, the method uses a weak subgradient of the objective function at the current point to generate the next one. The concept of a weak subgradient is based on the idea of supporting cones to the graph of the function under consideration, which replace, in some sense, the supporting hyperplanes underlying the subgradient notion of convex analysis. For this reason, the method developed in this paper requires no convexity assumption, either on the objective function or on the set of feasible solutions. The new method is similar to the subgradient methods of convex analysis and can be considered a generalization of them. The paper investigates different stepsize parameters and provides convergence theorems for all cases. A significant difficulty of subgradient methods is the estimation of subgradients at every iteration; a method for estimating weak subgradients is therefore also presented. The new method is tested on well-known test problems from the literature, and computational results are reported and interpreted.
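As a rough illustration of the kind of iteration the abstract describes, the sketch below implements a generic subgradient-style update x_{k+1} = x_k - s_k g_k with a diminishing stepsize, using a simple forward-difference estimate as a stand-in for a weak subgradient. The stepsize rule 1/(k+1), the helper `estimate_subgradient`, and the test function are all illustrative assumptions, not the algorithm, stepsize choices, or estimation procedure of the paper itself.

```python
import numpy as np

def estimate_subgradient(f, x, h=1e-6):
    """Rough forward-difference estimate of a (weak) subgradient of f at x.
    A generic stand-in only; the paper presents its own estimation method,
    which is not reproduced here."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def subgradient_style_method(f, x0, iters=500):
    """Illustrative iteration x_{k+1} = x_k - s_k * g_k with the
    diminishing, non-summable stepsize s_k = 1/(k+1)."""
    x = x0.astype(float)
    best_x, best_f = x.copy(), f(x)
    for k in range(iters):
        g = estimate_subgradient(f, x)
        x = x - g / (k + 1)
        fx = f(x)
        if fx < best_f:                 # track the best iterate, since
            best_x, best_f = x.copy(), fx  # subgradient methods are not monotone
    return best_x, best_f

# Example: a nonsmooth, nonconvex test function with minimum 0 at the origin
# (since ||x||_1 >= ||x||_2, f(x) >= 0.5 * ||x||_2 >= 0).
f = lambda x: np.abs(x).sum() - 0.5 * np.linalg.norm(x)
x_best, f_best = subgradient_style_method(f, np.array([2.0, -3.0]))
print(x_best, f_best)
```

Tracking the best iterate is standard practice for subgradient-type methods, since the objective value is not guaranteed to decrease at every step.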

Citation

Unpublished: Eskisehir Technical University, Department of Industrial Engineering, February 2019
