A Second-Order Information-Based Gradient and Function Sampling Method for Nonconvex, Nonsmooth Optimization

This paper proposes a gradient and function sampling method that, under suitable circumstances, converges superlinearly to a minimizer of a general class of nonsmooth, nonconvex functions. We present global and local convergence theory, together with illustrative examples that corroborate and elucidate the theoretical results obtained throughout the manuscript.
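For orientation, the sketch below shows one iteration of a plain first-order gradient sampling step in the spirit of Burke, Lewis, and Overton, the framework on which gradient and function sampling methods build. It is not the authors' second-order algorithm: the box sampling around the current point, the Frank-Wolfe solver for the minimum-norm subproblem, and all tolerances are illustrative assumptions.

```python
import numpy as np

def min_norm_in_hull(G, iters=200):
    """Approximate minimum-norm vector in the convex hull of the rows of G,
    computed by Frank-Wolfe on the simplex of convex weights (a sketch)."""
    m = G.shape[0]
    Q = G @ G.T                        # Gram matrix of the sampled gradients
    lam = np.full(m, 1.0 / m)          # start from the uniform convex combination
    for _ in range(iters):
        grad_q = Q @ lam               # gradient of 0.5 * ||G^T lam||^2
        i = int(np.argmin(grad_q))     # best vertex of the simplex
        d = -lam                       # Frank-Wolfe direction e_i - lam
        d[i] += 1.0
        denom = d @ Q @ d
        if denom <= 1e-16:
            break
        gamma = np.clip(-(grad_q @ d) / denom, 0.0, 1.0)  # exact line search
        lam = lam + gamma * d
    return G.T @ lam                   # approximate minimum-norm element

def gradient_sampling_step(f, grad, x, eps=1e-1, m=None, rng=None):
    """One iteration of a basic first-order gradient sampling scheme (illustrative)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = x.size
    m = 2 * n if m is None else m
    # Gradients at x and at m points sampled near x (box of radius eps, for simplicity)
    pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, n))
    G = np.vstack([grad(x)] + [grad(p) for p in pts])
    g = min_norm_in_hull(G)            # approximate eps-steepest-descent direction is -g
    if np.linalg.norm(g) < 1e-8:       # approximate eps-stationarity test
        return x
    d = -g
    t = 1.0                            # Armijo-type backtracking line search on f
    while f(x + t * d) > f(x) - 1e-4 * t * (g @ g) and t > 1e-12:
        t *= 0.5
    return x + t * d

# Example usage on the nonsmooth function f(x) = ||x||_1 (sign is a subgradient a.e.)
x = np.array([1.0, -2.0])
x = gradient_sampling_step(lambda z: np.abs(z).sum(), np.sign, x)
```

The second-order method studied in the paper replaces the plain steepest-descent step above with a direction built from curvature information, which is what enables the superlinear local behavior claimed in the abstract.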
