An Adaptive Gradient Sampling Algorithm for Nonsmooth Optimization
Frank E. Curtis (frank.e.curtis@gmail.com)

Abstract: We present an algorithm for the minimization of f : R^n → R, assumed to be locally Lipschitz and continuously differentiable in an open dense subset D of R^n. The objective f may be nonsmooth and/or nonconvex. The method is based on the gradient sampling (GS) algorithm of Burke et al. [A robust gradient sampling algorithm for nonsmooth, nonconvex optimization, SIAM J. Optim. 15 (2005), pp. 751–779]. It differs, however, from previously proposed versions of GS in that it is variable-metric and requires only O(1) (not O(n)) gradient evaluations per iteration. Numerical experiments illustrate that the algorithm is more efficient than GS in that it consistently makes more progress towards a solution within a given number of gradient evaluations. In addition, the adaptive sampling procedure allows for warm-starting of the quadratic subproblem solver, so the average number of subproblem iterations per nonlinear iteration is also consistently reduced. Global convergence of the algorithm is proved under the assumption that the Hessian approximations are positive definite and bounded, an assumption shown to hold for the proposed Hessian approximation updating strategies.

Keywords: unconstrained optimization, nonsmooth optimization, nonconvex optimization, gradient sampling, line search methods, quadratic optimization, warm-starting

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Citation: F. E. Curtis and X. Que, "An Adaptive Gradient Sampling Algorithm for Nonsmooth Optimization," Optimization Methods and Software, 28(6): 1302–1324, 2013.

Entry Submitted: 10/07/2011
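To make the GS idea referenced in the abstract concrete, below is a minimal sketch of one plain (non-adaptive) gradient sampling step: sample gradients near the current point, take the minimum-norm element of their convex hull as a surrogate subgradient, and backtrack along its negation. This is an illustration of the basic GS mechanism only, not the paper's adaptive, variable-metric method. The function name gs_step, the use of SciPy's SLSQP solver for the small quadratic subproblem, sampling from a box rather than a ball, and the Armijo constants are all choices made for this sketch.

import numpy as np
from scipy.optimize import minimize

def gs_step(f, grad, x, eps=0.1, m=None, rng=None):
    """One plain GS step around x with sampling radius eps."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    m = 2 * n if m is None else m  # classic GS uses O(n) samples per step
    # Gradients at x and at m points sampled from a box of radius eps
    # (a simple stand-in for the eps-ball used in GS).
    pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, n))
    G = np.vstack([grad(x)] + [grad(p) for p in pts])  # (m+1) x n
    # QP subproblem: min ||G^T lam||^2  s.t.  lam >= 0, sum(lam) = 1.
    Q = G @ G.T
    lam0 = np.full(len(G), 1.0 / len(G))
    res = minimize(lambda lam: lam @ Q @ lam, lam0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * len(G),
                   constraints=[{"type": "eq",
                                 "fun": lambda lam: lam.sum() - 1.0}])
    g = G.T @ res.x            # min-norm element of the gradients' hull
    d = -g                     # search direction
    # Armijo backtracking line search.
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx - 1e-4 * t * (g @ g) and t > 1e-12:
        t *= 0.5
    return x + t * d

# Example on the nonsmooth convex function f(x) = |x1| + 2|x2|.
f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
grad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])
x = np.array([1.0, -1.5])
for _ in range(30):
    x = gs_step(f, grad, x)

The paper's contribution, per the abstract, is to replace the O(n) fresh gradients per iteration above with an adaptively grown sample set (O(1) new gradients per iteration), which also lets successive QP subproblems be warm-started, and to incorporate a variable metric via bounded, positive-definite Hessian approximations.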