Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization

We establish or refute the optimality of inexact second-order methods for unconstrained nonconvex optimization from the point of view of worst-case evaluation complexity, improving and generalizing the results of Cartis, Gould and Toint (2010, 2011). To this aim, we consider a new general class of inexact second-order algorithms for unconstrained optimization that includes regularization and trust-region variations of Newton's method as well as their linesearch variants. For each method in this class and arbitrary accuracy threshold $\epsilon \in (0,1)$, we exhibit a smooth objective function with bounded range, whose gradient is globally Lipschitz continuous and whose Hessian is $\alpha$-Hölder continuous (for given $\alpha \in [0,1]$), for which the method in question takes at least $\epsilon^{-(2+\alpha)/(1+\alpha)}$ function evaluations to generate a first iterate whose gradient is smaller than $\epsilon$ in norm. Moreover, we construct another function on which Newton's method takes $\epsilon^{-2}$ evaluations, even though its Hessian is Lipschitz continuous on the path of iterates. These examples provide lower bounds on the worst-case evaluation complexity of methods in our class when applied to smooth problems satisfying the relevant assumptions. Furthermore, for $\alpha = 1$, this lower bound is of the same order in $\epsilon$ as the upper bound on the worst-case evaluation complexity of the cubic regularization method and of other methods in the classes proposed in Curtis, Robinson and Samadi (2017) and in Royer and Wright (2017), thus implying that these methods have optimal worst-case evaluation complexity within a wider class of second-order methods, and that Newton's method is suboptimal.
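As a quick check of the order claims above (a worked specialization, not part of the report itself), setting $\alpha = 1$ in the lower bound recovers the familiar $\epsilon^{-3/2}$ rate matching the upper bound for cubic regularization, which is strictly better than the $\epsilon^{-2}$ rate exhibited for Newton's method:

\[
  \left.\epsilon^{-\frac{2+\alpha}{1+\alpha}}\right|_{\alpha=1}
  \;=\; \epsilon^{-3/2}
  \;<\; \epsilon^{-2}
  \qquad \text{for all } \epsilon \in (0,1),
\]

and the ratio $\epsilon^{-3/2}/\epsilon^{-2} = \epsilon^{1/2} \to 0$ as $\epsilon \to 0$, so the gap between the optimal methods and Newton's method grows without bound as the accuracy threshold tightens.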

Citation

Technical Report, naXys, University of Namur, Namur (Belgium), 2017
