  


Optimal Newton-type methods for nonconvex smooth optimization problems

Coralia Cartis (coralia.cartis@ed.ac.uk)

Abstract: We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is $\alpha$-Hölder continuous (for given $\alpha\in [0,1]$) on the path of the iterates, for which the method in question takes at least $\epsilon^{-(2+\alpha)/(1+\alpha)}$ function evaluations to generate a first iterate whose gradient is smaller than $\epsilon$ in norm. This provides a lower bound on the evaluation complexity of second-order methods in our class when applied to smooth problems satisfying our assumptions. Furthermore, for $\alpha=1$, this lower bound is of the same order in $\epsilon$ as the upper bound on the evaluation complexity of cubic regularization, thus implying that cubic regularization has optimal worst-case evaluation complexity within our class of second-order methods.

Keywords: nonlinear optimization, global complexity bounds, optimal algorithms

Category 1: Nonlinear Optimization

Citation: ERGO Technical Report 11-009, School of Mathematics, University of Edinburgh, 2011.

Download: [PDF]

Entry Submitted: 06/22/2011
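As a quick check of the optimality claim, evaluating the exponent of the lower bound at $\alpha=1$ (the Lipschitz-continuous-Hessian case) recovers the familiar rate of cubic regularization:

```latex
\left.\frac{2+\alpha}{1+\alpha}\right|_{\alpha=1} = \frac{3}{2},
\qquad\text{so the lower bound reads } \;\mathcal{O}\!\left(\epsilon^{-3/2}\right),
```

which matches the known $\mathcal{O}(\epsilon^{-3/2})$ upper bound on the evaluation complexity of cubic regularization, so the two bounds coincide in order and the method is worst-case optimal within the class considered.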