  


Gradient methods for convex minimization: better rates under weaker conditions
Hui Zhang (hui.zhang@gmail.com)

Abstract:
The convergence behavior of gradient methods for minimizing convex differentiable functions is one of the core questions in convex optimization. This paper shows that their well-known complexities can be achieved under conditions weaker than the commonly assumed ones. We relax the common gradient Lipschitz-continuity condition and the strong convexity condition to ones that hold only over certain line segments. Specifically, we establish the complexities $O(R/\epsilon)$ and $O(\sqrt{R/\epsilon})$ for the ordinary and accelerated gradient methods, respectively, assuming that $\nabla f$ is Lipschitz continuous with constant $R$ over the line segment joining $x$ and $x - \nabla f(x)/R$ for each $x$ in the domain of $f$. We then improve these to $O((R/\nu)\log(1/\epsilon))$ and $O(\sqrt{R/\nu}\log(1/\epsilon))$ for functions $f$ that also satisfy the secant inequality with constant $\nu$.

Keywords: sublinear convergence, linear convergence, restricted Lipschitz continuity, restricted strong convexity, Nesterov acceleration, restart technique, skipping technique, sparse optimization

Category 1: Convex and Nonsmooth Optimization (Convex Optimization)

Citation: Rice CAAM Tech Report TR1304

Download: [PDF]

Entry Submitted: 03/20/2013
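The two methods compared in the abstract can be illustrated with a minimal sketch. This is not code from the paper: it is the textbook ordinary gradient method and a standard Nesterov-type accelerated variant, both using step size $1/R$ where $R$ plays the role of the (restricted) Lipschitz constant of the gradient; the example quadratic and all function names are illustrative assumptions.

```python
# Sketch only: ordinary vs. Nesterov-accelerated gradient descent with
# step size 1/R, where R stands in for the (restricted) Lipschitz
# constant of grad f assumed in the abstract.

def grad_descent(grad, x0, R, iters):
    """Ordinary gradient method: x <- x - grad(x)/R."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - gi / R for xi, gi in zip(x, g)]
    return x

def nesterov(grad, x0, R, iters):
    """Standard Nesterov acceleration with the usual momentum sequence."""
    x = list(x0)
    y = list(x0)   # extrapolated point
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = [yi - gi / R for yi, gi in zip(y, g)]
        t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        y = [xn + ((t - 1.0) / t_new) * (xn - xo)
             for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# Illustrative convex quadratic f(x) = 0.5*(4*x1^2 + x2^2),
# whose gradient is Lipschitz with constant R = 4.
grad = lambda x: [4.0 * x[0], 1.0 * x[1]]
x_gd = grad_descent(grad, [1.0, 1.0], R=4.0, iters=200)
x_agd = nesterov(grad, [1.0, 1.0], R=4.0, iters=200)
# Both iterates approach the minimizer (0, 0).
```

The abstract's point is that the $O(R/\epsilon)$ and $O(\sqrt{R/\epsilon})$ rates of these two loops survive when the Lipschitz condition on $\nabla f$ is required only along the segment from $x$ to $x - \nabla f(x)/R$, rather than globally.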