On the Convergence of Asynchronous Parallel Iteration with Arbitrary Delays

Zhimin Peng (zhiminp@gmail.com), Yangyang Xu (yangyang.xu@ua.edu), Ming Yan (yanm@math.msu.edu), Wotao Yin (wotaoyin@math.ucla.edu)

Abstract: Recent years have witnessed a surge of asynchronous parallel (async-parallel) iterative algorithms, driven by problems involving very large-scale data and large numbers of decision variables. Because of asynchrony, iterates are computed with outdated information; the age of that outdated information, which we call the \emph{delay}, is the number of times the shared variables have been updated since the information was read. Almost all recent works prove convergence under the assumption of a finite maximum delay and set their stepsize parameters accordingly. In practice, however, the maximum delay is unknown. This paper presents a convergence analysis of an async-parallel method from a probabilistic viewpoint, allowing arbitrarily large delays. An explicit stepsize formula that guarantees convergence is given in terms of the delay statistics. With $p+1$ identical processors, we empirically measured that the delays closely follow the Poisson distribution with parameter $p$, matching our theoretical model, so the stepsize can be set accordingly. Simulations on both convex and nonconvex optimization problems demonstrate the validity of our analysis and show that the stepsize induced by an assumed maximum delay is too conservative, often slowing the algorithm's convergence.
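The abstract's claim that delays concentrate around $p$ can be illustrated with a toy discrete-event simulation: $p+1$ workers repeatedly read the shared state, compute for a random (here, exponential) amount of time, and apply an update; the delay of each update is the number of updates applied by other workers in the meantime. This is a hedged sketch, not the paper's experimental setup: the worker count, timing model, and event count below are illustrative assumptions. Under this model the empirical mean delay comes out close to $p$, consistent with the mean of the Poisson($p$) model reported in the abstract.

```python
import heapq
import random

random.seed(0)

p = 7                 # p other workers; p + 1 identical workers in total
num_events = 20000    # number of asynchronous updates to simulate

# Event queue of (finish_time, worker_id); each computation takes Exp(1) time.
heap = [(random.expovariate(1.0), w) for w in range(p + 1)]
heapq.heapify(heap)

global_count = 0          # total updates applied to the shared state so far
read_at = [0] * (p + 1)   # counter value each worker saw when it last read
delays = []

for _ in range(num_events):
    t, w = heapq.heappop(heap)
    # Delay = how many updates (all by other workers) landed since w's read.
    delays.append(global_count - read_at[w])
    global_count += 1
    read_at[w] = global_count  # worker w re-reads the fresh state
    heapq.heappush(heap, (t + random.expovariate(1.0), w))

mean_delay = sum(delays) / len(delays)
print(f"mean delay = {mean_delay:.2f}, p = {p}")
```

With exponential computation times, each update overlaps the work of the $p$ other workers for an expected unit of time, so the expected delay is $p$; the printed mean should land near 7 for this seed and event count.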
Keywords: asynchronous parallel, arbitrary delay, nonconvex, convex, convergence analysis

Category 1: Nonlinear Optimization
Category 2: Optimization Software and Modeling Systems (Parallel Algorithms)

Entry Submitted: 12/13/2016
Entry Accepted: 12/13/2016
Entry Last Modified: 12/13/2016