Optimization Online

Parallel and Distributed Successive Convex Approximation Methods for Big-Data Optimization

Gesualdo Scutari (gscutari***at***purdue.edu)
Ying Sun (sun578***at***purdue.edu)

Abstract: Recent years have witnessed a surge of interest in parallel and distributed optimization methods for large-scale systems. In particular, nonconvex large-scale optimization problems have found a wide range of applications in several engineering fields. The design and analysis of such complex, large-scale systems pose several challenges and call for the development of new optimization models and algorithms. First, many of the aforementioned applications lead to huge-scale optimization problems, often referred to as big-data problems. This calls for the development of solution methods that operate in parallel, exploiting hierarchical computational architectures. Second, many networked systems are spatially (or virtually) distributed. Due to the size of such networks and, often, to proprietary regulations, these systems do not possess a single central coordinator or access point that is able to solve the entire optimization problem alone. In this setting, the goal is to develop distributed solution methods that operate seamlessly in-network. Third, many formulations of interest are nonconvex, with nonconvex objective functions and/or constraints. The desideratum is to design (parallel/distributed) solution methods that are easy to implement (in the sense that the computations performed by the workers are inexpensive), with provable convergence to stationary solutions of the nonconvex problem under consideration. The major contribution of this paper is to put forth a general, unified algorithmic framework, based on Successive Convex Approximation (SCA) techniques, for the parallel and distributed solution of a general class of nonconvex constrained (nonseparable, networked) problems. The presented framework unifies and generalizes several existing SCA methods, making them appealing for a parallel/distributed implementation, while offering a flexible choice of the function approximants and step-size schedules, as well as control of the computation/communication trade-off. This paper is organized according to the lectures that one of the authors delivered at the CIME Summer School on "Centralized and Distributed Multi-agent Optimization Models and Algorithms," held in Cetraro, Italy, June 23-27, 2014. These lectures are: I) Successive Convex Approximation Methods: Basics; II) Parallel Successive Convex Approximation Methods; and III) Distributed Successive Convex Approximation Methods.
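
To illustrate the generic SCA idea referenced in the abstract, the following is a minimal, self-contained sketch, not the paper's full parallel/distributed framework. It assumes a simple linearization-plus-proximal surrogate, a diminishing step-size rule gamma_k = gamma_0/(k+1), and a toy smooth nonconvex objective; all three are illustrative choices rather than the specific approximants and schedules analyzed in the lectures.

# Minimal sketch of a generic Successive Convex Approximation (SCA) loop:
# at each iteration, minimize a strongly convex surrogate of the nonconvex
# objective and move toward its minimizer with a diminishing step size.
import numpy as np

def f(x):
    # Toy nonconvex objective (illustrative choice): a quartic with
    # several stationary points.
    return np.sum(0.25 * x**4 - 1.5 * x**2 + 0.5 * x)

def grad_f(x):
    return x**3 - 3.0 * x + 0.5

def sca_minimize(x0, tau=2.0, gamma0=1.0, iters=200):
    # Surrogate at the current point x:
    #   f_tilde(y; x) = f(x) + grad_f(x)^T (y - x) + (tau/2) ||y - x||^2,
    # whose unique minimizer is y_hat = x - grad_f(x) / tau.
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        y_hat = x - grad_f(x) / tau      # minimizer of the convex surrogate
        gamma = gamma0 / (k + 1)         # diminishing step-size rule
        x = x + gamma * (y_hat - x)      # convex combination update
    return x

if __name__ == "__main__":
    x_est = sca_minimize(x0=np.array([2.0, -0.3]))
    print("stationary point estimate:", x_est, "objective:", f(x_est))

In the parallel and distributed settings studied in the paper, the same best-response-plus-step-size template is applied block-wise by multiple workers or in-network by networked agents; the sketch above only shows the single-agent skeleton.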

Keywords: nonconvex optimization, successive convex approximation, parallel computing, distributed algorithms, big data

Category 1: Nonlinear Optimization

Category 2: Applications -- Science and Engineering

Category 3: Convex and Nonsmooth Optimization

Citation: Lecture Notes in Mathematics, C.I.M.E. series, Springer-Verlag, to appear, 2018

Download: [PDF]

Entry Submitted: 05/17/2018
Entry Accepted: 05/17/2018
Entry Last Modified: 05/17/2018
