Optimization Online

Parallel stochastic line search methods with feedback for minimizing finite sums

Dragana Bajovic (dragana.bajovic@gmail.com)
Dusan Jakovetic (dusan.jakovetic@gmail.com)
Natasa Krejic (natasak@uns.ac.rs)
Natasa Krklec Jerinkic (natasa.krklec@dmi.uns.ac.rs)

Abstract: We consider unconstrained minimization of a finite sum of $N$ continuously differentiable, not necessarily convex, cost functions. Several gradient-like (and, more generally, line search) methods in which the full gradient (the sum of the $N$ component costs' gradients) at each iteration~$k$ is replaced with an inexpensive approximation based on a sub-sample~$\mathcal N_k$ of the component costs' gradients are available in the literature. However, the vast majority of these methods use pre-determined (either deterministic or random) rules for selecting the subsets~$\mathcal N_k$; these rules are unrelated to the actual progress of the algorithm along iterations. In this paper, we propose a very general framework for nonmonotone line search algorithms with an \emph{adaptive} choice of sub-samples~$\mathcal N_k$. Specifically, we consider master-worker architectures with one master and $N$ workers, where each worker holds one component function~$f_i$. The master maintains the solution estimate~$x_k$ and controls the states of the workers (active or inactive) through a single scalar control parameter~$p_k$. Each active worker sends to the master the value and the gradient of its component cost, while inactive workers stay idle. Parameter~$p_k$ is proportional to the expected (average) number of active workers (which equals the average sample size), and it can increase or decrease along iterations based on a computationally inexpensive estimate of the algorithm's progress. Hence, through parameter~$p_k$, the master sends \emph{feedback} to the workers about the desired sample size at the next iteration. For the proposed algorithmic framework, we show that, almost surely, each accumulation point of the sequence~$\{x_k\}$ is a stationary point of the cost function. Simulations on both synthetic and real-world data sets illustrate the benefits of the proposed framework with respect to existing non-adaptive rules.
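The following is a minimal, single-process sketch of the adaptive-sampling idea sketched in the abstract, not the authors' exact algorithm: the quadratic component costs, the plain Armijo backtracking step (the paper allows a more general nonmonotone line search), and the specific rule for raising or lowering $p_k$ are all illustrative assumptions.

```python
# Illustrative sketch (assumed details, not the paper's algorithm): the "master"
# activates each worker with probability p_k, runs a line search on the sampled
# cost, and adjusts p_k using a cheap progress estimate.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical component costs: f_i(x) = 0.5 * ||A_i x - b_i||^2.
N, d = 50, 10
A = rng.standard_normal((N, 5, d))
b = rng.standard_normal((N, 5))

def f_i(i, x):
    r = A[i] @ x - b[i]
    return 0.5 * (r @ r)

def grad_i(i, x):
    return A[i].T @ (A[i] @ x - b[i])

def sample_workers(p):
    """Each worker is active independently with probability p (at least one active)."""
    active = np.nonzero(rng.random(N) < p)[0]
    return active if active.size > 0 else np.array([rng.integers(N)])

def f_S(x, S):
    return np.mean([f_i(i, x) for i in S])

def grad_S(x, S):
    return np.mean([grad_i(i, x) for i in S], axis=0)

x = np.zeros(d)
p = 0.1                                    # initial expected fraction of active workers
for k in range(200):
    S = sample_workers(p)                  # active workers report values and gradients
    fx, g = f_S(x, S), grad_S(x, S)
    # Armijo backtracking on the sampled cost (the paper uses a nonmonotone rule).
    alpha, c = 1.0, 1e-4
    while alpha > 1e-10:
        x_new = x - alpha * g
        if f_S(x_new, S) <= fx - c * alpha * (g @ g):
            break
        alpha *= 0.5
    # Feedback: raise p_k when progress stalls, lower it when progress is good
    # (thresholds and factors are arbitrary placeholders).
    progress = fx - f_S(x_new, S)
    if progress < 1e-4 * (g @ g):
        p = min(1.0, 2.0 * p)
    else:
        p = max(1.0 / N, 0.9 * p)
    x = x_new

print("final full-sample cost:", np.mean([f_i(i, x) for i in range(N)]))
```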

Keywords: Variable sample methods; Stochastic optimization; Parallel algorithms; Non-convex cost functions; Feedback.

Category 1: Global Optimization

Category 2: Stochastic Programming

Citation:

Download: [PDF]

Entry Submitted: 07/11/2016
Entry Accepted: 07/18/2016
Entry Last Modified: 07/11/2016
