Optimization Online


Confidence Region for Distributed Stochastic Optimization Problem in Stochastic Gradient Tracking Method

Shengchao Zhao (zhaoshengchao***at***mail.dlut.edu.cn)
Yongchao Liu (lyc***at***dlut.edu.cn)

Abstract: Stochastic approximation (SA) based algorithms are popular for distributed stochastic optimization problems because they are easy to implement and require little memory. Many works have focused on the consistency of the objective values and iterates returned by SA based algorithms. It is of fundamental interest to quantify the uncertainty associated with SA solutions via confidence regions, at a prescribed level of significance, for the true solution. In this paper, we discuss a framework for constructing asymptotic confidence regions for the optimal solution of a distributed stochastic optimization problem, with a focus on the distributed stochastic gradient tracking method. To attain this goal, we first present a central limit theorem for the Polyak-Ruppert averaged distributed stochastic gradient tracking method. We then estimate the corresponding covariance matrix through online estimators. Finally, we provide a practical procedure for building asymptotic confidence regions for the optimal solution. Numerical tests are also conducted to show the efficiency of the proposed methods.
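To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of distributed stochastic gradient tracking with Polyak-Ruppert averaging on a toy one-dimensional quadratic problem, where each agent holds a local objective 0.5*(x - b_i)^2 and the network optimum is mean(b). The mixing matrix, step-size schedule, noise level, and burn-in choice below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative, not from the paper): n agents jointly minimize
# f(x) = sum_i 0.5*(x - b_i)^2, whose optimum is mean(b) = 2.0.
n = 3
b = np.array([1.0, 2.0, 3.0])
W = np.array([[0.50, 0.25, 0.25],     # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],     # (assumed network topology)
              [0.25, 0.25, 0.50]])

def stoch_grad(x, i):
    """Noisy gradient of agent i's local objective."""
    return (x - b[i]) + 0.05 * rng.standard_normal()

K = 4000
x = np.zeros(n)                                   # one scalar iterate per agent
y = np.array([stoch_grad(x[i], i) for i in range(n)])  # tracker initialized at local gradients
g_old = y.copy()
avg, count = np.zeros(n), 0

for k in range(K):
    alpha = 0.5 / (k + 1) ** 0.6                  # diminishing step size (assumed schedule)
    x_new = W @ (x - alpha * y)                   # consensus + descent step
    g_new = np.array([stoch_grad(x_new[i], i) for i in range(n)])
    y = W @ y + g_new - g_old                     # gradient tracking update
    x, g_old = x_new, g_new
    if k >= K // 2:                               # Polyak-Ruppert average after a burn-in
        avg += x
        count += 1

avg /= count
print(avg)  # each agent's averaged iterate should land near mean(b) = 2.0
```

The averaged iterates, rather than the last iterates, are what the paper's central limit theorem concerns; averaging smooths out the gradient noise and yields the asymptotic normality needed for confidence regions.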

Keywords: confidence regions, distributed stochastic optimization, plug-in method, batch-means method, stochastic gradient tracking method
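Among the keywords, the batch-means method is one way to estimate the variance of an averaged iterate sequence without storing the whole path's covariance structure. A minimal one-dimensional sketch (an illustrative stand-in, not the paper's exact online estimator) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a path of averaged SA iterates; here iid N(0, 1) with true mean 0.
path = rng.standard_normal(1000)

m = 20                               # number of batches (assumed choice)
s = len(path) // m                   # batch size
batch_means = path[: m * s].reshape(m, s).mean(axis=1)
xbar = path.mean()

# Estimate Var(xbar) from the batch-to-batch spread of the means.
var_xbar = np.sum((batch_means - xbar) ** 2) / (m * (m - 1))
halfwidth = 1.96 * np.sqrt(var_xbar)  # 95% interval via the normal quantile

lo, hi = xbar - halfwidth, xbar + halfwidth
print(f"95% CI: [{lo:.3f}, {hi:.3f}]")
```

In the multivariate setting of the paper, the scalar variance estimate is replaced by a covariance matrix estimator (plug-in or batch-means), and the interval becomes an ellipsoidal confidence region based on the limiting distribution.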

Category 1: Stochastic Programming


Download: [PDF]

Entry Submitted: 08/01/2021
Entry Accepted: 08/01/2021
Entry Last Modified: 08/07/2021
