Optimization Online


Penalized stochastic gradient methods for stochastic convex optimization with expectation constraints

Xiantao Xiao(xtxiao***at***dlut.edu.cn)

Abstract: The stochastic gradient method and its variants are simple yet effective for minimizing an expectation function over a closed convex set. However, none of these methods is directly applicable to stochastic programs with expectation constraints, since the projection onto the feasible set is prohibitively expensive. To deal with expectation-constrained stochastic convex optimization problems, we propose a class of penalized stochastic gradient (PSG) methods, in which each iteration performs a stochastic gradient step along the stochastic (sub)gradients of the objective function and the expectation constraint function. We prove that both the basic PSG method and the mini-batch PSG method converge almost surely to an optimal solution under mild assumptions. Favorable convergence rates of these methods are established by carefully selecting the stepsizes. We also extend PSG to optimization problems with multiple expectation constraints. The efficiency of the proposed methods is validated through numerical experiments on a variety of practical instances.
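The iteration described in the abstract can be illustrated with a minimal sketch: at each step, move along a stochastic subgradient of the exact-penalty function f(x, xi) + rho * max(g(x, xi), 0). The toy instance, the penalty weight rho, and the stepsize rule below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a penalized stochastic gradient (PSG) method for
# minimize E[f(x, xi)]  subject to  E[g(x, xi)] <= 0.
# At each iteration we step along one stochastic subgradient of the
# penalized objective f(x, xi) + rho * max(g(x, xi), 0).
import numpy as np

def psg(grad_f, grad_pen, x0, steps=20_000, c=0.1, rng=None):
    """Run PSG with stepsize c/sqrt(k+1); return the second-half iterate average."""
    rng = rng or np.random.default_rng(0)
    x, tail = x0, []
    for k in range(steps):
        eta = c / np.sqrt(k + 1)
        # one stochastic subgradient of the penalized objective
        x = x - eta * (grad_f(x, rng) + grad_pen(x, rng))
        if k >= steps // 2:          # average late iterates to damp the noise
            tail.append(x)
    return float(np.mean(tail))

# Toy instance (an assumption for illustration):
#   f(x, xi) = (x - xi)^2 with xi ~ N(1, 1)      -> unconstrained minimum at x = 1
#   constraint E[x - xi'] <= 0, xi' ~ N(0.5, 0.1) -> feasible set x <= 0.5
rho = 5.0
grad_f = lambda x, rng: 2.0 * (x - rng.normal(1.0, 1.0))
# subgradient of rho * max(g(x, xi'), 0): rho when the sampled constraint is violated
grad_pen = lambda x, rng: rho * float(x - rng.normal(0.5, 0.1) > 0.0)

x_bar = psg(grad_f, grad_pen, x0=0.0)
print(round(x_bar, 2))  # settles near the constraint boundary x = 0.5
```

A mini-batch variant would simply average several sampled (sub)gradients per step before the update, reducing the variance of each iteration at the cost of more samples.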

Keywords: Stochastic convex optimization, expectation constraints, stochastic approximation, convergence analysis, numerical experiments.

Category 1: Stochastic Programming

Citation: Unpublished, Dalian University of Technology, Dalian 116024, China, September 2019.

Download: [PDF]

Entry Submitted: 09/12/2019
Entry Accepted: 09/12/2019
Entry Last Modified: 09/12/2019
