Optimization Online


Randomized First-order Methods for Saddle Point Optimization

Cong Dang (congdd***at***ufl.edu)
Guanghui Lan (glan***at***ise.ufl.edu)

Abstract: In this paper, we present novel randomized algorithms for solving saddle point problems whose dual feasible region is a direct product of many convex sets. Our algorithms achieve an ${\cal O}(1/N)$ rate of convergence while solving only one dual subproblem at each iteration, and an ${\cal O}(1/N^2)$ rate under a strong convexity assumption on the dual problem. When applied to linearly constrained problems, they need to solve only one randomly selected subproblem per iteration, rather than all subproblems as in the Alternating Direction Method of Multipliers.
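To make the core idea concrete, here is a minimal sketch (not the paper's actual method; the step sizes `tau`, `sigma` and the block structure are assumptions for illustration) of a primal-dual iteration for a linearly constrained problem $\min_x f(x)$ s.t. $A_i x = b_i$, $i = 1, \ldots, m$, in which only one randomly selected dual block is updated per iteration:

```python
import numpy as np

def randomized_primal_dual(A_blocks, b_blocks, f_grad, x0, tau=0.01, sigma=0.1,
                           iters=5000, seed=0):
    """Hypothetical sketch: gradient step on the primal variable, then a dual
    ascent step on ONE randomly chosen block of multipliers y_i, instead of
    updating all blocks as a standard method would."""
    rng = np.random.default_rng(seed)
    m = len(A_blocks)
    x = x0.copy()
    y = [np.zeros(b.shape) for b in b_blocks]
    for _ in range(iters):
        # Primal step: descend on the Lagrangian f(x) + sum_i <y_i, A_i x - b_i>
        g = f_grad(x) + sum(A.T @ yi for A, yi in zip(A_blocks, y))
        x = x - tau * g
        # Dual step: ascend on a single randomly selected block only
        i = rng.integers(m)
        y[i] = y[i] + sigma * (A_blocks[i] @ x - b_blocks[i])
    return x, y
```

For example, with $f(x) = \tfrac12\|x\|^2$ and the two constraints $x_1 = 1$, $x_2 = 2$ split into separate blocks, the iterates approach the solution $x^\star = (1, 2)$ while touching only one multiplier block per iteration.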

Keywords: Stochastic Optimization, Block Coordinate Descent, Nonsmooth Optimization, Saddle Point Optimization, Alternating Direction Method of Multipliers

Category 1: Convex and Nonsmooth Optimization


Download: [PDF]

Entry Submitted: 09/30/2014
Entry Accepted: 09/30/2014
Entry Last Modified: 11/13/2015
