Randomized First-order Methods for Saddle Point Optimization

Cong Dang (congdd@ufl.edu)
Guanghui Lan (glan@ise.ufl.edu)

Abstract: In this paper, we present novel randomized algorithms for solving saddle point problems whose dual feasible region is a direct product of many convex sets. Our algorithms achieve an ${\cal O}(1/N)$ rate of convergence while solving only one dual subproblem at each iteration, and an ${\cal O}(1/N^2)$ rate of convergence under a strong convexity assumption on the dual problem. When applied to linearly constrained problems, they need to solve only one randomly selected subproblem per iteration, rather than all subproblems as in the Alternating Direction Method of Multipliers.

Keywords: Stochastic Optimization, Block Coordinate Descent, Nonsmooth Optimization, Saddle Point Optimization, Alternating Direction Method of Multipliers

Category 1: Convex and Nonsmooth Optimization

Entry Submitted: 09/30/2014
Entry Accepted: 09/30/2014
Entry Last Modified: 11/13/2015
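The paper's algorithms are not reproduced in this entry. Purely as an illustration of the general idea described in the abstract (updating one randomly selected dual block per iteration instead of all of them), here is a minimal sketch on a toy linearly constrained problem, min_x (1/2)||x||^2 subject to Ax = b, using a generic primal-dual gradient scheme with randomized dual-block updates. The function name, step sizes, and test problem are all assumptions for this sketch, not the authors' method.

```python
import numpy as np

def randomized_primal_dual(A, b, steps=20000, tau=0.1, sigma=0.1, seed=0):
    """Sketch: primal-dual gradient method for
        min_x 0.5||x||^2  s.t.  Ax = b,
    with Lagrangian L(x, y) = 0.5||x||^2 + y^T (Ax - b).
    Each iteration takes a full primal step but updates only ONE
    randomly selected dual block (here, one row of A)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    y = np.zeros(m)
    for _ in range(steps):
        # Primal descent step on L(., y): gradient is x + A^T y.
        x = x - tau * (x + A.T @ y)
        # Pick one dual block uniformly at random and ascend on it,
        # instead of updating every multiplier as ADMM-style methods do.
        i = rng.integers(m)
        y[i] = y[i] + sigma * (A[i] @ x - b[i])
    return x, y

# Toy instance: A invertible, so the feasible point is unique.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
x, y = randomized_primal_dual(A, b)
print(np.linalg.norm(A @ x - b))  # feasibility residual, small after many steps
```

Because the primal objective is strongly convex, the iterates settle near the unique saddle point even though only one (randomly chosen) multiplier is refreshed per iteration.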