Randomized First-order Methods for Saddle Point Optimization

In this paper, we present novel randomized algorithms for solving saddle point problems whose dual feasible region is a direct product of many convex sets. Our algorithms achieve an ${\cal O}(1/N)$ rate of convergence while solving only one dual subproblem at each iteration, and an ${\cal O}(1/N^2)$ rate of convergence under a strong convexity assumption on the dual problem. When applied to linearly constrained problems, they need to solve only one randomly selected subproblem per iteration, instead of solving all subproblems as in the Alternating Direction Method of Multipliers.
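To make the high-level scheme concrete, below is a minimal sketch of a randomized dual-block primal-dual iteration for the linearly constrained case $\min_x f(x)$ subject to $A_i x = b_i$, $i = 1, \dots, m$, via the saddle point reformulation $\min_x \max_y f(x) + \sum_i \langle y_i, A_i x - b_i \rangle$. The step sizes `tau` and `sigma`, the function names, and the specific update rules are illustrative assumptions, not the paper's exact algorithm; the sketch only shows the key structural idea that a single randomly selected dual block is updated per iteration and an ergodic average is returned, consistent with an ${\cal O}(1/N)$-type guarantee.

```python
import numpy as np

def randomized_primal_dual(f_grad, A_blocks, b_blocks, x0, tau, sigma, N, rng=None):
    """Illustrative sketch (not the paper's exact method) of a randomized
    primal-dual scheme for min_x f(x) s.t. A_i x = b_i, i = 1..m, using the
    saddle point form min_x max_y f(x) + sum_i <y_i, A_i x - b_i>.
    Only one randomly chosen dual block y_i is updated per iteration."""
    rng = rng or np.random.default_rng(0)
    m = len(A_blocks)
    x = x0.copy()
    y = [np.zeros(b.shape) for b in b_blocks]
    x_avg = np.zeros_like(x0)
    for _ in range(N):
        # Primal step: gradient descent on the Lagrangian in x,
        # using all current dual blocks.
        grad = f_grad(x) + sum(A.T @ yi for A, yi in zip(A_blocks, y))
        x = x - tau * grad
        # Dual step: gradient ascent on ONE randomly selected block only,
        # i.e., one dual subproblem per iteration.
        i = rng.integers(m)
        y[i] = y[i] + sigma * (A_blocks[i] @ x - b_blocks[i])
        x_avg += x
    return x_avg / N  # ergodic average of the primal iterates
```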
