Approximate Primal Solutions and Rate Analysis in Dual Subgradient Methods

We study primal solutions obtained as a by-product of subgradient methods applied to the Lagrangian dual of a convex constrained primal optimization problem (possibly nonsmooth). The existing literature on using subgradient methods to generate primal optimal solutions is limited to methods that produce such solutions only asymptotically (i.e., in the limit as the number of subgradient iterations increases to infinity). Furthermore, no convergence rate results are known for these algorithms. In this paper, we propose and analyze dual subgradient methods that use averaging to generate approximate primal optimal solutions. These algorithms use a constant stepsize, as opposed to the diminishing stepsizes that dominate existing primal recovery schemes. We provide estimates on the convergence rate of the primal sequences. In particular, we bound the feasibility violation of the generated approximate primal solutions, and we give upper and lower bounds on the primal function values at these solutions. The feasibility violation and primal value estimates hold per iteration, thus providing practical stopping criteria. Our analysis relies on the Slater condition and the boundedness properties that the dual problem inherits under this condition.
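To make the scheme described above concrete, the following is a minimal sketch of a dual subgradient method with constant stepsize and primal averaging on a made-up instance: minimizing a separable quadratic over a box subject to one linear inequality constraint. The instance data, the function name lagrangian_minimizer, and the parameter values are illustrative assumptions, not taken from the report; the report's analysis applies to general convex (possibly nonsmooth) problems under the Slater condition.

```python
import numpy as np

# Hypothetical instance: minimize f(x) = ||x - c||^2 over the box X = [0, 1]^n
# subject to a single linear constraint g(x) = a'x - b <= 0.
# All data below are made up for illustration.

n = 5
rng = np.random.default_rng(0)
c = rng.uniform(0.5, 1.5, n)      # unconstrained minimizer (may violate a'x <= b)
a = rng.uniform(0.1, 1.0, n)
b = 1.0
alpha = 0.01                      # constant stepsize
num_iters = 5000

def lagrangian_minimizer(mu):
    """x_k = argmin_{x in X} f(x) + mu * g(x).
    For this separable quadratic over a box, the minimizer is the
    box projection of the unconstrained minimizer c - (mu/2) a."""
    return np.clip(c - 0.5 * mu * a, 0.0, 1.0)

mu = 0.0                          # dual iterate (scalar: one constraint)
x_avg = np.zeros(n)               # running average of primal iterates

for k in range(1, num_iters + 1):
    x_k = lagrangian_minimizer(mu)
    g_k = a @ x_k - b                    # subgradient of the dual function at mu
    mu = max(0.0, mu + alpha * g_k)      # projected dual subgradient step
    x_avg += (x_k - x_avg) / k           # average of the first k primal iterates

print("approx. primal value f(x_avg):", np.sum((x_avg - c) ** 2))
print("constraint violation max(0, g(x_avg)):", max(0.0, a @ x_avg - b))
```

The averaged iterate x_avg is the approximate primal solution whose feasibility violation and primal value the per-iteration estimates in the paper are meant to bound; the constant stepsize alpha controls the trade-off in those bounds.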

Citation

LIDS Technical Report 2753, Massachusetts Institute of Technology, Laboratory for Information and Decision Systems, March 2007
