Optimization Online


An O(1/k) Convergence Rate for the Variable Stepsize Bregman Operator Splitting Algorithm

William W. Hager (hager***at***ufl.edu)
Maryam Yashtini (myashtini3***at***math.gatech.edu)
Hongchao Zhang (hozhang***at***math.lsu.edu)

Abstract: An earlier paper proved the convergence of a variable stepsize Bregman operator splitting algorithm (BOSVS) for minimizing $\phi(Bu)+H(u)$, where $H$ and $\phi$ are convex functions and $\phi$ is possibly nonsmooth. The algorithm was shown to be relatively efficient when applied to partially parallel magnetic resonance image reconstruction problems. In this paper, the convergence rate of BOSVS is analyzed. When $H(u) = \|Au-f\|^2$, where $A$ is a matrix, it is shown that for an ergodic approximation $\mathbf{u}_k$ obtained by averaging $k$ BOSVS iterates, the error in the objective value $\phi(B\mathbf{u}_k) + H(\mathbf{u}_k)$ is $\mathcal{O}(1/k)$. When the optimization problem has a unique solution $u^*$, we obtain the estimate $\|\mathbf{u}_k - u^*\| = \mathcal{O}(1/\sqrt{k})$. The theoretical analysis is compared to observed convergence rates for partially parallel magnetic resonance image reconstruction problems, where $A$ is a large, dense, ill-conditioned matrix.
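In standard notation, the ergodic rate claimed in the abstract can be sketched as follows. The uniform averaging weights and the constants $C$, $C'$ are illustrative assumptions; the precise averaging scheme and constants are defined in the paper itself.

```latex
% Ergodic average of the first k BOSVS iterates u_1, ..., u_k
% (uniform weights assumed here for illustration):
\mathbf{u}_k = \frac{1}{k}\sum_{i=1}^{k} u_i
% Objective-value error at the ergodic average, with u^* a
% minimizer of \phi(Bu) + H(u) and C a constant independent of k:
\phi(B\mathbf{u}_k) + H(\mathbf{u}_k)
  - \bigl(\phi(Bu^*) + H(u^*)\bigr) \le \frac{C}{k}
% When the minimizer u^* is unique, the iterate error satisfies:
\|\mathbf{u}_k - u^*\| \le \frac{C'}{\sqrt{k}}
```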

Keywords: Nonsmooth optimization, convex optimization, BOSVS, ergodic convergence, saddle point problem, variational inequality

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization )

Citation: University of Florida, September 23, 2013


Entry Submitted: 04/23/2015
Entry Accepted: 04/23/2015
Entry Last Modified: 09/30/2015
