  


Error Forgetting of Bregman Iteration
Wotao Yin (wotao.yin@rice.edu)

Abstract: This short article analyzes an interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method after a change of variable, for minimizing a convex piecewise-linear function J(x) subject to the linear constraints Ax = b. The procedure obtains its solution by solving a sequence of unconstrained subproblems of minimizing J(x) + (1/2)||Ax - b^k||^2, where b^k is updated iteratively. In practice, the subproblem at each iteration is solved at relatively low accuracy. Let w^k denote the error introduced by stopping the subproblem solver early at iteration k. We show that if all the w^k are sufficiently small so that Bregman iteration enters the optimal face, then, while on the optimal face, Bregman iteration enjoys an interesting error-forgetting property: the distance between the current point and the optimal solution set is bounded by ||w^{k+1} - w^k||, independently of all the previous errors. This property partially explains why the Bregman iterative procedure works well for sparse optimization and, in particular, for l1 minimization. The error-forgetting property is unique to J(x) that is piecewise linear (also known as a polyhedral function), and the results of this article appear to be new to the literature on the augmented Lagrangian method.

Keywords: Bregman iteration, error forgetting, sparse optimization, l1 minimization, piecewise linear function, polyhedral function

Category 1: Convex and Nonsmooth Optimization (Nonsmooth Optimization)

Category 2: Convex and Nonsmooth Optimization (Convex Optimization)

Citation: Rice CAAM Report TR12-03, 2012

Download: [PDF]

Entry Submitted: 05/01/2012
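The procedure described in the abstract can be sketched in code. The following is a minimal NumPy illustration (not the author's implementation) for the l1 case, J(x) = ||x||_1: each subproblem min ||x||_1 + (1/2)||Ax - b^k||^2 is solved inexactly by early-stopped iterative soft-thresholding (ISTA, used here as an assumed inner solver), and b^k is then updated by adding back the residual b - Ax, which matches the augmented Lagrangian multiplier update. The function name `bregman_l1`, the solver choice, and the iteration counts are assumptions for illustration.

```python
import numpy as np

def shrink(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_l1(A, b, n_outer=50, n_inner=200, tol=1e-10):
    """Sketch of Bregman iteration for min ||x||_1 s.t. Ax = b.

    Outer loop: Bregman / augmented Lagrangian updates of b^k.
    Inner loop: ISTA, stopped early, so each subproblem is solved
    only approximately (the error w^k of the abstract).
    """
    m, n = A.shape
    x = np.zeros(n)
    bk = b.astype(float).copy()
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad of the smooth term
    for _ in range(n_outer):
        for _ in range(n_inner):           # inexact inner solve (early stopping)
            grad = A.T @ (A @ x - bk)
            x = shrink(x - grad / L, 1.0 / L)
        bk = bk + (b - A @ x)              # "add back the residual" (multiplier update)
        if np.linalg.norm(A @ x - b) <= tol * max(1.0, np.linalg.norm(b)):
            break
    return x
```

Despite the inexact inner solves, the outer iteration typically drives Ax close to b; once the iterates reach the optimal face, the earlier inner-solve errors no longer accumulate, which is the error-forgetting property the article analyzes.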