On Relaxation of Some Customized Proximal Point Algorithms for Convex Minimization: From Variational Inequality Perspective

The proximal point algorithm (PPA) is a fundamental method for convex programming. When the PPA is applied to linearly constrained convex problems, it is often preferable to choose an appropriate metric matrix to define the proximal regularization, so that the computational burden of the resulting PPA is reduced and, in many cases, the subproblems admit closed-form or otherwise efficiently computable solutions. This idea leads to the so-called customized PPA (also known as the preconditioned PPA), which covers the linearized augmented Lagrangian method (ALM), the primal-dual hybrid gradient (PDHG) algorithm, and the alternating direction method of multipliers (ADMM) as special cases. To accelerate the convergence of the customized PPA, a simple scheme is to over-relax the essential variables (the variables that are genuinely involved in the iterations) with a relaxation parameter $\gamma\in (0,2)$; this strategy has worked well in practice. However, when the primal variables are among the essential variables, this relaxation may destroy structure that we would like to preserve in the primal variables, e.g., sparsity, low rank, or non-negativity. In this paper we treat several customized PPA algorithms in a unified way via a mixed variational inequality approach and propose an alternative relaxation strategy to speed up convergence; our analysis is based on correcting only the dual variables individually. From the variational inequality perspective, we establish global convergence and a worst-case $\mathcal{O}(1/t)$ convergence rate for this family of relaxed algorithms. Finally, we demonstrate the performance improvements with numerical results.
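To fix ideas, the over-relaxation scheme referred to above typically takes the following generic form (a sketch of the standard relaxation step as commonly stated in the PPA literature, not the paper's specific dual-correction rule; the symbols $w^k$ and $\tilde{w}^k$ are illustrative): given the output $\tilde{w}^k$ of the customized PPA subproblem at the current iterate $w^k$, the essential variables are updated by

$$w^{k+1} \;=\; w^{k} - \gamma\,(w^{k} - \tilde{w}^{k}) \;=\; (1-\gamma)\,w^{k} + \gamma\,\tilde{w}^{k}, \qquad \gamma \in (0,2).$$

With $\gamma = 1$ this reduces to the unrelaxed customized PPA, while $\gamma$ close to $2$ often accelerates convergence in practice. Since $w^{k+1}$ is an affine combination of two iterates, properties such as sparsity or low rank of $\tilde{w}^{k}$ need not carry over when $w$ contains primal variables, which is the motivation for relaxing only the dual variables.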
