Optimization Online

Double-Regularization Proximal Methods, with Complementarity Applications

Paulo J. S. Silva (rsilva***at***ime.usp.br)
Jonathan Eckstein (jeckstei***at***rutcor.rutgers.edu)

Abstract: We consider the variational inequality problem formed by a general set-valued maximal monotone operator and a possibly unbounded "box" in $R^n$, and study its solution by proximal methods whose distance regularizations are coercive over the box. We prove convergence for a class of double regularizations generalizing a previously proposed class of Auslender et al. We apply these regularizations to complementarity problems via a dual formulation, obtaining a broadened class of generalized augmented Lagrangian methods. We point out connections between these methods and earlier work on "pure penalty" smoothing methods for complementarity; these connections lead to a new augmented Lagrangian based on the "neural network" smoothing function. Finally, we computationally compare this augmented Lagrangian with the established logarithmic-quadratic variant on the MCPLIB problem library, and show that the neural approach offers some advantages.
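
The "neural network" smoothing function mentioned in the abstract is commonly attributed to Chen and Mangasarian: p(t, mu) = mu * ln(1 + exp(t/mu)), the integral of the sigmoid, which converges to the plus function max(0, t) as mu -> 0. As a minimal illustrative sketch (assuming this is the function intended; the function name and test values below are ours, not the paper's), the following Python snippet demonstrates that convergence numerically.

    import numpy as np

    def neural_smooth_plus(t, mu):
        # Chen-Mangasarian "neural network" smoothing of max(0, t):
        #     p(t, mu) = mu * ln(1 + exp(t / mu))
        # np.logaddexp(0, x) = ln(1 + exp(x)) evaluates this without overflow.
        return mu * np.logaddexp(0.0, t / mu)

    # As mu shrinks, the smoothed value approaches max(0, t).
    t = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    for mu in (1.0, 0.1, 0.01):
        print("mu =", mu, "->", neural_smooth_plus(t, mu))
    print("max(0, t) =", np.maximum(0.0, t))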

Keywords: Proximal algorithms, augmented Lagrangians, complementarity

Category 1: Nonlinear Optimization (Other)

Category 2: Convex and Nonsmooth Optimization (Convex Optimization)

Citation: Report RRR 29-2003, RUTCOR, 640 Bartholomew Road, Rutgers University, Piscataway NJ 08854 USA, August 2003

Download: [Postscript]

Entry Submitted: 08/08/2003
Entry Accepted: 08/08/2003
Entry Last Modified: 08/08/2003
