Globally convergent Newton-type methods for multiobjective optimization

We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed to solve convex problems, whereas the second combines second-order information of the objective functions with ingredients of the steepest descent method. A key point of our approaches is to impose safeguard strategies on the search directions. These strategies are associated with conditions that, at each iteration, prevent the search direction from being too close to orthogonal to the multiobjective steepest descent direction and require the lengths of the two directions to be proportional. To fulfill the required safeguard conditions on the search directions of the Newton-type methods, we adopt the technique in which the Hessians are modified, if necessary, by adding multiples of the identity. For our first Newton-type method, we also show that, under convexity assumptions, the local superlinear rate of convergence (or quadratic, when the Hessians of the objectives are Lipschitz continuous) to a local efficient point of the given problem is recovered. Global convergence of both methods is established by first presenting and proving the global convergence of a general algorithm, and then showing that the new methods are instances of this general scheme. Numerical experiments illustrating the practical advantages of the proposed Newton-type schemes are presented.
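To make the safeguard ideas concrete, the following is a minimal Python sketch, not the paper's implementation: it illustrates the classical modified-Newton device of adding multiples of the identity to a Hessian until a Cholesky factorization succeeds, together with angle and length checks of a candidate direction against a steepest descent direction. All function names and constants (beta, tau_factor, theta, c_low, c_up) are illustrative assumptions, and the steepest descent direction d_sd is taken as given.

```python
import numpy as np

def regularized_newton_direction(grad, hess, beta=1e-3, tau_factor=2.0):
    """Modified-Newton safeguard: add tau * I to the Hessian, increasing
    tau until a Cholesky factorization succeeds, then solve for the
    direction. beta and tau_factor are illustrative choices, not the
    paper's parameters."""
    n = hess.shape[0]
    tau = 0.0 if np.all(np.diag(hess) > 0) else beta
    while True:
        try:
            L = np.linalg.cholesky(hess + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(tau_factor * tau, beta)
    # Solve (H + tau * I) d = -grad using the Cholesky factor L.
    d = -np.linalg.solve(L.T, np.linalg.solve(L, grad))
    return d

def satisfies_safeguards(d, d_sd, theta=0.1, c_low=0.1, c_up=10.0):
    """Angle and length safeguards (illustrative constants): keep d away
    from orthogonality with the steepest descent direction d_sd, and keep
    the norms of d and d_sd proportional."""
    nd, nsd = np.linalg.norm(d), np.linalg.norm(d_sd)
    angle_ok = np.dot(d, d_sd) >= theta * nd * nsd
    length_ok = c_low * nsd <= nd <= c_up * nsd
    return angle_ok and length_ok
```

In the single-objective case one would take d_sd = -grad, and a safeguarded method would fall back to d_sd (or increase the regularization) whenever the checks fail; in the multiobjective setting, d_sd is the multiobjective steepest descent direction obtained from a subproblem.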

Citation

M. L. N. Gonçalves, F. S. Lima, and L. F. Prudente, Globally convergent Newton-type methods for multiobjective optimization, Federal University of Goias, 2020.
