Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary

In this paper we consider the minimization of a continuous function that may fail to be differentiable, or twice differentiable, on the boundary of the feasible region. By exploiting an interior-point technique, we present first- and second-order optimality conditions for this problem that reduce to classical ones when the derivative on the boundary is available. For this class of problems, existing necessary conditions often rely on the notion of the subdifferential or become nontrivially weaker than the KKT conditions for the (twice-)differentiable counterpart problems. In contrast, this paper presents a new set of first- and second-order necessary conditions that are derived without using the subdifferential and that reduce exactly to the KKT conditions when (twice-)differentiability holds. As a result, these conditions are stronger than some existing ones considered for the discussed minimization problem when only non-negativity constraints are present. To achieve these optimality conditions in the special but important case of linearly constrained problems, we present two novel interior-point trust-region algorithms and show that their worst-case computational efficiency in attaining the potentially stronger optimality conditions matches the best-known complexity bounds. Since this work considers a more general problem than the literature, our results also indicate that the best-known complexity bounds hold for a wider class of nonlinear programming problems.
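For orientation, the following is a minimal sketch of the problem class and of the classical first-order KKT conditions that the new conditions are said to recover under differentiability; the symbols f, A, b, x*, and lambda are illustrative assumptions, not the authors' notation.

% Illustrative problem class (assumed notation): minimize a continuous f
% over a polyhedral feasible region, where f is smooth on the interior but
% possibly not (twice) differentiable on the boundary.
\begin{equation*}
  \min_{x \in \mathbb{R}^n} \; f(x)
  \quad \text{subject to} \quad A x \le b .
\end{equation*}
% Classical first-order KKT conditions, to which the paper's conditions
% reduce when \nabla f(x^*) exists on the boundary:
\begin{align*}
  \nabla f(x^*) + A^{\top} \lambda^* &= 0, \\
  A x^* \le b, \quad \lambda^* \ge 0, \quad (\lambda^*)^{\top} (A x^* - b) &= 0 .
\end{align*}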
