Stochastic generalized gradient methods for training nonconvex nonsmooth neural networks

The paper observes a similarity between the stochastic optimal control of discrete dynamical systems and the learning of multilayer neural networks. It focuses on contemporary deep networks with nonconvex nonsmooth loss and activation functions. The machine learning problems are treated as nonconvex nonsmooth stochastic optimization problems. As a model of nonsmooth nonconvex dependencies, the so-called generalized differentiable functions are used. The backpropagation method for calculating stochastic generalized gradients of the learning quality functional for such systems is substantiated on the basis of the Hamilton–Pontryagin formalism. Stochastic generalized gradient learning algorithms are extended to the training of nonconvex nonsmooth neural networks. The performance of a stochastic generalized gradient algorithm is illustrated on a linear multiclass classification problem.
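As a rough illustration of the kind of iteration the abstract describes, below is a minimal Python sketch of a stochastic generalized gradient method applied to linear multiclass classification. The Crammer–Singer multiclass hinge loss is used here as a stand-in nonsmooth objective (for this convex loss, generalized gradients reduce to ordinary subgradients); the function names `multiclass_hinge_subgradient` and `sgg_train` and the diminishing step size 1/k are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multiclass_hinge_subgradient(W, x, y):
    """A subgradient of the Crammer-Singer multiclass hinge loss
    max_j [1{j != y} + w_j.x] - w_y.x at a single sample (x, y).
    (Illustrative stand-in for the paper's nonsmooth objective.)"""
    scores = W @ x + (np.arange(W.shape[0]) != y)  # margin-adjusted class scores
    j = int(np.argmax(scores))                     # any maximizer is valid at kinks
    G = np.zeros_like(W)
    if j != y:                                     # loss is positive at this sample
        G[j] += x
        G[y] -= x
    return G

def sgg_train(X, Y, n_classes, steps=10_000, seed=0):
    """Stochastic generalized gradient descent with diminishing steps rho_k = 1/k
    (assumed step rule; the paper's conditions on rho_k may differ)."""
    rng = np.random.default_rng(seed)
    W = np.zeros((n_classes, X.shape[1]))
    for k in range(1, steps + 1):
        i = rng.integers(len(X))                   # sample one training pair
        W -= (1.0 / k) * multiclass_hinge_subgradient(W, X[i], Y[i])
    return W
```

A trained model predicts via `np.argmax(W @ x)`; the same update scheme, with stochastic generalized gradients computed by backpropagation, is what the paper extends to nonconvex nonsmooth networks.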

Citation

Preprint, 30.09.2019. V.M. Glushkov Institute of Cybernetics of the National Academy of Sciences of Ukraine, Kyiv, September 2019. To appear in "Cybernetics and Systems Analysis".
