Optimization Online
Substantiation of the Backpropagation Technique via the Hamilton-Pontryagin Formalism for Training Nonconvex Nonsmooth Neural Networks

Vladimir I. Norkin(vladimir.norkin***at***gmail.com)

Abstract: The paper observes the similarity between the stochastic optimal control of discrete dynamical systems and the training of multilayer neural networks, focusing on contemporary deep networks with nonconvex, nonsmooth loss and activation functions. Machine learning problems are treated as nonconvex, nonsmooth stochastic optimization problems, and so-called generalized differentiable functions are used as a model of nonsmooth, nonconvex dependencies. A method for calculating stochastic generalized gradients of the learning quality functional for such systems is substantiated on the basis of the Hamilton-Pontryagin formalism. This method extends the well-known "backpropagation" machine learning technique to nonconvex, nonsmooth networks, and stochastic generalized gradient learning algorithms are thereby extended to the training of such networks.
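To make the control-theoretic reading of backpropagation concrete, the following is a minimal, illustrative NumPy sketch, not taken from the paper: the forward pass plays the role of the state trajectory of a discrete dynamical system, the backward pass is the adjoint (costate) recursion of the Hamilton-Pontryagin formalism, and the update is a stochastic generalized (sub)gradient step for a nonsmooth ReLU network. The layer sizes, learning rate, and synthetic data are assumptions made purely for the example.

import numpy as np

# Illustrative toy setup: a two-layer ReLU network trained by stochastic
# generalized (sub)gradient descent.  All sizes and data are assumptions.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(1, n_hid))
lr = 0.05

def relu(z):
    return np.maximum(z, 0.0)

def relu_subgrad(z):
    # A measurable selection from the generalized gradient of ReLU
    # (the value 0 is chosen at the nonsmooth point z = 0).
    return (z > 0.0).astype(z.dtype)

for step in range(1000):
    # One random training pair: stochastic sampling of the learning functional.
    x = rng.normal(size=n_in)
    y = np.sin(x).sum()                    # synthetic target

    # Forward pass: state trajectory x -> h -> y_hat of the discrete system.
    z1 = W1 @ x
    h = relu(z1)
    y_hat = (W2 @ h)[0]
    loss = 0.5 * (y_hat - y) ** 2

    # Backward pass: adjoint (costate) recursion, i.e. backpropagation.
    p_out = y_hat - y                       # adjoint variable at the output
    gW2 = np.outer([p_out], h)              # generalized gradient w.r.t. W2
    p_hid = (W2.T @ [p_out]) * relu_subgrad(z1)   # adjoint at the hidden layer
    gW1 = np.outer(p_hid, x)                # generalized gradient w.r.t. W1

    # Stochastic generalized gradient step.
    W1 -= lr * gW1
    W2 -= lr * gW2

    if step % 200 == 0:
        print(f"step {step}: loss = {loss:.4f}")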

Keywords: machine learning, deep learning, multilayer neural networks, nonsmooth nonconvex optimization, stochastic optimization, stochastic generalized gradient.

Category 1: Nonlinear Optimization

Category 2: Convex and Nonsmooth Optimization

Category 3: Stochastic Programming

Citation: Preprint, September 2019. V.M. Glushkov Institute of Cybernetics of the National Academy of Sciences of Ukraine (NASU), Kyiv. To appear in Dopovidi NASU.

Entry Submitted: 09/19/2019
Entry Accepted: 09/19/2019
Entry Last Modified: 09/19/2019
