Optimization Online

Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima

Vladislav B. Tadic (v.b.tadic@bristol.ac.uk)

Abstract: This paper analyzes the convergence rate of stochastic gradient search. Using arguments based on differential geometry and Lojasiewicz inequalities, tight bounds on the convergence rate of general stochastic gradient algorithms are derived. In contrast to existing results, the results presented here allow the objective function to have multiple, non-isolated minima, impose no restriction on the values of the Hessian of the objective function, and do not require the algorithm estimates to have a single limit point. Applying these new results, the convergence rate of recursive prediction error identification algorithms is studied. The convergence rate of supervised and temporal-difference learning algorithms is also analyzed using the results derived in the paper.
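For readers unfamiliar with the setting, the following is a minimal illustrative sketch, not the paper's algorithm, analysis, or notation: a generic stochastic gradient recursion theta_{n+1} = theta_n - a_n (grad f(theta_n) + xi_n), run on an objective whose minimizers form a continuum (the unit circle), so that the minima are multiple and non-isolated and the Hessian is singular on the minimum set. The objective, step-size schedule, and noise model below are illustrative assumptions only.

    import numpy as np

    def grad_f(theta):
        # Illustrative objective f(theta) = (||theta||^2 - 1)^2: its minimum set is
        # the whole unit circle, so minima are non-isolated and the Hessian
        # degenerates on that set.
        return 4.0 * (theta @ theta - 1.0) * theta

    rng = np.random.default_rng(0)
    theta = np.array([2.0, -1.5])            # arbitrary starting point (assumption)
    for n in range(1, 10_001):
        a_n = 1.0 / n                        # diminishing step sizes (assumed schedule)
        xi_n = 0.1 * rng.standard_normal(2)  # zero-mean gradient noise (assumed model)
        theta = theta - a_n * (grad_f(theta) + xi_n)

    # Distance to the minimum set {theta : ||theta|| = 1} measures progress here,
    # since the iterates need not settle on any single minimizer.
    print(theta, abs(np.linalg.norm(theta) - 1.0))

Because the limit set is a continuum, the iterates need not approach a single point; bounds of the kind described in the abstract control how fast the objective values and the distance to the minimum set decay without assuming an isolated limit.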

Keywords: Stochastic gradient algorithms, rate of convergence, Lojasiewicz inequalities, system identification, recursive prediction error, ARMA models, machine learning, supervised learning, temporal-difference learning.

Category 1: Stochastic Programming

Category 2: Applications -- Science and Engineering (Control Applications )

Category 3: Applications -- Science and Engineering (Statistics )


Download: [PDF]

Entry Submitted: 04/27/2009
Entry Accepted: 04/27/2009
Entry Last Modified: 04/27/2009
