Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization

Authors

Solodov, Mikhail
Mangasarian, Olvi

Type

Technical Report

Abstract

The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as the divergence of the series of learning rates while the series of their squares converges, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay.
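
A minimal sketch, assuming details not given in the record, of the learning-rate condition stated in the abstract: serial online BP with the diminishing schedule eta_k = eta_0/(k + 1), whose series diverges while the series of its squares converges. The network shape, momentum coefficient `mu`, and weight-decay coefficient `lam` are illustrative placeholders, not values from the report.

```python
import numpy as np

# Hedged sketch of serial online BP (not the authors' code): one hidden
# tanh layer, per-example updates, a momentum term, and weight decay,
# with the schedule eta_k = eta0 / (k + 1), so that sum(eta_k) diverges
# while sum(eta_k**2) converges, matching the abstract's assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # toy inputs
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))       # toy scalar targets

W1 = rng.normal(scale=0.1, size=(3, 8))          # input-to-hidden weights
W2 = rng.normal(scale=0.1, size=(8,))            # hidden-to-output weights
V1, V2 = np.zeros_like(W1), np.zeros_like(W2)    # momentum buffers

eta0, mu, lam = 0.5, 0.9, 1e-4                   # illustrative constants
k = 0
for epoch in range(50):
    for i in rng.permutation(len(X)):            # serial online (per-example) BP
        eta = eta0 / (k + 1)                     # diverging, square-summable steps
        h = np.tanh(X[i] @ W1)                   # forward pass
        err = h @ W2 - y[i]                      # residual of squared-error loss
        g2 = err * h                             # gradient w.r.t. W2
        g1 = np.outer(X[i], err * W2 * (1 - h**2))  # backpropagated gradient w.r.t. W1
        V2 = mu * V2 - eta * (g2 + lam * W2)     # momentum plus weight decay
        V1 = mu * V1 - eta * (g1 + lam * W1)
        W2 += V2
        W1 += V1
        k += 1

print("final mean squared error:", np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
```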

Citation

94-06
