Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization

dc.contributor.author: Solodov, Mikhail
dc.contributor.author: Mangasarian, Olvi
dc.date.accessioned: 2013-01-25T19:44:22Z
dc.date.available: 2013-01-25T19:44:22Z
dc.date.issued: 1994
dc.description.abstract: The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as that the series of learning rates diverges while the series of their squares converges, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay.
dc.identifier.citation: 94-06
dc.identifier.uri: http://digital.library.wisc.edu/1793/64530
dc.subject: backpropagation convergence
dc.title: Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization
dc.type: Technical Report
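
The step-size condition named in the abstract (the learning rates form a divergent series while their squares form a convergent series) is the classical diminishing-step-size rule, satisfied for example by eta_k proportional to 1/k. The following is a minimal sketch, not code from the report, of serial online gradient descent under such a schedule; the least-squares problem and all names and parameters here are hypothetical stand-ins for the BP error function and its weights.

# A minimal sketch, assuming a least-squares stand-in for the BP error
# function: serial online gradient descent with diminishing learning rates
# eta = eta0 / (epoch + 1), whose series diverges while the series of its
# squares converges, as the abstract requires.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # 50 training examples, 3 features
w_true = np.array([1.0, -2.0, 0.5])     # hypothetical target weights
y = X @ w_true

w = np.zeros(3)                          # weights to be trained
eta0 = 0.2
for epoch in range(100):
    eta = eta0 / (epoch + 1)             # sum(eta) diverges, sum(eta**2) converges
    for x_i, y_i in zip(X, y):           # serial online pass: one example at a time
        grad_i = (x_i @ w - y_i) * x_i   # gradient of the single-example squared error;
        w -= eta * grad_i                # each step uses this perturbation of the
                                         # full-batch gradient, so the error need not
                                         # decrease monotonically
print("learned weights:", w)             # iterates approach the stationary point w_true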

Files

Original bundle

Name: 94-06.pdf
Size: 129.83 KB
Format: Adobe Portable Document Format
Description: Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization
