
Simulated annealing approach in back propagation / S. AMATO, B. APOLLONI, G. CAPORALI, U. MADESANI, A.M. ZANABONI. - In: NEUROCOMPUTING. - ISSN 0925-2312. - 3:5-6(1991), pp. 207-220. [10.1016/0925-2312(91)90003-T]

Simulated annealing approach in back propagation

B. Apolloni; A.M. Zanaboni
1991

Abstract

Viewing the training stage of the error-backpropagation algorithm as an optimization problem, in this paper we examine two ways of embedding simulated annealing into the usual gradient descent method in order to reach good minima of the error function. The first applies to continuous-state multilayer perceptrons (MLPs) and consists of a random selection of the descent direction around the steepest one. The second concerns binary-state MLPs, where a backpropagation of the right answer from output to input is realized through a Boltzmann machine.
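The first approach described in the abstract — perturbing the steepest-descent direction at random and accepting moves via an annealing schedule — can be illustrated with a minimal sketch. This is not the paper's implementation: the Metropolis acceptance rule, Gaussian perturbation, geometric cooling, and all function names and parameters below are illustrative assumptions.

```python
import math
import random

def sa_gradient_descent(f, grad, x0, lr=0.1, T0=1.0, cooling=0.95,
                        steps=200, noise=0.5):
    """Gradient descent whose descent direction is randomly perturbed
    around the steepest one; steps are accepted with a Metropolis
    criterion at a decreasing temperature (simulated annealing).
    Illustrative sketch only, not the algorithm from the paper."""
    x = list(x0)
    T = T0
    fx = f(x)
    for _ in range(steps):
        g = grad(x)
        # random direction around the steepest-descent direction -g;
        # the perturbation shrinks with the temperature
        d = [-gi + noise * T * random.gauss(0.0, 1.0) for gi in g]
        cand = [xi + lr * di for xi, di in zip(x, d)]
        fc = f(cand)
        # always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-dE / T)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
        T *= cooling  # geometric cooling schedule
    return x, fx

# toy error surface with its minimum at (1, -2), standing in for the
# error function of an MLP
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
grad = lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)]

random.seed(0)
x, fx = sa_gradient_descent(f, grad, [5.0, 5.0])
```

As the temperature falls, the perturbation vanishes and the method reduces to plain gradient descent; early on, the randomized direction and occasional uphill acceptances let the search escape shallow local minima of the error surface.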
Boltzmann machine; error backpropagation; learning; simulated annealing
1991
Article (author)
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/50878
Citations
  • Scopus: 19