
Training a network of mobile neurons / B. Apolloni, S. Bassis, L. Valerio - In: The 2011 International Joint Conference on Neural Networks (IJCNN) : IJCNN 2011 conference proceedings : July 31–August 5, 2011, Doubletree Hotel, San Jose, California, USA. - Piscataway (New Jersey) : IEEE, 2011. - ISBN 9781424496358. - pp. 1683-1691 (( International Joint Conference on Neural Networks (IJCNN) held in San Jose (California) in 2011 [10.1109/IJCNN.2011.6033427].

Training a network of mobile neurons

B. Apolloni; S. Bassis; L. Valerio
2011

Abstract

We introduce a new paradigm of neural networks where neurons autonomously search for the best reciprocal position in a topological space so as to exchange information more profitably. The idea that elementary processors move within a network to reach a proper position is borne out by biological neurons in brain morphogenesis. The basic rule we state for this dynamics is that a neuron is attracted by the mates which are most informative and repelled by those which are most similar to it. By embedding this rule into a Newtonian dynamics, we obtain a network which autonomously organizes its layout. Thanks to this further adaptation, the network proves to be robustly trainable through an extended version of the back-propagation algorithm even in the case of deep architectures. We test this network on two classic benchmarks and thereby get many insights on how the network behaves, and when and why it succeeds.
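The attraction-repulsion rule stated in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): `info` stands in for a mate's informativeness and `sim` for pairwise similarity, here proxied by activation variance and absolute correlation; positions then evolve under a damped Newtonian update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N neurons with positions in a 2-D topological space
# and recorded activation traces of length T.
N, D, T = 8, 2, 100
pos = rng.normal(size=(N, D))      # neuron positions
vel = np.zeros((N, D))             # neuron velocities
act = rng.normal(size=(N, T))      # activation traces (stand-in data)

# Proxies (assumptions, not the paper's exact measures):
# similarity = absolute correlation; informativeness = activation variance.
sim = np.abs(np.corrcoef(act))
info = act.var(axis=1)

dt, damping = 0.05, 0.9
for _ in range(200):
    force = np.zeros_like(pos)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r = np.linalg.norm(d) + 1e-9
            u = d / r
            # Attracted by informative mates, repelled by similar ones.
            force[i] += (info[j] - sim[i, j]) * u
    vel = damping * vel + dt * force   # Newtonian update with friction
    pos = pos + dt * vel

print(pos.shape)
```

The damping term keeps the layout from oscillating indefinitely, so the neuron positions settle into a configuration shaped by the two competing forces.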
artificial neural networks; mobile neurons; Newtonian dynamics; backpropagation algorithm; multilayer neural network; neural network training; topological space
Field INF/01 - Computer Science
2011
Book Part (author)
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/288643
Citations
  • PubMed Central: ND
  • Scopus: 9
  • Web of Science: 5