Gaining degrees of freedom in subsymbolic learning / B. Apolloni, D. Malchiodi. - In: THEORETICAL COMPUTER SCIENCE. - ISSN 0304-3975. - 255:1-2(2001 Mar), pp. 295-321.

Gaining degrees of freedom in subsymbolic learning

B. Apolloni; D. Malchiodi
2001

Abstract

We provide some theoretical results on the sample complexity of PAC learning when the hypotheses are given by subsymbolic devices such as neural networks. In this framework we give new foundations to the notion of degrees of freedom of a statistic and relate it to the complexity of a concept class. Thus, for a given concept class and a given sample size, we discuss the efficiency of subsymbolic learning algorithms in terms of the degrees of freedom of the computed statistic. In this setting we appraise the sample complexity overhead that comes from relying on approximate hypotheses, and we show an increase in the degrees of freedom yielded by embedding available formal knowledge into the algorithm. For a known sample distribution, these quantities are related to the learning approximation goal, and a special production prize is shown. Finally, we prove that testing the approximation capability of a neural network generally demands a smaller sample size than training it.
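As background for the sample-complexity claims above (a classical reference bound, not a result of this paper): for a concept class of Vapnik-Chervonenkis dimension d, any algorithm that outputs a hypothesis consistent with the training sample PAC-learns the class with error at most \varepsilon and confidence at least 1-\delta once the sample size m satisfies, for a universal constant c,

\[
m \;\ge\; \frac{c}{\varepsilon}\left( d \ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta} \right).
\]

The degrees-of-freedom analysis summarized above refines bounds of this kind for hypotheses computed by subsymbolic devices.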
Computational learning; sentry functions; nested concept classes; approximate learning; neural networks
Settore INF/01 - Informatica
March 2001
Article (author)
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/160369
Citations
  • PMC: n/a
  • Scopus: 9
  • Web of Science (ISI): 9