Bias-variance analysis of Support Vector Machines for the development of SVM-based ensemble methods / G. Valentini, T.G. Dietterich. - In: JOURNAL OF MACHINE LEARNING RESEARCH. - ISSN 1532-4435. - 5(2004 Jul), pp. 725-775.

Bias-variance analysis of Support Vector Machines for the development of SVM-based ensemble methods

G. Valentini (first author); T.G. Dietterich
2004

Abstract

Bias-variance analysis provides a tool to study learning algorithms and can be used to properly design ensemble methods well tuned to the properties of a specific base learner. Indeed, the effectiveness of ensemble methods critically depends on the accuracy, diversity, and learning characteristics of the base learners. We present an extended experimental analysis of the bias-variance decomposition of the error in Support Vector Machines (SVMs), considering Gaussian, polynomial, and dot-product kernels. A characterization of the error decomposition is provided by analyzing the relationships between bias, variance, kernel type, and kernel parameters, offering insight into the way SVMs learn. The results show that the expected trade-off between bias and variance is sometimes observed, but more complex relationships can be detected, especially with Gaussian and polynomial kernels. We show that the bias-variance decomposition offers a rationale for developing ensemble methods that use SVMs as base learners, and we outline two directions for developing SVM ensembles: one exploiting the bias characteristics of SVMs, the other exploiting the dependence of bias and variance on the kernel parameters.
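
To make the measured quantities concrete, the following is a minimal sketch, in Python with scikit-learn, of how the bias and variance of an SVM with a Gaussian kernel can be estimated for 0/1 loss. It is not the authors' exact experimental protocol: the synthetic data set, the number of resampled training sets, and the values of C and gamma are illustrative assumptions. The idea is to train the same SVM on many resampled training sets, take the majority-vote "main prediction" at each test point, and read off bias (main prediction vs. true label) and variance (individual predictions vs. main prediction).

    # Sketch of a bias-variance estimate for an SVM classifier under 0/1 loss.
    # All data and hyperparameters below are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic binary problem standing in for a benchmark data set.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

    n_replicates = 50        # number of resampled training sets (assumption)
    train_size = 100         # small training sets make variance visible

    preds = np.empty((n_replicates, len(y_test)), dtype=int)
    for i in range(n_replicates):
        idx = rng.choice(len(y_pool), size=train_size, replace=True)
        clf = SVC(kernel="rbf", C=1.0, gamma=0.1)  # Gaussian kernel; C, gamma are assumptions
        clf.fit(X_pool[idx], y_pool[idx])
        preds[i] = clf.predict(X_test)

    # Main prediction: majority vote over replicates, per test point.
    main_pred = (preds.mean(axis=0) >= 0.5).astype(int)

    # Bias (per point, 0/1 loss): the main prediction disagrees with the true label.
    bias = (main_pred != y_test).astype(float)

    # Variance (per point): fraction of replicates disagreeing with the main prediction.
    variance = (preds != main_pred).mean(axis=0)

    avg_error = (preds != y_test).mean()
    print(f"average 0/1 error : {avg_error:.3f}")
    print(f"average bias      : {bias.mean():.3f}")
    print(f"average variance  : {variance.mean():.3f}")

Note that for 0/1 loss bias and variance do not simply sum to the error: variance on points where the main prediction is correct increases the error, while variance on points where it is wrong can reduce it, which is one reason the bias-variance relationships discussed in the abstract can take non-obvious forms.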
Keywords: Bias-variance analysis; Ensemble methods; Support vector machines
Settore INF/01 - Informatica (Computer Science)
Jul 2004
http://jmlr.csail.mit.edu/papers/volume5/valentini04a/valentini04a.pdf
Article (author)
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/143286
Citations
  • Scopus 216