Linear Classification and Selective Sampling Under Low Noise Conditions / G.G. Cavallanti, N.A. Cesa-Bianchi, C. Gentile. - In: Advances in Neural Information Processing Systems 21 : 22nd Annual Conference on Neural Information Processing Systems 2008 / edited by D. Koller. - Red Hook, NY : Curran Associates Inc., 2009. - ISBN 9781605609492. - pp. 249-256. (Paper presented at the 22nd Annual Conference on Neural Information Processing Systems, held in Vancouver, Canada, in 2008.)

Linear Classification and Selective Sampling Under Low Noise Conditions

G.G. Cavallanti (First); N.A. Cesa-Bianchi (Second)
2009

Abstract

We provide a new analysis of an efficient margin-based algorithm for selective sampling in classification problems. Using the so-called Tsybakov low noise condition to parametrize the instance distribution, we show bounds on the convergence rate to the Bayes risk of both the fully supervised and the selective sampling versions of the basic algorithm. Our analysis reveals that, excluding logarithmic factors, the average risk of the selective sampler converges to the Bayes risk at rate N^{-(1+\alpha)(2+\alpha)/(2(3+\alpha))}, where N denotes the number of queried labels and \alpha > 0 is the exponent in the low noise condition. For all \alpha > \sqrt{3} - 1 \approx 0.73, this convergence rate is asymptotically faster than the rate N^{-(1+\alpha)/(2+\alpha)} achieved by the fully supervised version of the same classifier, which queries all labels; for \alpha \to \infty the two rates exhibit an exponential gap. Experiments on textual data show that simple variants of the proposed selective sampler perform much better than popular and similarly efficient competitors.
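
As a quick check of the crossover value quoted above: both rates are negative powers of N, so the selective sampler is asymptotically faster exactly when its exponent is the larger of the two. Cancelling the common positive factor (1+\alpha) and cross-multiplying gives

(1+\alpha)(2+\alpha)/(2(3+\alpha)) > (1+\alpha)/(2+\alpha)  \iff  (2+\alpha)^2 > 2(3+\alpha)  \iff  \alpha^2 + 2\alpha - 2 > 0  \iff  \alpha > \sqrt{3} - 1 \approx 0.73.

To make "margin-based selective sampling" concrete, the sketch below pairs a regularized least-squares linear classifier with a query rule that asks for a label only when the magnitude of the current margin is small. It is a minimal illustration under assumed choices: the function name, the stream interface, and the shrinking threshold schedule are placeholders, not the specific algorithm or query condition analyzed in the paper.

    import numpy as np

    def selective_sampling_rls(stream, d, reg=1.0, kappa=1.0):
        """Margin-based selective sampling with a regularized least-squares
        (RLS) linear classifier -- an illustrative sketch only.  `stream`
        yields pairs (x, label_oracle), where x is a d-dimensional numpy
        array and label_oracle() returns the +1/-1 label on request.  The
        threshold schedule kappa * sqrt(log(t+1) / n_queries) is an assumed
        placeholder, not the rule analyzed in the paper."""
        A = reg * np.eye(d)        # regularized correlation matrix of queried instances
        b = np.zeros(d)            # label-weighted sum of queried instances
        n_queries = 0
        for t, (x, label_oracle) in enumerate(stream, start=1):
            w = np.linalg.solve(A, b)          # current RLS weight vector
            margin = float(w @ x)
            y_hat = 1 if margin >= 0 else -1   # predict with the sign of the margin
            threshold = kappa * np.sqrt(np.log(t + 1) / max(n_queries, 1))
            if abs(margin) <= threshold:       # small margin: query the label
                y = label_oracle()
                A += np.outer(x, x)            # store the queried example
                b += y * x
                n_queries += 1
            yield y_hat, n_queries

With a large kappa the rule queries nearly every label and the sampler behaves like the fully supervised baseline; shrinking kappa trades queried labels for risk, which is the regime the two rates above compare.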
Active learning, Selective sampling, Linear classification, Low noise
Settore INF/01 - Computer Science (Informatica)
2009
http://books.nips.cc/papers/files/nips21/NIPS2008_0163.pdf
Book Part (author)
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/67170
Citations
  • Scopus: 20