Quantity makes quality: learning with partial views / N. Cesa-Bianchi, S. Shalev-Shwartz, O. Shamir. In: Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence. Menlo Park, CA, USA: AAAI Press, 2011. ISBN 9781577355076. pp. 1547-1550. Paper presented at the 25th AAAI Conference on Artificial Intelligence, held in San Francisco in 2011.
Quantity makes quality: learning with partial views
N. Cesa-Bianchi; S. Shalev-Shwartz; O. Shamir
2011
Abstract
In many real-world applications, the number of examples to learn from is plentiful, but we can only obtain limited information on each individual example. We study the possibilities of efficient, provably correct, large-scale learning in such settings. The main theme we would like to establish is that large amounts of examples can compensate for the lack of full information on each individual example. The type of partial information we consider can be due to inherent noise or to constraints on the type of interaction with the data source. In particular, we describe and analyze algorithms for budgeted learning, in which the learner can only view a few attributes of each training example (Cesa-Bianchi, Shalev-Shwartz, and Shamir 2010a; 2010c), and algorithms for learning kernel-based predictors when individual examples are corrupted by random noise (Cesa-Bianchi, Shalev-Shwartz, and Shamir 2010b).
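To make the budgeted-learning setting concrete, the sketch below shows one simple way a learner could proceed when it may inspect only a few attributes of each example: sample a handful of coordinates uniformly at random, reweight them into an unbiased estimate of the full example, and feed the resulting noisy gradients to a stochastic-gradient learner. This is only an illustrative sketch under those assumptions (uniform sampling, squared loss, and the hypothetical helper names `sample_attributes` and `budgeted_sgd`); it is not the algorithm analyzed in the cited papers.

```python
import numpy as np

def sample_attributes(x, k, rng):
    """Observe k of the d attributes of x, chosen uniformly at random,
    and inverse-probability weight them so that E[x_hat] = x."""
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    x_hat = np.zeros(d)
    x_hat[idx] = x[idx] * (d / k)
    return x_hat

def budgeted_sgd(X, y, k=2, lr=0.01, epochs=5, seed=0):
    """SGD for a linear predictor under squared loss, where each update
    inspects only 2*k attribute values of the current example; two
    independent partial views keep the gradient estimate unbiased."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            u = sample_attributes(X[i], k, rng)  # first partial view
            v = sample_attributes(X[i], k, rng)  # independent second view
            grad = (u @ w - y[i]) * v            # E[grad] = (x.w - y) * x
            w -= lr * grad
    return w
```

With many examples, the extra variance introduced by the partial views averages out across updates, which is the sense in which quantity of data can substitute for the missing per-example information.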
| File | Type | Access | Size | Format |
|---|---|---|---|---|
| 2011_aaai_cesshalsham.pdf | Publisher's version/PDF | Open access | 208.92 kB | Adobe PDF |