Online learning with switching costs and other adaptive adversaries / N. Cesa-Bianchi, O. Dekel, O. Shamir. In: Advances in Neural Information Processing Systems. [s.l.]: Neural Information Processing Systems Foundation, 2013, pp. 1160-1168. (Conference on Neural Information Processing Systems, held in Lake Tahoe, USA, 2013.)

Online learning with switching costs and other adaptive adversaries

N. Cesa-Bianchi;
2013

Abstract

We study the power of different types of adaptive (nonoblivious) adversaries in the setting of prediction with expert advice, under both full-information and bandit feedback. We measure the player's performance using a new notion of regret, also known as policy regret, which better captures the adversary's adaptiveness to the player's behavior. In a setting where losses are allowed to drift, we characterize, in a nearly complete manner, the power of adaptive adversaries with bounded memories and switching costs. In particular, we show that with switching costs, the attainable rate with bandit feedback is Θ̃(T^{2/3}). Interestingly, this rate is significantly worse than the Θ(√T) rate attainable with switching costs in the full-information case. Via a novel reduction from experts to bandits, we also show that a bounded memory adversary can force Θ̃(T^{2/3}) regret even in the full-information case, proving that switching costs are easier to control than bounded memory adversaries. Our lower bounds rely on a new stochastic adversary strategy that generates loss processes with strong dependencies.
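As a minimal illustrative sketch of the regret notion the abstract refers to (the notation here is a plain rendering and may differ slightly from the paper's): against an adaptive adversary with memory m, the loss at round t may depend on the player's m previous actions as well as the current one, and policy regret compares the player's cumulative loss with that of the best single action replayed against the same adversary,

    R_T = \sum_{t=1}^{T} f_t(x_{t-m}, \dots, x_t) - \min_{x} \sum_{t=1}^{T} f_t(x, \dots, x).

A switching-cost adversary is the memory-one special case f_t(x_{t-1}, x_t) = \ell_t(x_t) + \mathbb{1}\{x_t \neq x_{t-1}\}, so each change of action incurs one extra unit of loss; the rates Θ̃(T^{2/3}) (bandit feedback) and Θ(√T) (full information) quoted above refer to this quantity.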
Academic field: INF/01 - Computer Science (Informatica)
2013
http://papers.nips.cc/paper/5151-online-learning-with-switching-costs-and-other-adaptive-adversaries.pdf
Book Part (author)

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/231340
Citations
  • Scopus 66