
Delay and cooperation in nonstochastic bandits / N. Cesa-Bianchi, C. Gentile, Y. Mansour, A. Minora. - In: JOURNAL OF MACHINE LEARNING RESEARCH. - ISSN 1532-4435. - 49:6(2016), pp. 605-622. (Paper presented at the 29th Conference on Learning Theory, held in New York in 2016.)

Delay and cooperation in nonstochastic bandits

N. Cesa-Bianchi
2016

Abstract

We study networks of communicating learning agents that cooperate to solve a common nonstochastic bandit problem. Agents use an underlying communication network to get messages about actions selected by other agents, and drop messages that took more than d hops to arrive, where d is a delay parameter. We introduce Exp3-Coop, a cooperative version of the Exp3 algorithm, and prove that with K actions and N agents the average per-agent regret after T rounds is at most of order √((d + 1 + (K/N)·α_{≤d}) T ln K), where α_{≤d} is the independence number of the d-th power of the communication graph G. We then show that for any connected graph, for d = √K the regret bound is K^{1/4}√T, strictly better than the minimax regret √(KT) for noncooperating agents. More informed choices of d lead to bounds which are arbitrarily close to the full-information minimax regret √(T ln K) when G is dense. When G has sparse components, we show that a variant of Exp3-Coop, allowing agents to choose their parameters according to their centrality in G, strictly improves the regret. Finally, as a by-product of our analysis, we provide the first characterization of the minimax regret for bandit learning with delay.
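Exp3-Coop builds on the standard Exp3 exponential-weights update with importance-weighted loss estimates. Below is a minimal single-agent sketch of that base update, not the full cooperative algorithm: in Exp3-Coop the divisor would instead be the probability that at least one agent within d hops plays the action. The helper `loss_fn` is a hypothetical stand-in for the adversary's loss sequence.

```python
import math
import random

def exp3(K, T, eta, loss_fn):
    """Minimal single-agent Exp3 sketch (the base of Exp3-Coop).

    loss_fn(t, i) returns the adversarial loss in [0, 1] of action i
    at round t; only the played action's loss is revealed.
    """
    weights = [1.0] * K
    total_loss = 0.0
    for t in range(T):
        z = sum(weights)
        probs = [w / z for w in weights]
        # sample an action from the exponential-weights distribution
        i = random.choices(range(K), weights=probs)[0]
        loss = loss_fn(t, i)
        total_loss += loss
        # importance-weighted estimate: unbiased for the played action.
        # (In Exp3-Coop, probs[i] would be replaced by the probability
        # that some agent within d hops plays action i.)
        est = loss / probs[i]
        weights[i] *= math.exp(-eta * est)
    return total_loss

# Example: constant loss 0.5 on every action over 100 rounds
print(exp3(4, 100, 0.1, lambda t, i: 0.5))  # → 50.0
```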
Settore INF/01 - Informatica (Computer Science)
2016
http://jmlr.org/proceedings/papers/v49/cesa-bianchi16.html
Article (author)
Files in this item:
cesa-bianchi16.pdf - Publisher's version/PDF, open access, 386.89 kB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/423453
Citations:
  • Scopus: 43