An Efficient Algorithm for Cooperative Semi-Bandits / R. Della Vecchia, T.R. Cesari. In: Algorithmic Learning Theory, Proceedings of Machine Learning Research, edited by V. Feldman, K. Ligett, S. Sabato, 2021, pp. 1-24. Presented at the Algorithmic Learning Theory conference, held as a virtual event in 2021.
An Efficient Algorithm for Cooperative Semi-Bandits
T.R. Cesari
2021
Abstract
We consider the problem of asynchronous online combinatorial optimization on a network of communicating agents. At each time step, some of the agents are stochastically activated, requested to make a prediction, and the system pays the corresponding loss. Then, neighbors of active agents receive semi-bandit feedback and exchange some succinct local information. The goal is to minimize the network regret, defined as the difference between the cumulative loss of the predictions of active agents and that of the best action in hindsight, selected from a combinatorial decision set. The main challenge in such a context is to control the computational complexity of the resulting algorithm while retaining minimax-optimal regret guarantees. We introduce Coop-FTPL, a cooperative version of the well-known Follow The Perturbed Leader algorithm, which implements a new loss estimation procedure generalizing the Geometric Resampling of Neu and Bartók (2013) to our setting. Assuming that the elements of the decision set are k-dimensional binary vectors with at most m non-zero entries and that α₁ is the independence number of the network, we show that the expected regret of our algorithm after T time steps is of order Q√(mkT log(k)(kα₁/Q + m)), where Q is the total activation probability mass. Furthermore, we prove that this rate is only a factor √(k log k) away from the best achievable one, and that Coop-FTPL has a state-of-the-art T^(3/2) worst-case computational complexity.
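The two building blocks the abstract names, Follow The Perturbed Leader and the Geometric Resampling of Neu and Bartók (2013), can be illustrated with a minimal single-agent sketch. This is not the cooperative Coop-FTPL algorithm of the paper; the function name, signature, and loop structure below are illustrative assumptions, shown only to make the perturbed-leader step and the resampling-based loss estimate concrete.

```python
import numpy as np

def ftpl_geometric_resampling(losses, decision_set, eta, M, rng=None):
    """Single-agent FTPL with geometric-resampling loss estimates.

    A sketch of the building blocks named in the abstract, NOT the
    cooperative Coop-FTPL algorithm; name and signature are illustrative.

    losses:       (T, k) array of true per-coordinate losses in [0, 1].
    decision_set: (n, k) binary matrix; each row is one combinatorial action.
    eta:          learning rate for the perturbed-leader step.
    M:            cap on the number of geometric-resampling redraws.
    Returns the total loss incurred over the T rounds.
    """
    rng = np.random.default_rng(rng)
    T, k = losses.shape
    loss_est = np.zeros(k)          # cumulative loss estimates
    total = 0.0
    for t in range(T):
        def draw():
            # Perturbed leader: minimize eta*<v, loss_est> - <v, z>
            # over the decision set, with fresh exponential noise z.
            z = rng.exponential(scale=1.0, size=k)
            scores = decision_set @ (eta * loss_est - z)
            return decision_set[np.argmin(scores)]

        v = draw()                  # action actually played
        total += float(v @ losses[t])

        # Geometric resampling: K[i] approximates 1 / P(coordinate i
        # is played) by counting independent redraws until coordinate i
        # is included again, truncated at M.
        K = np.full(k, M)
        for s in range(1, M + 1):
            v_prime = draw()
            unresolved = (K == M) & (v == 1)
            K[unresolved & (v_prime == 1)] = s
            if not ((K == M) & (v == 1)).any():
                break

        # Semi-bandit importance-weighted estimate: only the coordinates
        # of the played action contribute.
        loss_est += v * losses[t] * K
    return total
```

The truncation parameter M trades bias for computation: larger M gives less biased loss estimates at the cost of more redraws per round, which is where the worst-case computational complexity mentioned in the abstract comes from.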
File: della-vecchia21a.pdf (Publisher's version/PDF, open access, 326.72 kB, Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.