Stochastic Bandits with Delay-Dependent Payoffs / L. Cella, N. Cesa-Bianchi. In: Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), S. Chiappa, R. Calandra (eds.). PMLR (Proceedings of Machine Learning Research, vol. 108), 2020, pp. 1168-1177. Paper presented at the 23rd International Conference on Artificial Intelligence and Statistics, held online in 2020.

Stochastic Bandits with Delay-Dependent Payoffs

L. Cella; N. Cesa-Bianchi
2020

Abstract

Motivated by recommendation problems in music streaming platforms, we propose a nonstationary stochastic bandit model in which the expected reward of an arm depends on the number of rounds that have passed since the arm was last pulled. After proving that finding an optimal policy is NP-hard even when all model parameters are known, we introduce a class of ranking policies provably approximating, to within a constant factor, the expected reward of the optimal policy. We show an algorithm whose regret with respect to the best ranking policy is bounded by Õ(√(kT)), where k is the number of arms and T is time. Our algorithm uses only O(k ln ln T) switches, which helps when switching between policies is costly. As constructing the class of learning policies requires ordering the arms according to their expectations, we also bound the number of pulls required to do so. Finally, we run experiments to compare our algorithm against UCB on different problem instances.
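
To make the payoff model concrete, the following minimal Python sketch simulates a delay-dependent instance and compares vanilla UCB1 against a simple cyclic ranking policy. The saturating recharge function mu[i] * (1 - exp(-d / tau[i])), the parameter ranges, the noise model, and the greedy prefix search over the arms sorted by expectation are all illustrative assumptions made here for exposition; they are not the paper's exact parameterization or algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical instance: the expected reward of arm i, pulled d rounds
    # after its last pull, is mu[i] * (1 - exp(-d / tau[i])). This saturating
    # "recharge" form and all parameters below are illustrative assumptions,
    # not the paper's exact model.
    k, T = 5, 10_000
    mu = rng.uniform(0.2, 1.0, size=k)    # per-arm baseline expectations
    tau = rng.uniform(1.0, 10.0, size=k)  # per-arm recharge time scales

    def expected_reward(i, d):
        # d = rounds since arm i was last pulled (inf on the first pull)
        return mu[i] * (1.0 - np.exp(-d / tau[i]))

    # Baseline: vanilla UCB1, which ignores delays and so tends to keep
    # replaying an arm whose payoff has not recharged.
    last = np.full(k, -np.inf)
    counts, means, total = np.zeros(k), np.zeros(k), 0.0
    for t in range(T):
        if t < k:
            i = t  # pull each arm once to initialize
        else:
            i = int(np.argmax(means + np.sqrt(2 * np.log(t) / counts)))
        r = expected_reward(i, t - last[i]) + 0.1 * rng.standard_normal()
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]
        last[i] = t
        total += r
    print(f"UCB1 average reward: {total / T:.3f}")

    # A ranking-style policy in the spirit of the paper's approximating
    # class: cycle through a fixed ordered subset of arms, so every arm in
    # the subset is replayed with a constant delay. Greedily trying prefixes
    # of the arms sorted by mu is a heuristic stand-in for the paper's
    # construction.
    def cycle_reward(order):
        last = np.full(k, -np.inf)
        tot = 0.0
        for t in range(T):
            i = order[t % len(order)]
            tot += expected_reward(i, t - last[i])
            last[i] = t
        return tot / T

    by_mu = np.argsort(mu)[::-1]
    best = max(cycle_reward(by_mu[:m]) for m in range(1, k + 1))
    print(f"best ranking-cycle average reward: {best:.3f}")

A cyclic schedule over the top m arms replays each of them with a fixed delay of m rounds, so under the assumed recharge model its long-run average reward is straightforward to evaluate; this regular structure is what makes ranking policies a tractable class to compete against, in contrast to UCB1, which ignores delays.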
Subject area: Settore INF/01 - Informatica (Computer Science)
Project: Algorithms, Games, and Digital Markets (ALGADIMAR)
Funder: MINISTERO DELL'ISTRUZIONE E DEL MERITO (Italian Ministry of Education and Merit)
Grant: 2017R9FHSR_006
http://proceedings.mlr.press/v108/cella20a/cella20a.pdf
Book Part (author)
Files in this product:
File: cella20a.pdf
Access: open (accesso aperto)
Type: Publisher's version/PDF
Size: 347.97 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/741070
Citations
  • PubMed Central: ND
  • Scopus: 20
  • Web of Science (ISI): 13