
Multi-user edge service orchestration based on Deep Reinforcement Learning / C. Quadri, A. Ceselli, G.P. Rossi. - In: COMPUTER COMMUNICATIONS. - ISSN 0140-3664. - 203:(2023 Mar 06), pp. 30-47. [10.1016/j.comcom.2023.02.027]

Multi-user edge service orchestration based on Deep Reinforcement Learning

C. Quadri (first author);
A. Ceselli (second author);
G.P. Rossi (last author)
2023

Abstract

The fifth generation (5G) of mobile networks offers a remarkable degree of flexibility to mobile operators, enabling them to provide users with effective and tailored network services. Software Defined Networking (SDN), Network Function Virtualization (NFV), and edge computing have given operators the opportunity to easily bring computational capacity to the edge and to support latency-sensitive services. While 5G standards have defined the technological and architectural frameworks to orchestrate services, finding effective resource management and QoS optimization policies is still an open research issue. In this paper, we propose an online orchestration methodology for a multi-user edge service. The orchestrator's goal is to simultaneously maximize the QoS and minimize the amount of resources needed. We provide a mathematical formulation to compute an optimal offline policy and derive an online approach based on a model-free Deep Reinforcement Learning (DRL) framework. As a novel feature, the DRL agent's action is modeled as a parametric combinatorial problem. A tailored multi-objective reward function leads the agent towards an effective choice of parameters for this model. Our models are built, trained, and fine-tuned using real data. Extensive simulations in diverse scenarios show that our online DRL approach produces solutions with small gaps to the optimal offline ones, enabling the operator to both save resources and grant users an adequate QoS level.
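The abstract describes a multi-objective reward that balances QoS against resource usage. A minimal illustrative sketch of such a scalarized reward is given below; the function name, weights, and normalization are hypothetical, not taken from the paper:

```python
# Hypothetical scalarized multi-objective reward, illustrating the idea of
# trading off QoS against resource consumption as described in the abstract.
# The weights w_qos and w_res are illustrative, not the paper's values.
def reward(qos: float, resources_used: float, resources_max: float,
           w_qos: float = 0.7, w_res: float = 0.3) -> float:
    """qos is assumed normalized to [0, 1]; the resource term rewards
    using fewer of the available resources."""
    resource_saving = 1.0 - resources_used / resources_max
    return w_qos * qos + w_res * resource_saving

# Example: high QoS while using half of the available resources.
r = reward(qos=0.9, resources_used=5, resources_max=10)
```

In a DRL setting such as the one outlined in the abstract, a scalarized reward of this kind would be returned by the environment at each step to guide the agent's choice of parameters for the combinatorial action model.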
DQN; Edge computing; Optimization
Academic field: INF/01 - Informatica (Computer Science)
6 Mar 2023
Article (author)
Files in this record:
File: dqn_moba_preprint.pdf
  Access: open access
  Type: Pre-print (manuscript submitted to the publisher)
  Size: 1.64 MB
  Format: Adobe PDF
  View/Open
File: 1-s2.0-S0140366423000737-main.pdf
  Access: restricted access
  Description: Article
  Type: Publisher's version/PDF
  Size: 2.86 MB
  Format: Adobe PDF
  View/Open · Request a copy

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/956981
Citations
  • PubMed Central: n/a
  • Scopus: 2
  • Web of Science: 0