A. Bertoncini, A. Ceselli, C. Quadri, "Latency-Aware Placement of Microservices in the Cloud-to-Edge Continuum via Resource Scaling," in 2025 IEEE International Conference on Smart Computing (SMARTCOMP), Cork, Jun. 2025, pp. 420-425. ISBN 979-8-3315-8647-8. DOI: 10.1109/smartcomp65954.2025.00103.
Latency-Aware Placement of Microservices in the Cloud-to-Edge Continuum via Resource Scaling
A. Bertoncini; A. Ceselli; C. Quadri
2025
Abstract
Latency-sensitive applications, such as autonomous driving in smart cities and smart industries, require a networking and computing infrastructure that supports their operations. The cloud-to-edge continuum is a promising architecture for providing computational capability close to edge devices. However, deploying latency-sensitive applications in the continuum is challenging because of the heterogeneity and geographical distribution of the computing nodes. In this paper, we address the deployment problem in a tele-operated autonomous driving scenario, formulating the orchestration task as a Virtual Network Function Placement Problem (VNFPP) with multi-tier performance levels, which enables vertical scaling of computational resources per microservice. Our MILP model, MORAL, minimizes node centrality-based deployment costs while satisfying resource and end-to-end latency constraints. We evaluated our approach through extensive simulations on realistic network topologies and synthetic applications, showing that the proposed model improves deployment feasibility, latency compliance, and resource efficiency compared to single-performance-tier variants and baseline strategies.
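The paper's actual MORAL formulation is not reproduced on this page. As a rough illustration of how a multi-tier VNF placement MILP of the kind the abstract describes can be encoded, the following is a minimal PuLP sketch: every number, service name, capacity, and the simple additive latency model below are invented for illustration and should not be taken as the authors' model. Note how vertical scaling enters through the tier index, which trades CPU demand against processing latency.

```python
# Illustrative sketch only (not the authors' MORAL model): a toy multi-tier
# VNF placement MILP solved with PuLP's bundled CBC solver.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# --- Toy instance; all values are made up for illustration ---
services = ["perception", "planning", "control"]         # microservice chain
nodes = ["edge1", "edge2", "cloud"]
tiers = [1, 2]                                           # performance tiers

cpu_req = {("perception", 1): 2, ("perception", 2): 4,   # CPU per (service, tier)
           ("planning", 1): 1, ("planning", 2): 2,
           ("control", 1): 1, ("control", 2): 2}
proc_lat = {1: 30.0, 2: 15.0}                            # ms; higher tier = faster
cpu_cap = {"edge1": 4, "edge2": 4, "cloud": 100}         # node CPU capacity
centrality = {"edge1": 1.0, "edge2": 1.2, "cloud": 5.0}  # deployment cost weight
net_lat = {"edge1": 5.0, "edge2": 8.0, "cloud": 40.0}    # ms to reach node
LATENCY_BUDGET = 120.0                                   # end-to-end bound, ms

prob = LpProblem("placement", LpMinimize)

# x[s, n, t] = 1 iff microservice s runs on node n at performance tier t.
x = {(s, n, t): LpVariable(f"x_{s}_{n}_{t}", cat=LpBinary)
     for s in services for n in nodes for t in tiers}

# Objective: centrality-weighted deployment cost.
prob += lpSum(centrality[n] * x[s, n, t]
              for s in services for n in nodes for t in tiers)

# Each microservice is placed exactly once, at exactly one tier.
for s in services:
    prob += lpSum(x[s, n, t] for n in nodes for t in tiers) == 1

# Node CPU capacity must not be exceeded.
for n in nodes:
    prob += lpSum(cpu_req[s, t] * x[s, n, t]
                  for s in services for t in tiers) <= cpu_cap[n]

# Crude end-to-end latency bound: sum of network access plus processing
# latency per placed service (a real model would follow the chain's links).
prob += lpSum((net_lat[n] + proc_lat[t]) * x[s, n, t]
              for s in services for n in nodes for t in tiers) <= LATENCY_BUDGET

prob.solve()
for (s, n, t), var in x.items():
    if var.value() == 1:
        print(f"{s} -> {n} (tier {t})")
```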
| File | Type | License | Size | Format | Access |
|---|---|---|---|---|---|
| 2025_CAVIA_SmartSys.pdf | Pre-print (manuscript submitted to the publisher) | Creative Commons | 485.5 kB | Adobe PDF | Open access |
| Latency-Aware_Placement_of_Microservices_in_the_Cloud-to-Edge_Continuum_via_Resource_Scaling.pdf | Publisher's version/PDF | No license | 2.4 MB | Adobe PDF | Restricted (copy on request) |