Latency-Aware Placement of Microservices in the Cloud-to-Edge Continuum via Resource Scaling / A. Bertoncini, A. Ceselli, C. Quadri (IEEE INTERNATIONAL CONFERENCE ON SMART COMPUTING). - In: 2025 IEEE International Conference on Smart Computing (SMARTCOMP)[s.l] : IEEE, 2025 Jun. - ISBN 979-8-3315-8647-8. - pp. 420-425 (( convegno International Conference on Smart Computing (SMARTCOMP) tenutosi a Cork nel 2025 [10.1109/smartcomp65954.2025.00103].

Latency-Aware Placement of Microservices in the Cloud-to-Edge Continuum via Resource Scaling

A. Bertoncini
First
;
A. Ceselli
Second
;
C. Quadri
Last
2025

Abstract

Latency-sensitive applications, such as autonomous driving in smart cities and smart industries, require a networking and computing infrastructure to support their operations. The cloud-to-edge continuum represents a promising architecture to provide computational capability close to edge devices. However, deploying latency-sensitive applications in the continuum is challenging due to the heterogeneity and geographical distribution of the computing nodes. In this paper, we address the deployment problem in a tele-operated autonomous driving scenario, formulating the orchestration task as a Virtual Network Function Placement Problem (VNFPP) with multi-tier performance levels, enabling vertical scaling of computational resources per microservice. Our MILP model, MORAL, minimizes node centrality-based deployment costs while satisfying resource and end-to-end latency constraints. We tested our approach through extensive simulations on realistic network topologies and synthetic applications, showing that the proposed model improves deployment feasibility, latency compliance, and resource efficiency compared to single-performance-tier versions and baseline strategies.
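The placement problem sketched in the abstract can be illustrated with a deliberately tiny brute-force example: pick a node and a performance tier for each microservice in a chain, respect node capacities and an end-to-end latency budget, and minimize a centrality-weighted deployment cost. This is only an illustrative sketch, not the paper's MORAL MILP; every node, tier, latency, and cost value below is invented for the example.

```python
# Toy brute-force sketch of multi-tier, latency-constrained microservice
# placement. Illustrative only: the paper formulates this as a MILP (MORAL);
# the topology, tiers, and numbers here are invented for the example.
from itertools import product

# Hypothetical 3-node continuum: node -> (CPU capacity, centrality-based unit cost)
nodes = {"edge": (4, 1.0), "fog": (8, 2.0), "cloud": (16, 4.0)}

# Symmetric link latencies in ms (0 when co-located on the same node)
link = {("edge", "fog"): 5, ("fog", "cloud"): 20, ("edge", "cloud"): 25}

def lat(a, b):
    return 0 if a == b else link.get((a, b), link.get((b, a)))

# Two chained microservices; each tier = (CPU demand, processing latency ms).
# Higher tiers use more CPU but process faster (vertical scaling).
tiers = {"detect": [(1, 30), (2, 18), (4, 10)], "plan": [(1, 25), (2, 15)]}
services = list(tiers)
LATENCY_BUDGET = 60  # end-to-end bound in ms

def solve():
    best, best_cost = None, float("inf")
    # Enumerate every (node, tier) choice per microservice
    choices = [[(n, t) for n in nodes for t in range(len(tiers[s]))]
               for s in services]
    for assign in product(*choices):
        # Node capacity constraint
        used = {n: 0 for n in nodes}
        for (n, t), s in zip(assign, services):
            used[n] += tiers[s][t][0]
        if any(used[n] > nodes[n][0] for n in nodes):
            continue
        # End-to-end latency: processing plus links along the chain
        total = sum(tiers[s][t][1] for (n, t), s in zip(assign, services))
        total += sum(lat(assign[i][0], assign[i + 1][0])
                     for i in range(len(assign) - 1))
        if total > LATENCY_BUDGET:
            continue
        # Centrality-weighted deployment cost
        cost = sum(nodes[n][1] * tiers[s][t][0]
                   for (n, t), s in zip(assign, services))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

A MILP solver replaces this enumeration at realistic scale, where the joint node-and-tier choice makes the search space grow exponentially with the number of microservices.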
No
English
Service Orchestration; Cloud-to-Edge Continuum; Mathematical Optimization
Sector INFO-01/A - Computer Science
Conference paper
Anonymous peer review
Scientific publication
   CAVIA: enabling the Cloud-to-Autonomous-Vehicles continuum for future Industrial Applications
   CAVIA
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA
   2022JAFATE_002

   SEcurity and RIghts in the CyberSpace (SERICS)
   SERICS
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA
   identification code PE00000014
2025 IEEE International Conference on Smart Computing (SMARTCOMP)
IEEE
Jun 2025
420
425
6
979-8-3315-8647-8
Internationally distributed volume
International Conference on Smart Computing (SMARTCOMP)
Cork
2025
International conference
Submitted paper
crossref
I adhere
A. Bertoncini, A. Ceselli, C. Quadri
Book Part (author)
partially_open
273
info:eu-repo/semantics/bookPart
3
Research products::03 - Contribution in volume
Files in this product:
File | Size | Format
2025_CAVIA_SmartSys.pdf

open access

Type: Pre-print (manuscript submitted to the publisher)
License: Creative Commons
Size: 485.5 kB
Format: Adobe PDF
Latency-Aware_Placement_of_Microservices_in_the_Cloud-to-Edge_Continuum_via_Resource_Scaling.pdf

restricted access

Type: Publisher's version/PDF
License: No license
Size: 2.4 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1175699
Citations
  • PMC (PubMed Central): ND
  • Scopus: 0
  • Web of Science: ND
  • OpenAlex: 0