
Trusting Data Updates to Drone-Based Model Evolution / M. Anisetti, C.A. Ardagna, N. Bena, E. Damiani, C.Y. Yeun, S. Yoon - In: Proceedings of 1st GENZERO Workshop / edited by M. Andreoni, S. Thakkar. - First edition. - [s.l.] : Springer, 2025. - ISBN 9789819510498. - pp. 81-89 (GENZERO24 conference held in Abu Dhabi in 2024) [10.1007/978-981-95-1050-4_10].

Trusting Data Updates to Drone-Based Model Evolution

M. Anisetti; C.A. Ardagna; N. Bena; E. Damiani
2025

Abstract

AI is revolutionizing our society, promising unmatched efficiency and effectiveness in numerous tasks. It already exhibits remarkable performance in several fields, from smartphone cameras to smart grids, from finance to medicine, to name but a few. Given the increasing reliance of applications, services, and infrastructures on AI models, it is fundamental to protect these models from malicious adversaries. On the one hand, AI models are black boxes whose behavior is opaque and depends on training data. On the other hand, an adversary can render an AI model unusable with just a few specially crafted inputs, driving the model's predictions according to her desires. This threat is especially relevant to collaborative protocols for AI model training and inference. These protocols may involve participants whose trustworthiness is uncertain, raising concerns about insider attacks on data, parameters, and models. Such attacks ultimately endanger humans, as AI models power smart services in real life (AI-based IoT). A key need emerges: ensuring that AI models and, more generally, AI-based systems trained and operating in a low-trust environment can guarantee a given set of non-functional requirements, including cybersecurity-related ones. Our paper targets this need, focusing on collaborative drone swarm missions in hostile environments. We propose a methodology that supports trustworthy data circulation and AI training among different, possibly untrusted, organizations involved in collaborative drone swarm missions. This methodology aims to strengthen collaborative training, possibly built on incremental and federated learning.
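The collaborative training the abstract alludes to is commonly realized through federated averaging. A minimal sketch of that aggregation step follows, assuming a FedAvg-style weighted average of client parameters; all names are illustrative and not taken from the paper:

```python
# Hypothetical sketch of federated averaging (FedAvg), the aggregation
# step underlying federated learning. Names are illustrative only.

def fed_avg(client_updates, client_sizes):
    """Weighted average of client model parameters.

    client_updates: list of parameter vectors (lists of floats),
                    one per participating client (e.g., drone/organization).
    client_sizes:   number of local training samples per client,
                    used as aggregation weights.
    """
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    global_model = [0.0] * n_params
    for params, size in zip(client_updates, client_sizes):
        weight = size / total
        for i, p in enumerate(params):
            global_model[i] += weight * p
    return global_model

# Two clients with equal data contribute equally to the global model.
print(fed_avg([[1.0, 2.0], [3.0, 4.0]], [10, 10]))  # → [2.0, 3.0]
```

In a low-trust setting such as the one targeted by the paper, this plain average is exactly where poisoned updates enter, which motivates vetting the trustworthiness of each contribution before aggregation.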
English
Drone Swarm; Federated Learning; Incremental Learning; Trust
Sector INFO-01/A - Computer Science
Conference paper
Anonymous referees
Basic research
Scientific publication
   MUSA - Multilayered Urban Sustainability Action
   MUSA
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA

   SEcurity and RIghts in the CyberSpace (SERICS)
   SERICS
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA
   identification code PE00000014
Proceedings of 1st GENZERO Workshop
M. Andreoni, S. Thakkar
First edition
Springer
2025
81
89
9
9789819510498
9789819510504
Internationally distributed volume
Diamond
GENZERO24
Abu Dhabi
2024
International conference
crossref
I adhere
M. Anisetti, C.A. Ardagna, N. Bena, E. Damiani, C.Y. Yeun, S. Yoon
Book Part (author)
open
273
info:eu-repo/semantics/bookPart
6
Research products::03 - Contribution in a volume
Files in this product:
File: AABDYY.GENZERO2024.pdf
Access: open access
Type: Publisher's version/PDF
License: Creative Commons
Size: 397.25 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1190015