
Preliminary Results on Sensitive Data Leakage in Federated Human Activity Recognition / R. Presotto, G. Civitarese, C. Bettini. - In: 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). - [s.l.]: IEEE, May 2022. - ISBN 978-1-6654-1647-4. - pp. 304-309. Presented at the Context and Activity Modeling and Recognition (CoMoRea) workshop, held in Pisa in 2022 [10.1109/PerComWorkshops53856.2022.9767215].

Preliminary Results on Sensitive Data Leakage in Federated Human Activity Recognition

R. Presotto; G. Civitarese; C. Bettini
2022

Abstract

Sensor-based Human Activity Recognition (HAR) has been a hot topic in pervasive computing for many years, as an enabling technology for several context-aware applications. However, the deployment of HAR in real-world scenarios is limited by some major challenges. Among those issues, privacy is particularly relevant, since activity patterns may reveal sensitive information about the users (e.g., personal habits, medical conditions). HAR solutions based on Federated Learning (FL) have been recently proposed to mitigate this problem. In FL, each user shares with a cloud server only the parameters of a locally trained model, while personal data are kept private. The cloud server is in charge of building a global model by aggregating the received parameters. Even though FL avoids the release of labelled sensor data, researchers have found that the parameters of deep learning models may still reveal sensitive information through specifically designed attacks. In this paper, we propose a first contribution in this line of research by introducing a novel framework to quantitatively evaluate the effectiveness of the Membership Inference Attack (MIA) for FL-based HAR. Our preliminary results on a public HAR dataset show how the global activity model may actually reveal sensitive information about the participating users and provide hints for future work on countering such attacks.
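As a rough illustration of the federated workflow described in the abstract (each client trains locally and uploads only its model parameters, which the server aggregates into a global model), the following is a minimal FedAvg-style sketch in Python/NumPy. The names (e.g., federated_average, client_params) are illustrative assumptions and are not taken from the paper.

# Minimal sketch of FedAvg-style parameter aggregation: clients share only
# locally trained parameters, and the server combines them into a global model.
# Names and structure are illustrative, not the paper's actual framework.
import numpy as np

def federated_average(client_params, client_sizes):
    """Weighted average of per-client parameters.

    client_params: list of per-client parameter lists (one np.ndarray per layer)
    client_sizes:  number of local training samples per client (aggregation weights)
    """
    total = float(sum(client_sizes))
    num_layers = len(client_params[0])
    global_params = []
    for layer in range(num_layers):
        # Weight each client's contribution by its share of the total training data.
        layer_avg = sum(
            (size / total) * params[layer]
            for params, size in zip(client_params, client_sizes)
        )
        global_params.append(layer_avg)
    return global_params

# Example: three clients, each contributing a single 2x2 weight matrix.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [100, 200, 100]
print(federated_average(clients, sizes)[0])  # weighted mean of the three matrices

In a full FL round the server would send the aggregated parameters back to the clients for further local training; the membership inference attack studied in the paper targets exactly this exchanged parameter information rather than the raw sensor data, which never leaves the device.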
Scientific sector: INF/01 - Computer Science
May 2022
Book Part (author)
Files in this record:

Preliminary_Results_on_Sensitive_Data_Leakage_in_Federated_Human_Activity_Recognition (1).pdf
Type: Pre-print (manuscript submitted to the publisher)
Access: restricted
Size: 619.99 kB
Format: Adobe PDF

Preliminary_Results_on_Sensitive_Data_Leakage_in_Federated_Human_Activity_Recognition.pdf
Type: Publisher's version/PDF
Access: restricted
Size: 1.34 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/926924
Citations
  • PubMed Central: n/a
  • Scopus: 2
  • Web of Science: 0