SelfAct: Personalized Activity Recognition Based on Self-Supervised and Active Learning / L. Arrotta, G. Civitarese, C. Bettini. - In: Mobile and Ubiquitous Systems: Computing, Networking and Services / edited by A. Zaslavsky, Z. Ning, V. Kalogeraki, D. Georgakopoulos, P.K. Chrysanthis. - Springer, 2024. - (Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering). - ISBN 9783031639883. - pp. 375-391. Paper presented at the 20th EAI International Conference, Melbourne, 2023. DOI: 10.1007/978-3-031-63989-0_19.

SelfAct: Personalized Activity Recognition Based on Self-Supervised and Active Learning

L. Arrotta; G. Civitarese (second author); C. Bettini (last author)
2024

Abstract

Supervised Deep Learning (DL) models are currently the leading approach for sensor-based Human Activity Recognition (HAR) on wearable and mobile devices. However, training them requires large amounts of labeled data, whose collection is often time-consuming, expensive, and error-prone. At the same time, due to the intra- and inter-variability of activity execution, activity models should be personalized for each user. In this work, we propose SelfAct: a novel framework for HAR that combines self-supervised and active learning to mitigate these problems. SelfAct leverages a large pool of unlabeled data collected from many users to pre-train a DL model through self-supervision, with the goal of learning a meaningful and efficient latent representation of sensor data. The resulting pre-trained model can be used locally by new users, who fine-tune it through a novel unsupervised active learning strategy. Our experiments on two publicly available HAR datasets demonstrate that SelfAct achieves results that are close to or even better than those of fully supervised approaches with only a few active learning queries.
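The abstract describes a two-stage pipeline: self-supervised pre-training of an encoder on a large unlabeled pool, followed by on-device fine-tuning in which an active learning criterion decides when to query the new user for a label. The sketch below illustrates that general pattern only; the record does not specify SelfAct's actual self-supervision objective, network architecture, or query strategy, so the SimCLR-style contrastive loss, the 1D-CNN encoder, the entropy-based query rule, and all function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract:
# (1) self-supervised pre-training on unlabeled sensor windows,
# (2) local fine-tuning that queries the user only for uncertain windows.
# The contrastive loss and entropy criterion are stand-ins, not SelfAct's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """1D-CNN encoder mapping a sensor window (channels x time) to an embedding."""
    def __init__(self, in_channels=6, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss between two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, D)
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def pretrain(encoder, unlabeled_windows, epochs=5, lr=1e-3):
    """Stage 1: self-supervised pre-training on a pool of unlabeled windows."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        # Jitter augmentation as a simple way to obtain two correlated views.
        v1 = unlabeled_windows + 0.01 * torch.randn_like(unlabeled_windows)
        v2 = unlabeled_windows + 0.01 * torch.randn_like(unlabeled_windows)
        loss = nt_xent_loss(encoder(v1), encoder(v2))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder

def active_fine_tune(encoder, classifier, stream, oracle, budget=10,
                     entropy_threshold=1.0, lr=1e-3):
    """Stage 2: on-device fine-tuning; ask the user for a label only when uncertain."""
    params = list(encoder.parameters()) + list(classifier.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    queries = 0
    for window in stream:                                # window: (channels, time)
        logits = classifier(encoder(window.unsqueeze(0)))
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum()
        if entropy > entropy_threshold and queries < budget:
            label = oracle(window)                       # active learning query
            queries += 1
            loss = F.cross_entropy(logits, torch.tensor([label]))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder, classifier

if __name__ == "__main__":
    torch.manual_seed(0)
    enc = Encoder()
    pretrain(enc, torch.randn(32, 6, 100))               # synthetic unlabeled pool
    clf = nn.Linear(64, 5)                               # 5 hypothetical activity classes
    user_stream = [torch.randn(6, 100) for _ in range(20)]
    active_fine_tune(enc, clf, user_stream, oracle=lambda w: 0)
```

In this reading, the labeling budget stays small because only windows whose prediction entropy exceeds a threshold trigger a query, which matches the abstract's claim of needing only a few active learning queries.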
Human Activity Recognition; Self-supervised Learning; Active Learning
Settore INF/01 - Informatica
2024
Book Part (author)
Files in this record:

Self_supervised_and_Active_Learning_for_HAR.pdf
  Access: restricted
  Type: Pre-print (manuscript submitted to the publisher)
  Size: 1.32 MB
  Format: Adobe PDF

978-3-031-63989-0_19.pdf
  Access: restricted
  Type: Publisher's version/PDF
  Size: 1.65 MB
  Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/2434/1079911