The majority of approaches to sensor-based activity recognition are based on supervised machine learning. While these methods achieve high recognition rates, a major challenge is understanding the rationale behind the classifier's predictions. Indeed, those predictions may have a significant impact on the follow-up actions taken in a smart living environment. We propose a novel approach for eXplainable Activity Recognition (XAR) based on interpretable machine learning models. We generate explanations by combining the feature values with the feature importance obtained from the underlying trained classifier. A quantitative evaluation on a real dataset of ADLs shows that our method is effective in providing explanations consistent with common knowledge. By comparing two popular ML models, our results also show that one-vs-one classifiers can provide better explanations in our framework.
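As a rough illustration of the idea described above (not the paper's actual implementation), an explanation can be produced by weighting each sensor feature's value by the importance the trained classifier assigns to it, then reporting the top-ranked features. The feature names and importance weights below are purely hypothetical:

```python
# Hypothetical sketch: combine feature values with classifier-derived
# feature importances and surface the top-ranked features as an explanation.
# Feature names and weights are illustrative, not taken from the paper.

def explain_prediction(feature_values, feature_importance, top_k=2):
    """Rank features by value * importance and return the top-k names."""
    scores = {
        name: value * feature_importance.get(name, 0.0)
        for name, value in feature_values.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy sensor features for a predicted "Cooking" activity
values = {"stove_on": 1.0, "fridge_opened": 1.0, "tv_on": 0.0}
importance = {"stove_on": 0.7, "fridge_opened": 0.2, "tv_on": 0.1}

top = explain_prediction(values, importance)
print(f"Predicted 'Cooking' mainly because of: {', '.join(top)}")
```

In a real pipeline, the importance weights would come from the interpretable model itself (e.g., per-class coefficients or tree-based importances), rather than being hand-specified as here.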
Explainable Activity Recognition over Interpretable Models / C. Bettini, G. Civitarese, M. Fiori. In: 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). [S.l.]: IEEE, 2021. ISBN 9781665404242. pp. 32-37. Presented at the CoMoRea workshop held in Kassel, 2021. DOI: 10.1109/PerComWorkshops51409.2021.9430955.
Explainable Activity Recognition over Interpretable Models
C. Bettini; G. Civitarese; M. Fiori
2021
| File | Type | Size | Format | Access |
|---|---|---|---|---|
| 21-CoMoRea.pdf | Pre-print (manuscript submitted to the publisher) | 464.9 kB | Adobe PDF | Restricted access |
| 09430955.pdf | Publisher's version/PDF | 1.08 MB | Adobe PDF | Restricted access |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.