Explainable Activity Recognition over Interpretable Models / C. Bettini, G. Civitarese, M. Fiori. In: 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). IEEE, 2021. ISBN 9781665404242. pp. 32-37. Presented at the CoMoRea workshop held in Kassel, 2021. DOI: 10.1109/PerComWorkshops51409.2021.9430955.

Explainable Activity Recognition over Interpretable Models

C. Bettini; G. Civitarese; M. Fiori

2021

Abstract

The majority of approaches to sensor-based activity recognition are based on supervised machine learning. While these methods reach high recognition rates, a major challenge is to understand the rationale behind the predictions of the classifier. Indeed, those predictions may have a relevant impact on the follow-up actions taken in a smart living environment. We propose a novel approach for eXplainable Activity Recognition (XAR) based on interpretable machine learning models. We generate explanations by combining the feature values with the feature importance obtained from the underlying trained classifier. A quantitative evaluation on a real dataset of ADLs shows that our method is effective in providing explanations consistent with common knowledge. By comparing two popular ML models, our results also show that one-versus-one classifiers can provide better explanations in our framework.
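The abstract describes combining feature values with feature importances from a trained interpretable classifier to generate explanations. A minimal sketch of that general idea follows; the feature names, values, and weights are hypothetical and this is not the authors' actual implementation:

```python
# Sketch: rank per-prediction explanation factors by combining feature
# values with feature importances (e.g., coefficients of a linear model).
# All names and numbers below are illustrative, not from the paper.

def explain_prediction(feature_names, feature_values, importances, top_k=3):
    """Score each feature as value * importance and return the top_k
    contributors (by absolute score) for the predicted class."""
    scores = [
        (name, value * weight)
        for name, value, weight in zip(feature_names, feature_values, importances)
    ]
    scores.sort(key=lambda s: abs(s[1]), reverse=True)
    return scores[:top_k]

# Hypothetical sensor features for a "cooking" prediction.
names = ["stove_power", "kitchen_presence", "fridge_open_count", "tv_on"]
values = [1.0, 1.0, 0.5, 0.0]      # normalized feature values
weights = [0.9, 0.7, 0.4, -0.6]    # importances from a trained linear model

for name, score in explain_prediction(names, values, weights):
    print(f"{name}: {score:+.2f}")
```

The highest-scoring features (here, stove power and kitchen presence) would then be verbalized into a natural-language explanation such as "cooking was recognized because the stove was active while you were in the kitchen".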
activity recognition; explainable artificial intelligence; smart-homes
Subject area: INF/01 - Computer Science
Book Part (author)
Files for this item:
  • 21-CoMoRea.pdf: Pre-print (manuscript submitted to the publisher), Adobe PDF, 464.9 kB, restricted access
  • 09430955.pdf: Publisher's version/PDF, Adobe PDF, 1.08 MB, restricted access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/2434/848683
Citations
  • Scopus: 2
  • Web of Science: 1