Using Large Language Models to Compare Explainable Models for Smart Home Human Activity Recognition / M. Fiori, G. Civitarese, C. Bettini. In: UbiComp '24, edited by V. Kostakos, J. Kay, T. Hoang. ACM, Oct 2024. ISBN 979-8-4007-1058-2, pp. 881-884. Presented at the International Joint Conference on Pervasive and Ubiquitous Computing, Melbourne, 2024. DOI: 10.1145/3675094.3679000.

Using Large Language Models to Compare Explainable Models for Smart Home Human Activity Recognition

M. Fiori (first author); G. Civitarese (second author); C. Bettini (last author)
2024

Abstract

Recognizing daily activities with unobtrusive sensors in smart environments enables various healthcare applications. Monitoring how subjects perform activities at home, and how this changes over time, can reveal early symptoms of health issues such as cognitive decline. Most approaches in this field use deep learning models, which are often seen as black boxes mapping sensor data to activities. However, non-expert users such as clinicians need to trust and understand these models' outputs. Thus, eXplainable AI (XAI) methods for Human Activity Recognition have emerged to provide intuitive natural language explanations from these models. Different XAI methods generate different explanations, and their effectiveness is typically evaluated through user surveys, which are often challenging in terms of cost and fairness. This paper proposes an automatic evaluation method that uses Large Language Models (LLMs) to identify, in a pool of candidates, the best XAI approach for non-expert users. Our preliminary results suggest that LLM evaluation aligns with user surveys.
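The abstract describes the LLM-based evaluation only at a high level. As a rough illustration, and not the authors' actual protocol, the sketch below shows one plausible LLM-as-judge setup in Python: the model is shown the natural-language explanations produced by competing XAI methods for the same recognized activity and asked which one a non-expert would find clearest. The OpenAI client, model name, prompt wording, and method names are all assumptions made for this example.

```python
# Hypothetical sketch of an LLM-as-judge comparison between XAI methods.
# The prompt wording, model name, and selection scheme are assumptions,
# not the evaluation protocol used in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def pick_best_explanation(activity: str, candidates: dict[str, str]) -> str:
    """Ask the LLM which candidate explanation a non-expert would prefer.

    `candidates` maps an XAI method name (hypothetical labels here) to the
    natural-language explanation it produced for the recognized activity.
    """
    listing = "\n".join(
        f"{i + 1}. {text}" for i, text in enumerate(candidates.values())
    )
    prompt = (
        f"A smart-home system recognized the activity '{activity}'.\n"
        f"Below are {len(candidates)} candidate explanations of that decision, "
        "intended for a non-expert reader such as a clinician.\n"
        f"{listing}\n"
        "Answer with only the number of the clearest, most convincing explanation."
    )
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper may use a different LLM
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic judging across repeated runs
    )
    choice = int(reply.choices[0].message.content.strip()) - 1
    return list(candidates.keys())[choice]


# Example with two hypothetical explanations for the same prediction.
best = pick_best_explanation(
    "preparing breakfast",
    {
        "method_A": "The fridge and a kitchen cabinet were opened around 7:30 AM.",
        "method_B": "Sensor 12 and sensor 47 fired with weights 0.83 and 0.61.",
    },
)
print(best)
```

In a full evaluation one would repeat this comparison over many activity instances and aggregate the LLM's preferences per XAI method, then check the resulting ranking against the ranking obtained from user surveys.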
CCS Concepts: Human-centered computing → Empirical studies in ubiquitous and mobile computing; HCI design and evaluation methods
Academic field: INFO-01/A - Computer Science
Oct 2024
Book Part (author)
Files in this record:
File: 3675094.3679000.pdf
Access: open access
Type: Publisher's version/PDF
Size: 1.78 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1105811
Citations
  • PubMed Central: ND
  • Scopus: 0
  • Web of Science: 0