ContextGPT: Infusing LLMs Knowledge into Neuro-Symbolic Activity Recognition Models / L. Arrotta, C. Bettini, G. Civitarese, M. Fiori. In: 2024 IEEE International Conference on Smart Computing (SMARTCOMP), held in Osaka, 29 June - 2 July 2024. IEEE, 2024, pp. 55-62. ISBN 979-8-3503-4994-8. DOI: 10.1109/smartcomp61445.2024.00029.
ContextGPT: Infusing LLMs Knowledge into Neuro-Symbolic Activity Recognition Models
L. Arrotta; C. Bettini; G. Civitarese; M. Fiori
2024
Abstract
Context-aware Human Activity Recognition (HAR) is an active research area in mobile computing, and the most effective solutions in the literature are based on supervised deep learning models. However, the real-world deployment of these systems is limited by the scarcity of the labeled data required for training. Neuro-Symbolic AI (NeSy) provides a promising research direction to mitigate this issue by infusing common-sense knowledge about human activities, and the contexts in which they can be performed, into HAR deep learning classifiers. Existing NeSy methods for context-aware HAR rely on knowledge encoded in logic-based models (e.g., ontologies) whose design, implementation, and maintenance to capture new activities and contexts require significant human engineering effort, technical knowledge, and domain expertise. Recent work shows that pre-trained Large Language Models (LLMs) effectively encode common-sense knowledge about human activities. In this work, we propose ContextGPT: a novel prompt engineering approach to retrieve from LLMs common-sense knowledge about the relationship between human activities and the contexts in which they are performed. Unlike ontologies, ContextGPT requires limited human effort and expertise, although it shares similar privacy concerns when the reasoning is performed in the cloud. An extensive evaluation on two public datasets shows that a NeSy model obtained by infusing common-sense knowledge from ContextGPT is effective in data scarcity scenarios, reaching recognition rates similar to (and sometimes better than) those of logic-based approaches with a fraction of the effort.
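To make the idea concrete, below is a minimal, hypothetical sketch of the two steps the abstract describes: building a natural-language prompt that asks an LLM which candidate activities are consistent with the current high-level context, and using the returned subset to re-weight a classifier's output probabilities. This is not the authors' implementation; the function names (`build_consistency_prompt`, `infuse_knowledge`), the example context fields, and the stubbed LLM reply are illustrative assumptions.

```python
# Hypothetical sketch: retrieving activity-context consistency from an LLM
# and softly infusing it into a HAR classifier's predictions.
from typing import Callable, Dict, List

ACTIVITIES = ["walking", "running", "cycling", "sitting", "driving"]

def build_consistency_prompt(context: Dict[str, str], activities: List[str]) -> str:
    """Compose a natural-language query describing the current context and
    asking the LLM which candidate activities are consistent with it."""
    context_desc = ", ".join(f"{k}: {v}" for k, v in context.items())
    candidates = ", ".join(activities)
    return (
        "A user is in the following context: " + context_desc + ". "
        "From the list [" + candidates + "], which activities could the user "
        "plausibly be performing? Answer with a comma-separated subset of the list."
    )

def infuse_knowledge(probs: Dict[str, float],
                     consistent: List[str],
                     weight: float = 0.25) -> Dict[str, float]:
    """Down-weight activities the LLM deems inconsistent with the context,
    then renormalize. weight < 1 softens, rather than hard-masks, the prior."""
    adjusted = {a: p * (1.0 if a in consistent else weight) for a, p in probs.items()}
    total = sum(adjusted.values()) or 1.0
    return {a: p / total for a, p in adjusted.items()}

if __name__ == "__main__":
    context = {"location": "indoor gym", "speed": "stationary", "time": "evening"}
    prompt = build_consistency_prompt(context, ACTIVITIES)
    print(prompt)

    # Stub in place of an actual LLM call; a real system would send `prompt`
    # to a chat model and parse its reply into a list of activity names.
    llm_reply: Callable[[str], List[str]] = lambda _p: ["sitting", "cycling"]
    consistent = llm_reply(prompt)

    classifier_probs = {"walking": 0.30, "running": 0.25, "cycling": 0.20,
                        "sitting": 0.15, "driving": 0.10}
    print(infuse_knowledge(classifier_probs, consistent))
```

In this sketch a soft weight, rather than a hard mask, keeps activities the LLM considers unlikely still reachable, hedging against imperfect or incomplete common-sense knowledge.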
| File | Type | Access | Size | Format |
|---|---|---|---|---|
| 24-smartcomp.pdf | Pre-print (manuscript submitted to the publisher) | Restricted (copy available on request) | 819.06 kB | Adobe PDF |