A Discussion on Open Issues Regarding Human Value Detection in Arguments / A. Ferrara, S. Picascia, E. Rocchetti (CEUR Workshop Proceedings). - In: SEBD 2023: 31st Symposium of Advanced Database Systems / [edited by] D. Calvanese, C. Diamantini, G. Faggioli, N. Ferro, S. Marchesin, G. Silvello, L. Tanca. - [s.l.]: CEUR-WS, 2023. - pp. 120-132. Paper presented at the 31st Symposium of Advanced Database Systems, SEBD 2023, held in Galzignano Terme in 2023.
A Discussion on Open Issues Regarding Human Value Detection in Arguments
A. Ferrara; S. Picascia; E. Rocchetti
2023
Abstract
Human value detection is the task of extracting human values from textual data. Given the complexity of this problem, the Semantic Evaluation (SemEval) 2023 workshop dedicated a shared task, namely Task 4, to collecting contributions and ideas on how to solve human value detection in arguments. The shared task was organized as a challenge involving multiple teams, each of which submitted an original solution. In this discussion paper, we present our team's submission, reporting the system architecture employed and its performance. Through our participation in SemEval 2023 Task 4, we observed that none of the submitted solutions achieves satisfactory performance; hence, we argue that this task can still be considered an open issue. We therefore share the difficulties we experienced while trying to extract human values from arguments, and we provide an in-depth discussion of the types of error systems can make in this setting.
File: SEBD2023___SuperASKE.pdf (open access, Publisher's version/PDF, Adobe PDF, 2.72 MB)