
Who Should Do It? Automatic Identification of Responsible Stakeholder in Writings During Training / M. Ruskov (Proceedings IEEE International Conference on Advanced Learning Technologies). - In: 2023 IEEE International Conference on Advanced Learning Technologies (ICALT). - [s.l.]: IEEE, 2023. - ISBN 9798350300550. - pp. 344-346. (Paper presented at the 23rd IEEE International Conference on Advanced Learning Technologies, ICALT 2023, held in Orem in 2023 [10.1109/ICALT58122.2023.00106].)

Who Should Do It? Automatic Identification of Responsible Stakeholder in Writings During Training

M. Ruskov
2023

Abstract

In online Problem-Based Learning (PBL), being able to provide immediate feedback to learners is invaluable, yet difficult to achieve. We examine how well an off-the-shelf Natural Language Processing (NLP) framework can detect the absence of an identified responsible stakeholder in ideas generated during security training. We apply Part-of-Speech Tagging and Dependency Parsing to contextualised written learner contributions collected from a PBL environment and compare the results to an assessment performed by experts. Using grammatical analysis, we aim to detect the absence of an identified responsible stakeholder in the collected contributions (n = 1174) from two security domains. Four heuristics are compared, with the best achieving a precision of PPV = 0.929, sufficient to provide immediate feedback to learners. Our results suggest that, for the purposes of scaffolding open-ended PBL exercises, off-the-shelf NLP frameworks can achieve good performance on responsible stakeholder identification.
dependency parser; formative feedback; part-of-speech tagger; problem-based learning; security training
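
The abstract reports only that an off-the-shelf NLP framework was used with Part-of-Speech Tagging and Dependency Parsing. As an illustration of the kind of grammatical check described, the sketch below uses spaCy (an assumption, not necessarily the framework used in the paper) to flag contributions that never name an actor as grammatical subject or as the agent of a passive construction. The function name and the specific heuristic are hypothetical, one possible reading of the approach rather than the authors' actual implementation.

```python
# Minimal illustrative sketch, assuming spaCy and its en_core_web_sm model;
# the paper only says "off-the-shelf NLP framework", so this choice and the
# heuristic below are assumptions, not the published method.
import spacy

nlp = spacy.load("en_core_web_sm")

def names_responsible_stakeholder(text: str) -> bool:
    """Return True if some sentence names an actor, using POS tags and the
    dependency parse: a noun or proper-noun active-voice subject, or an
    explicit agent in a passive construction ("... revoked by IT")."""
    doc = nlp(text)
    for token in doc:
        if token.dep_ == "nsubj" and token.pos_ in ("NOUN", "PROPN"):
            return True  # active voice with a named subject
        if token.dep_ == "agent":
            return True  # passive voice, but the agent is stated
    return False

# A contribution with no identified stakeholder could be flagged for feedback:
print(names_responsible_stakeholder("The administrator should revoke unused accounts."))  # True
print(names_responsible_stakeholder("Unused accounts should be revoked."))                # False
```

In this spirit, a learner idea such as "Unused accounts should be revoked" would trigger a prompt asking who is responsible, while "The administrator should revoke unused accounts" would not.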
Academic field: INF/01 - Computer Science
Book Part (author)
Files in this record:
  • ICALT_2023_Who_Should_Do_It_.pdf: pre-print (manuscript submitted to the publisher), open access, 215.6 kB, Adobe PDF
  • Who_Should_Do_It_Automatic_Identification_of_Responsible_Stakeholder_in_Writings_During_Training.pdf: publisher's version/PDF, restricted access, 164.68 kB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1025514
Citations
  • Scopus: 0