
L. Baresi, M. Camilli, T. Dolci, G. Quattrocchi. A Conceptual Framework for Quality Assurance of LLM-based Socio-critical Systems. In: ASE '24: Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, Sacramento, 2024, pp. 2314-2318. ISBN 979-8-4007-1248-7. DOI: 10.1145/3691620.3695306.

A Conceptual Framework for Quality Assurance of LLM-based Socio-critical Systems

M. Camilli; G. Quattrocchi
2024

Abstract

Recent breakthroughs in Artificial Intelligence (AI) blur the boundaries between digital, physical, and social spaces, a trend expected to continue in the foreseeable future. Traditionally, software engineering has prioritized technical aspects, focusing on functional correctness and reliability while often neglecting broader societal implications. With the rise of software agents enabled by Large Language Models (LLMs) and capable of emulating human intelligence and perception, there is growing recognition of the need to address socio-critical issues. Unlike technical challenges, these issues cannot be resolved through traditional, deterministic approaches due to their subjective nature and their dependence on evolving factors such as culture and demographics. This paper dives into this problem and advocates the need to revise existing engineering principles and methodologies. We propose a conceptual framework for quality assurance in which AI is not only the driver of socio-critical systems but also a fundamental tool in their engineering process. This framework encapsulates pre-production and runtime workflows in which LLM-based agents, so-called artificial doppelgängers, continuously assess and refine socio-critical systems, ensuring their alignment with established societal standards.
Software and its engineering → Extra-functional properties; Software verification and validation; Computing methodologies → Artificial intelligence
Sector IINF-05/A - Information Processing Systems
Sector INFO-01/A - Computer Science
Book Part (author)
Files in this record:
3691620.3695306.pdf (open access)
Type: Publisher's version/PDF
License: Creative Commons
Size: 600.22 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1227077
Citations
  • PMC: ND
  • Scopus: 4
  • Web of Science: 2
  • OpenAlex: ND