Artificial Intelligence in the Medical Context: Who is the Agent in Charge? / E.M. Palmerini, C. Lucchiari - In: Multidisciplinarity and Interdisciplinarity in Health / [edited by] N. Rezaei. - [s.l.] : Springer, 2022. - ISBN 978-3-030-96813-7. - pp. 545-565 [10.1007/978-3-030-96814-4_24]

Artificial Intelligence in the Medical Context: Who is the Agent in Charge?

C. Lucchiari
2022

Abstract

The application of predictive algorithms and deep learning artificial intelligence (AI) will transform the medical field. In recent years, medical AI has focused on diagnosis and decision-making support. Both are complex tasks, and physicians often simplify them through automatic mental processes, at the risk of falling prey to biases. AI could be a solution to this problem, but it still faces several issues. This chapter proposes an ontological and ethical framework to tackle one of the difficulties presented by AI: the responsibility problem. We focus on artificial neural networks (ANNs) because of their widespread use. ANNs are opaque to external scrutiny, and this opacity leads to uncertainty in attributing responsibility in case of failure: nobody seems sufficiently in control to be held accountable for the AI. We argue that treating AI as a mere tool is not an option because of the lack of local control. After accepting a definition of an agent that can include AI, we attribute responsibility to the human decision-maker rather than to the manufacturing process. Simple solutions can be devised to distribute responsibility between ANNs and human decision-makers, and we list a few.
Keywords: Cognitive science; Artificial intelligence; Ethics
Disciplinary sector: M-PSI/01 - General Psychology
Book Part (author)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/2434/935527