
INTELLIGENZA ARTIFICIALE E RESPONSABILITÀ PENALE PER DANNO DA PRODOTTO / R. Bertolesi ; tutor: G.L. Gatta ; coordinator: C.R. Luzzati. DIPARTIMENTO DI SCIENZE GIURIDICHE "CESARE BECCARIA", 2020 Feb 12. 32nd cycle, Academic Year 2019. [10.13130/bertolesi-riccardo_phd2020-02-12].

INTELLIGENZA ARTIFICIALE E RESPONSABILITÀ PENALE PER DANNO DA PRODOTTO

R. Bertolesi
2020

Abstract

This thesis focuses on the relationship between the development of autonomous artificial intelligence systems ("AI systems") and criminal law. In particular, it addresses the following question: do the laws governing product liability adequately address the criminal liability issues raised by AI systems? Non-human agents, including robots and software agents, and especially those using advanced artificial intelligence, are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential causal impacts on the world that are unmitigated by human agents, and the diminishing ability of human agents to understand, predict or control how they operate. From a legal perspective, one of the main problems facing this technological development is the uncertain status of liability for the effects caused by artificial agents. In some cases, AI systems are capable of operating not only without the supervision of a human agent, but also without an entirely defined operational modus. Indeed, robots can be programmed to act independently, to "learn" from their environment and to experiment with new strategies that cannot be predicted in advance. The problem has legal as well as philosophical implications. On 16 February 2017, the European Parliament adopted a Resolution with recommendations to the Commission on Civil Law Rules on Robotics, urging reflection, in relation to autonomous robots, on "whether the ordinary rules on liability are sufficient or whether it calls for new principles and rules to provide clarity on the legal liability of various actors concerning responsibility for the acts and omissions of robots". Civil law scholarship has already begun to address this problem, but the discussion is still in its early stages in the field of criminal law. If an intelligent agent causes harm to individuals or property (e.g. killing or injuring a person, or destroying an asset), who should be held criminally responsible?
The manufacturer, the owner or the AI system itself? The thesis addresses the issue from the perspective of the criminal liability of the manufacturer of artificial intelligence systems, by developing the traditional category of "allowed risk". The discussion is accompanied by examples drawn from the recent spread of self-driving vehicles, to help the reader better understand the practical implications of AI systems.
12 Feb 2020
Disciplinary sector IUS/17 - Criminal Law (Diritto Penale)
GATTA, GIAN LUIGI
LUZZATI, CLAUDIO RAFFAELE
Doctoral Thesis
Files in this record:
phd_unimi_R11696.pdf — Complete doctoral thesis, Adobe PDF, 3.94 MB. Open Access from 10/08/2021.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/711618