A logical approach to algorithmic opacity / M. Petrolo, E. Kubyshkina, G. Primiero (CEUR WORKSHOP PROCEEDINGS). - In: BEWARE 2023: Bias, Ethical AI, Explainability and the role of Logic and Logic Programming / edited by G. Boella, F.A. D'Asaro, A. Dyoub, L. Gorrieri, F.A. Lisi, C. Manganini, G. Primiero. - [s.l.]: CEUR-WS, 2024. - pp. 89-95. Presented at the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming (BEWARE 2023), co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence, held in Rome in 2023.

A logical approach to algorithmic opacity

E. Kubyshkina; G. Primiero
2024

Abstract

In [1], we introduced a novel definition for the epistemic opacity of AI systems. Building on this, we proposed a framework for reasoning about an agent’s epistemic attitudes toward a possibly opaque algorithm, investigating the necessary conditions for achieving epistemic transparency. Unfortunately, this logical framework faced several limitations, primarily due to its overly idealized nature and the absence of a formal representation of the inner structure of AI systems. In the present work, we address these limitations by providing a more in-depth analysis of classifiers using first-order evidence logic. This step significantly enhances the applicability of our definitions of epistemic opacity and transparency to machine learning systems.
Transparent AI; epistemic opacity; epistemic logic; evidence models; neighborhood semantics
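
As a rough illustration of the evidence models and neighborhood semantics named in the keywords (a minimal sketch in the spirit of standard propositional evidence models, not a reconstruction of the paper's first-order system; the class and method names are hypothetical), an agent's evidence can be represented as a family of sets of possible worlds, and the agent "has evidence for" a proposition when some evidence set is contained in that proposition's truth set:

from itertools import combinations

class EvidenceModel:
    """Toy evidence model: worlds W, a family of evidence sets E, and a valuation."""

    def __init__(self, worlds, evidence, valuation):
        self.worlds = set(worlds)                    # W: possible worlds
        self.evidence = [set(e) for e in evidence]   # E: evidence sets, subsets of W
        self.valuation = valuation                   # atomic proposition -> worlds where it holds

    def truth_set(self, prop):
        # Worlds at which the atomic proposition holds.
        return self.valuation.get(prop, set())

    def has_evidence_for(self, prop):
        # "Having evidence for prop": some single evidence set lies inside prop's truth set.
        target = self.truth_set(prop)
        return any(e and e <= target for e in self.evidence)

    def has_combined_evidence_for(self, prop):
        # Stronger reading: some consistent (non-empty) intersection of evidence sets supports prop.
        target = self.truth_set(prop)
        for k in range(1, len(self.evidence) + 1):
            for combo in combinations(self.evidence, k):
                joint = set.intersection(*combo)
                if joint and joint <= target:
                    return True
        return False

# Toy scenario: three ways an opaque classifier might behave on an input x;
# p stands for "the classifier labels x as positive".
model = EvidenceModel(
    worlds={"w1", "w2", "w3"},
    evidence=[{"w1", "w2"}, {"w2", "w3"}],
    valuation={"p": {"w1", "w2"}},
)
print(model.has_evidence_for("p"))           # True: the evidence set {w1, w2} supports p
print(model.has_combined_evidence_for("p"))  # True

In this toy run both checks succeed because the evidence set {w1, w2} is contained in the truth set of p.
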
Academic discipline M-FIL/02 - Logic and Philosophy of Science
2024
AIxIA
https://ceur-ws.org/Vol-3615/short4.pdf
Book Part (author)
Files in this product:
BEWARE-23 paper.pdf
Access: open access
Type: Publisher's version/PDF
Size: 230.7 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1024396