A logical approach to algorithmic opacity / M. Petrolo, E. Kubyshkina, G. Primiero (CEUR WORKSHOP PROCEEDINGS). - In: BEWARE 2023: Bias, Ethical AI, Explainability and the role of Logic and Logic Programming / edited by G. Boella, F.A. D'Asaro, A. Dyoub, L. Gorrieri, F.A. Lisi, C. Manganini, G. Primiero. - [s.l.]: CEUR-WS, 2024. - pp. 89-95. (2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence, held in Rome in 2023.)
A logical approach to algorithmic opacity
E. Kubyshkina;G. Primiero
2024
Abstract
In [1], we introduced a novel definition of the epistemic opacity of AI systems. Building on this, we proposed a framework for reasoning about an agent's epistemic attitudes toward a possibly opaque algorithm, investigating the necessary conditions for achieving epistemic transparency. Unfortunately, this logical framework faced several limitations, primarily due to its overly idealized nature and the absence of a formal representation of the inner structure of AI systems. In the present work, we address these limitations by providing a more in-depth analysis of classifiers using first-order evidence logic. This step significantly enhances the applicability of our definitions of epistemic opacity and transparency to machine learning systems.

File | Access | Type | Size | Format
---|---|---|---|---
BEWARE-23 paper.pdf | open access | Publisher's version/PDF | 230.7 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.