Data Quality Dimensions for Fair AI / C. Quaresmini, G. Primiero (CEUR Workshop Proceedings). - In: AEQUITAS 2024: Fairness and Bias in AI / edited by R. Calegari, V. Dignum, B. O'Sullivan. - [s.l.]: CEUR, Nov 2024. - pp. 1-16. 2nd Workshop on Fairness and Bias in AI, co-located with the 27th European Conference on Artificial Intelligence (ECAI 2024), held in Santiago de Compostela in 2024.

Data Quality Dimensions for Fair AI

G. Primiero
2024

Abstract

Artificial Intelligence (AI) systems are not intrinsically neutral, and biases trickle into any type of technological tool. In particular, when dealing with people, the impact of technical errors in AI algorithms originating from mislabeled data is undeniable. As they feed wrong and discriminatory classifications, these systems are not systematically guarded against bias. In this article we consider the problem of bias in AI systems from the point of view of data quality dimensions. We highlight the limitations of bias mitigation tools whose model construction is based on an accuracy strategy, illustrating potential improvements of a specific tool on gender classification errors occurring in two typically difficult contexts: the classification of non-binary individuals, for which the label set becomes incomplete with respect to the dataset; and the classification of transgender individuals, for which the dataset becomes inconsistent with respect to the label set. Using formal methods for reasoning about the behavior of the classification system in the presence of a changing world, we propose to reconsider the fairness of the classification task in terms of completeness, consistency, timeliness and reliability, and offer some theoretical results.
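The two failure modes named in the abstract can be made concrete with a minimal sketch (not from the paper; the function names, label strings, and record format are hypothetical): completeness fails when the dataset contains ground-truth labels the classifier's label set cannot express, and consistency fails when the dataset holds conflicting labels for the same individual.

```python
def incomplete_labels(dataset_labels, label_set):
    """Completeness check: return the ground-truth labels present in the
    dataset that the classifier's label set cannot express."""
    return set(dataset_labels) - set(label_set)

def inconsistent_individuals(records):
    """Consistency check: return ids of individuals carrying conflicting
    labels in the dataset (e.g. records predating and following a gender
    transition). `records` is a list of (individual_id, label) pairs."""
    seen = {}
    for ind_id, label in records:
        seen.setdefault(ind_id, set()).add(label)
    return {i for i, labels in seen.items() if len(labels) > 1}

# Non-binary case: a binary label set {"F", "M"} is incomplete
# with respect to a dataset that also contains "NB".
print(incomplete_labels(["F", "M", "NB"], ["F", "M"]))        # {'NB'}

# Transgender case: the dataset holds conflicting records for
# individual 1, so it is inconsistent with a fixed label set.
print(inconsistent_individuals([(1, "M"), (1, "F"), (2, "F")]))  # {1}
```

In the paper's terms, the first check flags an incompleteness of the label set with respect to the dataset, while the second flags an inconsistency of the dataset with respect to the label set; a timeliness dimension would additionally require ordering the conflicting records and preferring the most recent one.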
No
English
Bias mitigation; Fairness; Information Quality; Mislabeling; Timeliness
Settore PHIL-02/A - Logic and Philosophy of Science
Conference contribution
Anonymous expert reviewers
Scientific publication
   BIAS, RISK, OPACITY in AI: design, verification and development of Trustworthy AI
   BRIO
   MINISTERO DELL'ISTRUZIONE E DEL MERITO
   2020SSKZ7R_001

   Simulation of Probabilistic Systems for the Age of the Digital Twin
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA
   20223E8Y4X_001
AEQUITAS 2024 : Fairness and Bias in AI
R. Calegari, V. Dignum, B. O'Sullivan
CEUR
Nov 2024
1
16
16
3808
Internationally distributed volume
2nd Workshop on Fairness and Bias in AI co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)
Santiago de Compostela
2024
https://ceur-ws.org/Vol-3808/paper12.pdf
manual
I agree
C. Quaresmini, G. Primiero
Book Part (author)
open
273
info:eu-repo/semantics/bookPart
2
Research products::03 - Contribution in volume
Files in this record:
paper12.pdf — open access — Type: Publisher's version/PDF — Format: Adobe PDF — Size: 397.52 kB
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1116708
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science (ISI): ND