Bias Amplification Chains in ML-based Systems with an Application to Credit Scoring / A.G. Buda, G. Coraglia, F.A. Genco, C. Manganini, G. Primiero (CEUR WORKSHOP PROCEEDINGS). - In: BEWARE 2024: Bias, Risk, Explainability, Ethical AI and the role of Logic and Logic Programming 2024 / edited by G. Coraglia, F.A. D'Asaro, A. Dyoub, F.A. Lisi, G. Primiero. - [s.l.]: CEUR-WS, 22 Dec 2024. - pp. 1-9. Presented at the 3rd BEWARE workshop (Bias, Risk, Explainability, Ethical AI and the role of Logic and Logic Programming 2024), held in Bolzano in 2024.

Bias Amplification Chains in ML-based Systems with an Application to Credit Scoring

G. Coraglia (second author); F.A. Genco; C. Manganini (penultimate author); G. Primiero (last author)
2024

Abstract

Machine Learning (ML) systems, whether predictive or generative, not only reproduce biases and stereotypes but, even more worryingly, amplify them. Strategies for bias detection and mitigation typically focus on either ex post or ex ante approaches, and are invariably limited to two-step analyses. In this paper, we introduce the notion of a Bias Amplification Chain (BAC): a series of steps at which bias may be amplified during the design, development, and deployment phases of trained models. We apply this notion to the credit scoring setting and provide a quantitative analysis through the BRIO tool.
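As an illustrative sketch only (a toy setup assuming a synthetic dataset and a simple demographic-parity gap, and not reflecting the BRIO tool's actual interface), the following Python snippet shows how bias could be measured at successive stages of a credit scoring pipeline to check whether the chain of steps amplifies it.

```python
# Hypothetical sketch: audit a demographic-parity gap at successive stages of a
# toy credit scoring pipeline (data -> model -> deployment). This does NOT use
# or reproduce the BRIO tool; names and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: a binary protected attribute and a credit-worthiness score
# with a small group-dependent base rate.
group = rng.integers(0, 2, size=n)
score = rng.normal(loc=0.5 + 0.05 * group, scale=0.2)

def approval_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rates between the two groups."""
    return abs(approved[group == 1].mean() - approved[group == 0].mean())

# Stage 1 (design/data): historical approval labels.
labels = (score > 0.5).astype(float)

# Stage 2 (development): a hypothetical model whose scores overshoot the group effect.
model_score = score + 0.05 * group + rng.normal(0.0, 0.05, size=n)
predictions = (model_score > 0.5).astype(float)

# Stage 3 (deployment): a stricter decision threshold applied on top of the model.
decisions = (model_score > 0.6).astype(float)

gaps = {
    "data": approval_gap(labels, group),
    "model": approval_gap(predictions, group),
    "deployment": approval_gap(decisions, group),
}
print(gaps)  # a gap that grows along the chain would indicate amplification
```

In a BAC-style analysis, each stage's gap would be compared with the previous one to localise where, if anywhere, amplification occurs.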
ML Fairness; Bias Amplification; Responsible AI
Sector PHIL-02/A - Logic and philosophy of science
Sector INFO-01/A - Computer science
   BIAS, RISK, OPACITY in AI: design, verification and development of Trustworthy AI
   BRIO
   MINISTERO DELL'ISTRUZIONE E DEL MERITO
   2020SSKZ7R_001

   Simulation of Probabilistic Systems for the Age of the Digital Twin
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA
   20223E8Y4X_001

   Assegnazione Dipartimenti di Eccellenza 2023-2027 - Dipartimento di FILOSOFIA "PIERO MARTINETTI"
   DECC23_007
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA
22 Dec 2024
AI*IA
https://ceur-ws.org/Vol-3881/paper9.pdf
Book Part (author)
Files in this record:
paper9.pdf - open access - Type: Publisher's version/PDF - Size: 1.13 MB - Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1127941
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science (ISI): ND
  • OpenAlex: ND