BIAS AND MISCOMPUTATION. A PHILOSOPHICAL AND FORMAL FRAMEWORK FOR MACHINE LEARNING UNFAIRNESS / C. Manganini ; tutor: G. Primiero ; coordinator: A. Sereni. Dipartimento di Filosofia Piero Martinetti, 2 April 2026. 38th cycle, Academic Year 2024/2025.

BIAS AND MISCOMPUTATION. A PHILOSOPHICAL AND FORMAL FRAMEWORK FOR MACHINE LEARNING UNFAIRNESS

C. Manganini
2026

Abstract

As Machine Learning (ML) systems are increasingly being used in critical domains, the need for a coherent framework to account for the discriminatory effects of their outcomes becomes urgent. In computer science, existing approaches tend to emphasise the role of ML design, presenting algorithmic fairness as a matter of making better design choices, especially at the level of the data used to train the model. This dissertation challenges the adequacy of such a design-centric perspective on the problem of unfair ML predictions. It does so by reconnecting it with broader and longer-standing issues within the philosophy of computational artefacts that have largely been overlooked in the current debate in the ethics of artificial intelligence. The primary contribution of this thesis is to reframe the analysis of algorithmic discrimination around the notions of use, maintenance, and repair of ML systems. Specifically, I argue that the correctness criteria of an ML system should be reformulated in terms of the contextual convergence of the diverse normative requirements of the agents who use it. Compared to other accounts of ML normativity, this reconceptualisation avoids succumbing to scepticism about implementation ascriptions, while returning a more dynamic and realistic understanding of how normative requirements circulate, feed back, conflict, and adapt across complex ML systems. Crucially, I claim that this shift has the advantage of allowing for a richer understanding of algorithmic fairness, viewing it as a plurality of data repair practices, rather than a static value embodied by certain ML designs. Two main formal contributions follow from this analysis: the introduction of validity criteria for ML predictions and the development of a novel logical framework to reason about the impact of errors in the input data on the fairness of algorithmic outcomes.
2 April 2026
Disciplinary field: PHIL-02/A - Logic and Philosophy of Science
AI ethics; philosophy of technology; technical functions; machine learning; algorithmic fairness
PRIMIERO, GIUSEPPE
Doctoral Thesis
File in this record:
phd_unimi_R13840.pdf
Description: Doctoral thesis
Type: Post-print / accepted manuscript (publisher-accepted version)
Access: open access
License: Creative Commons
Size: 12.19 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1232464