Modeling unknowns: A vision for uncertainty-aware machine learning in healthcare / A. Campagner, E.M. Biganzoli, C. Balsano, C. Cereda, F. Cabitza. - In: INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS. - ISSN 1386-5056. - 203:(2025), pp. 106014.1-106014.6. [10.1016/j.ijmedinf.2025.106014]

Modeling unknowns: A vision for uncertainty-aware machine learning in healthcare

E.M. Biganzoli; C. Cereda
Author position: penultimate
2025

Abstract

The integration of machine learning (ML) into healthcare is accelerating, driven by the proliferation of biomedical data and the promise of data-driven clinical support. A key challenge in this context is managing the pervasive uncertainty inherent in medical reasoning and decision-making. Despite its recognized importance, uncertainty is often underrepresented in the design and evaluation of clinical AI systems. Here we report an editorial overview of a special issue dedicated to uncertainty modeling in medical AI, which gathers theoretical, methodological, and practical contributions addressing this critical gap. Across these works, authors reveal that fewer than 4% of studies address uncertainty explicitly, and propose alternative design principles—such as optimizing for clinical net benefit or embedding explainability with confidence estimates. Notable contributions include the RelAI system for real-time prediction reliability, empirical findings on how uncertainty communication shapes clinical interpretation, and benchmarks for out-of-distribution detection in tabular data. Furthermore, this issue highlights the use of causal reasoning and anomaly detection to enhance system robustness and accountability. Together, these studies argue that representing, communicating, and operationalizing uncertainty are essential not only for clinical safety but also for building trust in AI-driven care. This special issue thus repositions uncertainty from a limitation to a foundational asset in the responsible deployment of ML in healthcare.
Machine learning; Medical artificial intelligence; Uncertainty
Field MEDS-01/A - Medical genetics
2025
Article (author)
Files in this record:
239.pdf — Publisher's version/PDF; license: none; restricted access; 690.61 kB (Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1204315
Citations
  • PMC 2
  • Scopus 8
  • Web of Science (ISI) 7
  • OpenAlex 10