Dialogical AI for cognitive bias mitigation in medical diagnosis / L. Guiducci, C. Saulle, G.M. Dimitri, B. Valli, S. Alpini, C. Tenti, A. Rizzo. - In: APPLIED SCIENCES. - ISSN 2076-3417. - 16:2(2026 Jan 09), pp. 710.1-710.20. [10.3390/app16020710]

Dialogical AI for cognitive bias mitigation in medical diagnosis

G.M. Dimitri;
2026

Abstract

Large Language Models (LLMs) promise to enhance clinical decision-making, yet empirical studies reveal a paradox: physician performance with LLM assistance shows minimal improvement or even deterioration. This failure stems from an “acquiescence problem”: current LLMs passively confirm rather than challenge clinicians’ hypotheses, reinforcing cognitive biases such as anchoring and premature closure. To address these limitations, we propose a Dialogic Reasoning Framework that operationalizes Dialogical AI principles through a prototype implementation named “Diagnostic Dialogue” (DiDi). The framework casts the LLM in three user-controlled roles: the Framework Coach (guiding structured reasoning), the Socratic Guide (asking probing questions), and the Red Team Partner (presenting evidence-based alternatives). Built upon a Retrieval-Augmented Generation (RAG) architecture for factual grounding and traceability, the framework transforms LLMs from passive information providers into active reasoning partners that systematically mitigate cognitive bias. We evaluate the feasibility and qualitative impact of this framework through a pilot study of DiDi deployed at Centro Chirurgico Toscano (CCT). Through purposive sampling of complex clinical scenarios, we present comparative case studies illustrating how the dialogic approach generates the cognitive friction needed to overcome the acquiescence observed in standard LLM interactions. While rigorous clinical validation through randomized controlled trials remains necessary, this work establishes a methodological foundation for designing LLM-based clinical decision support systems that genuinely augment human clinical reasoning.
large language models; medical diagnosis; clinical decision support; cognitive bias; dialogic reasoning; retrieval-augmented generation; critical thinking
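The abstract describes three user-controlled LLM roles grounded by retrieved evidence. A minimal sketch of how such a design could be wired, assuming a chat-style message API; this is not the authors' implementation, and all role names, prompt wording, and function names below are illustrative assumptions:

```python
# Illustrative sketch (not the paper's actual code): the three user-controlled
# roles from the abstract modeled as distinct system prompts, with retrieved
# guideline snippets prepended for RAG-style grounding and traceability.
# All prompt text and identifiers here are assumptions.

ROLE_PROMPTS = {
    "framework_coach": (
        "Guide the clinician through a structured diagnostic framework. "
        "Do not confirm a hypothesis until every step has been addressed."
    ),
    "socratic_guide": (
        "Probe the clinician's reasoning. Never state a diagnosis; "
        "respond only with questions."
    ),
    "red_team_partner": (
        "Argue for evidence-based alternative diagnoses that the "
        "clinician's current hypothesis may have prematurely excluded."
    ),
}

def build_dialogic_prompt(role: str, retrieved_passages: list,
                          clinician_message: str) -> list:
    """Assemble a chat-style message list for the chosen reasoning role,
    grounding the model in numbered retrieved evidence."""
    if role not in ROLE_PROMPTS:
        raise ValueError(f"unknown role: {role!r}")
    evidence = "\n".join(f"[{i + 1}] {p}"
                         for i, p in enumerate(retrieved_passages))
    system = (
        f"{ROLE_PROMPTS[role]}\n\n"
        "Cite the numbered evidence whenever you challenge or question.\n"
        f"Evidence:\n{evidence}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": clinician_message},
    ]

# Example: the clinician picks the Red Team role for a chest-pain case.
msgs = build_dialogic_prompt(
    "red_team_partner",
    ["Guideline excerpt on atypical myocardial infarction presentation."],
    "I think this is musculoskeletal chest pain.",
)
```

Keeping the role a user-controlled parameter, rather than a fixed persona, mirrors the paper's point that the clinician decides when to invite challenge rather than passive confirmation.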
Sector IINF-05/A - Information processing systems
Sector INFO-01/A - Computer science
9 Jan 2026
Article (author)
Files in this record:
File: applsci-16-00710-v2.pdf
Open access
Type: Publisher's version/PDF
License: Creative Commons
Size: 294.42 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1217537
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: 0