
A computational model for assessing experts’ trustworthiness / G. Primiero, D. Ceolin, F. Doneda. - In: JOURNAL OF EXPERIMENTAL & THEORETICAL ARTIFICIAL INTELLIGENCE. - ISSN 0952-813X. - (2023), pp. 1-32. [Epub ahead of print] [10.1080/0952813X.2023.2183272]

A computational model for assessing experts’ trustworthiness

G. Primiero (first author); F. Doneda (last author)
2023

Abstract

The algorithmic detection of disinformation online is currently based on two strategies: on the one hand, research focuses on automated fact-checking; on the other hand, models are being developed to assess the trustworthiness of information sources, including both empirical and theoretical research on credibility and content quality. In debates among experts, in particular, it can be hard to discern (less) reliable information, since all actors are, by definition, qualified. In these cases, trustworthiness metrics on sources are a useful proxy for establishing the truthfulness of contents. We introduce an algorithmic model for automatically generating a dynamic trustworthiness hierarchy among information sources based on several parameters, including fact-checking. The method is novel and significant in two respects in particular: first, the generated hierarchy is a helpful tool for laypeople navigating experts' debates; second, it makes it possible to identify and overcome biases arising from the intuitive rankings held by agents at the beginning of the debates. We provide an experimental analysis of our algorithmic model applied to the debate on the SARS-CoV-2 virus that took place among Italian medical specialists between 2020 and 2021.
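
The abstract describes the model only at a high level; the following is a minimal, hypothetical sketch (in Python, not the model actually defined in the paper) of how a dynamic trustworthiness hierarchy could be recomputed from fact-checking outcomes so that accumulated evidence overrides a biased initial ranking. All names and the update rule here are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's actual model): sources start with
# an intuitive prior ranking, their claims are fact-checked, and the
# trustworthiness hierarchy is recomputed dynamically as evidence accumulates.
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str
    prior: float                                 # initial intuitive trust (possibly biased)
    checks: list = field(default_factory=list)   # fact-check outcomes (True = verified)

    def score(self) -> float:
        # Blend the prior with the observed fact-checking record; the prior's
        # weight decays as evidence accumulates, letting evidence override bias.
        n = len(self.checks)
        if n == 0:
            return self.prior
        accuracy = sum(self.checks) / n
        w = 1.0 / (1.0 + n)          # prior weight shrinks with more evidence
        return w * self.prior + (1 - w) * accuracy

def hierarchy(sources):
    """Return sources ranked from most to least trustworthy."""
    return sorted(sources, key=lambda s: s.score(), reverse=True)

# Toy example: a source with a low initial reputation overtakes a highly
# reputed one once its claims are repeatedly verified.
a = Source("expert_A", prior=0.9, checks=[False, False, True])
b = Source("expert_B", prior=0.4, checks=[True, True, True, True])
print([s.name for s in hierarchy([a, b])])   # ['expert_B', 'expert_A']
```

In this toy run the hierarchy ends up driven by verified accuracy rather than by the initial reputations, which mirrors the abstract's stated goal of overcoming biases in agents' initial rankings.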
Trustworthiness ranking; expert debate; fact-checking
Settore M-FIL/02 - Logic and Philosophy of Science
   Dipartimenti di Eccellenza 2018-2022 - Dipartimento di FILOSOFIA
   BRIO
   MINISTERO DELL'ISTRUZIONE E DEL MERITO

   BIAS, RISK, OPACITY in AI: design, verification and development of Trustworthy AI
   BRIO
   MINISTERO DELL'ISTRUZIONE E DEL MERITO
   2020SSKZ7R_001
2023
1 March 2023
Article (author)
Files in this record:
File: A computational model for assessing experts trustworthiness.pdf
Type: Publisher's version/PDF
Size: 2.96 MB
Format: Adobe PDF
Access: open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/956971
Citations
  • PMC: not available
  • Scopus: 0
  • ISI: 0