
A Logic of Weighted Reasons for Explainable Inference in AI / S. Pandzic, J. Graff (COMMUNICATIONS IN COMPUTER AND INFORMATION SCIENCE). - In: Explainable Artificial Intelligence / edited by L. Longo, S. Lapuschkin, C. Seifert. - [s.l.] : Springer Science and Business Media Deutschland GmbH, 2024. - ISBN 9783031637964. - pp. 243-267. Paper presented at the 2nd World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta in 2024 [10.1007/978-3-031-63797-1_13].

A Logic of Weighted Reasons for Explainable Inference in AI

S. Pandzic
2024

Abstract

We propose two methods that integrate justification logic, defeasible reasoning, and numerical reasoning to lay the foundations for an explainable, reason-based neuro-symbolic architecture. The core idea behind the two methods is to model two different ways in which the weighing of default reasons can be formalized in justification logic. Both methods assign weights to justification terms, i.e., modal-like terms that represent reasons for propositions. The first method obtains the values of these reasons solely on the basis of the extension-based operational semantics for default justification logic. This semantics handles default reasons by extending consistent sets of reason-formula pairs as far as possible. The second method aims at a direct comparison of reasons, where potential conflicts between default reasons are resolved by pooling together all the applicable reasons for or against propositions. Instead of applying default steps selectively, in the fashion of the operational semantics, all available default reasons are applied simultaneously and interact directly with each other. We argue that the two methods show why combining justification logic, defeasible reasoning, and numerical reasoning is an intuitive and promising logical approach to explainable neuro-symbolic integration.
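The second method described above (pooling all applicable weighted reasons for or against a proposition) can be sketched informally in code. The following is a hypothetical illustration, not taken from the paper: the `Reason` class, the `pooled_verdict` function, and the specific weights are all assumptions introduced here only to make the pooling idea concrete.

```python
# Hypothetical sketch (not from the paper): resolving conflicts between
# weighted default reasons by pooling all applicable reasons for and
# against a proposition, in the spirit of the abstract's second method.
from dataclasses import dataclass

@dataclass(frozen=True)
class Reason:
    term: str         # justification term, e.g. "t1"
    proposition: str  # proposition the reason bears on, e.g. "p"
    pro: bool         # True if the reason supports the proposition
    weight: float     # numerical weight attached to the term

def pooled_verdict(reasons, proposition):
    """Pool all applicable weighted reasons for/against `proposition`.

    Returns the net weight (pro minus con); a positive value means the
    proposition is, on balance, supported.
    """
    applicable = [r for r in reasons if r.proposition == proposition]
    pro = sum(r.weight for r in applicable if r.pro)
    con = sum(r.weight for r in applicable if not r.pro)
    return pro - con

# Example: two reasons for p (weights 0.6 and 0.3) against one reason
# for not-p (weight 0.5); the net weight is about 0.4, so p is supported.
reasons = [
    Reason("t1", "p", True, 0.6),
    Reason("t2", "p", True, 0.3),
    Reason("t3", "p", False, 0.5),
]
print(pooled_verdict(reasons, "p"))
```

Note that all reasons are applied simultaneously here, rather than selectively as in the extension-based operational semantics of the first method.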
Defeasible reasoning; Justification logic; Neuro-symbolic integration; Weighted reasons
Academic field IINF-05/A - Information processing systems
2024
Book Part (author)
Files in this item:

Pandzic_Stipe_and_Joris_Graff_A logic of weighted reasons for explainable inference in AI_XAI2024.pdf
Access: restricted
Type: Post-print / accepted manuscript (version accepted by the publisher)
License: no license
Size: 526.6 kB
Format: Adobe PDF

978-3-031-63797-1_13.pdf
Access: restricted
Type: Publisher's version/PDF
License: no license
Size: 377.14 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1189781
Citations
  • PMC: n/a
  • Scopus: 2
  • Web of Science: 1
  • OpenAlex: 2