A Validation Methodology for XAI Decision Support Systems Against Relational Domain Properties
- Post by: stefano_lusetti
- 6 October 2025
The global adoption of artificial intelligence (AI) has increased dramatically in recent years, becoming commonplace in many fields. Such pervasiveness has led to changes in how AI is perceived, strengthening discussions on its societal consequences. Thus, a new class of requirements for AI-based solutions has emerged. Broadly speaking, those on "explainability" aim to provide a transparent representation of the (often opaque) reasoning method that an AI-based solution uses when prompted. This work presents a methodology for validating a class of explainable AI (XAI) models, called deterministic rule-based models, which are used for expressing an explainable approximation of classifiers based on machine learning. The validation methodology combines logical deduction with constraint-based reasoning in numerical domains, and it either succeeds or returns quantitative estimations of the invalid deviations found. This information allows us to assess the correctness of an XAI model or, in the case of deviations, to evaluate whether it can still be deemed acceptable. The validation methodology has been applied to a simulation-based study where the decision-making process copes with the spread of SARS-CoV-2 inside a railway station. The considered case study is a controlled but nontrivial example that shows the overall applicability of the methodology.
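To give an intuition of the constraint-based side of such a validation, here is a minimal sketch of checking a deterministic rule-based model against a relational property with an SMT solver. The solver (Z3), the toy rules, the feature names, and the monotonicity-style property are all illustrative assumptions and do not reproduce the paper's actual models, properties, or case study.

```python
# Minimal sketch (assumptions: Z3 as solver, toy rules and property,
# NOT the method or case study of the paper).
from z3 import Real, Solver, And, If, sat

def risk_level(occupancy, exposure_time):
    # Toy deterministic rule-based model: two numeric features
    # mapped to an ordinal risk level 0/1/2.
    return If(And(occupancy > 0.7, exposure_time > 15), 2,
              If(occupancy > 0.4, 1, 0))

# Two symbolic input points, related by the property under test.
o1, t1, o2, t2 = Real('o1'), Real('t1'), Real('o2'), Real('t2')

s = Solver()
# Feature domains (assumed bounds).
s.add(0 <= o1, o1 <= 1, 0 <= o2, o2 <= 1, 0 <= t1, 0 <= t2)
# Relational property (monotonicity): increasing both features must
# not lower the assigned risk. We search for a counterexample.
s.add(o2 >= o1, t2 >= t1)
s.add(risk_level(o2, t2) < risk_level(o1, t1))

if s.check() == sat:
    m = s.model()
    # A concrete pair of inputs witnessing the violation; its feature
    # values give a quantitative handle on the deviation.
    print("Deviation found:", m)
else:
    print("Property holds over the whole input domain.")
```

This sketch only mirrors the constraint-solving half of the idea: either the property holds over the domain, or a concrete counterexample is returned from which the size of the deviation can be estimated.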
E. De Angelis, G. De Angelis, M. Mongelli, M. Proietti: A Validation Methodology for XAI Decision Support Systems Against Relational Domain Properties. J. Softw. Evol. Process. 37(10) (2025)
Publication date: 6 October 2025
External link: http://dx.doi.org/10.1002/smr.70054