Seminar information

Location: Roma

Date: 02/02/2023, 11:30 - 12:30

Speaker: Francesca Toni

Argumentation-based Interactive Explanations

It is widely acknowledged that transparency of automated decision making is crucial for the deployability of intelligent systems, and explaining the reasons why certain outputs are computed is one way to achieve this transparency. It is not surprising, then, that explainable AI (XAI) has witnessed unprecedented growth in recent years. However, the form that explanations should take to support the vision of XAI remains unclear. In this talk I will explore two classes of explanations, which can be deemed ‘one-shot/zero-knowledge’ and ‘mechanistic’, respectively. The former focuses on the inputs contributing to the decisions given in output, and is heavily represented in the literature (including feature attribution and counterfactual explanations); the latter instead reflects the internal functioning of the automated decision-making process that is fed those inputs and computes those outputs. I will show how both classes of explanations can be supported by forms of computational argumentation, and will describe examples of argumentative XAI in several settings (e.g. scheduling and machine learning). I will argue that these argumentative explanations lend themselves to being deployed in interactions with humans, thus paving the way towards contestable AI.
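To give a concrete flavour of the computational argumentation underlying such explanations, the following is a minimal sketch (not material from the talk itself) of a Dung-style abstract argumentation framework, in which arguments attack one another and the grounded extension collects the arguments that can be defended starting from the unattacked ones. All names and the toy framework below are illustrative assumptions.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation framework.

    arguments: set of argument names
    attacks: set of (attacker, target) pairs
    """
    # Map each argument to the set of arguments attacking it.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()   # arguments in the grounded extension
    defeated = set()   # arguments attacked by an accepted argument
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted - defeated:
            # Accept a if every attacker of a is already defeated
            # (vacuously true for unattacked arguments).
            if attackers[a] <= defeated:
                accepted.add(a)
                changed = True
        for a in arguments - accepted - defeated:
            # Defeat a if some accepted argument attacks it.
            if attackers[a] & accepted:
                defeated.add(a)
                changed = True
    return accepted

# Toy framework: c attacks b, b attacks a. Argument c is unattacked, so it is
# accepted; b is then defeated, which reinstates ("defends") a.
args = {"a", "b", "c"}
atts = {("c", "b"), ("b", "a")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

Read as an explanation, the accepted set answers "why this output?" by exhibiting which arguments survive all attacks, which is the kind of mechanistic, dialogue-ready structure the abstract refers to.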