Explanations and Familiarity in XAI: How users understand predictions and make decisions using an AI support system
Citation:
Celar, Lenart, Explanations and Familiarity in XAI: How users understand predictions and make decisions using an AI support system, Trinity College Dublin. School of Psychology, 2023
Abstract:
We compared the effects of counterfactual and causal explanations for an Artificial Intelligence (AI) system's decisions in a familiar domain (alcohol and driving) and an unfamiliar one (chemical safety) in four experiments (n=731). Participants were shown information given to an AI system, the decisions it made, and an explanation for each decision; they then attempted to predict the AI's decisions (Experiments 1 and 2) or to make their own decisions (Experiments 3 and 4). The decisions the AI system made were correct (Experiments 1 and 3) or incorrect (Experiments 2 and 4). The results showed a dissociation between participants' subjective judgments that counterfactual explanations were more helpful than causal ones, and their objective accuracy in predicting the AI system's decisions, which was equal given counterfactual or causal explanations. This extends previous research by showing that the dissociation occurred not only for a familiar domain but also for an unfamiliar one, and only for an AI system that made correct decisions, not one that made incorrect decisions (Experiments 1 and 2). Importantly, the results showed the dissociation was eliminated when participants made their own decisions rather than predicting the AI system's decisions: they tended to judge counterfactual explanations more helpful than causal ones, and also made more accurate decisions given counterfactual explanations rather than causal ones; they did so for familiar and unfamiliar domains, and only for an AI system that made correct decisions, not one that made incorrect decisions (Experiments 3 and 4). Participants judged explanations more helpful, and their judgments were more accurate, in the familiar domain than the unfamiliar one, only when the AI's decisions were correct (Experiments 1 and 3). The implications for how people understand counterfactual explanations, and for their use in eXplainable AI (XAI), are discussed.
Sponsor: Google
Grant Number: Other
Description: APPROVED
Author: Celar, Lenart
Advisor: Byrne, Ruth
Publisher: Trinity College Dublin. School of Psychology. Discipline of Psychology
Type of material: Thesis
Availability: Full text available