The effectiveness of explainable AI on human factors in trust models.
Journal:
Scientific Reports
Published Date:
Jul 2, 2025
Abstract
Explainable AI has garnered significant traction in science communication research. Prior empirical studies have established that explainable AI communication can improve trust in AI, while trust in AI engineers has been argued to be an under-explored dimension of trust. In this vein, trust in AI engineers has also been found to be a factor shaping public perceptions and acceptance. Thus, a key question emerges: Can explainable AI improve trust in AI engineers? In this study, we set out to investigate the effects of explainability perception on trust in AI engineers, while accounting for trust in the AI system. More concretely, through a public opinion survey in Singapore (N = 1,002), structural equation modelling analyses revealed that perceived explainability significantly shaped all dimensions of trust in AI engineers (i.e., ability, benevolence, and integrity). Results also revealed that trust in the ability of AI engineers subsequently shaped people's attitudes toward, and intentions to use, various types of autonomous passenger drones (i.e., tourism, daily commute, cross-country, and city travel). Several serial mediation pathways linking explainability perception to use intention through trust in ability and attitude were identified. Theoretical and practical implications are discussed.