Understanding dimensions of trust in AI through quantitative cognition: Implications for human-AI collaboration.

Journal: PLOS ONE
Published Date:

Abstract

Human-AI collaborative innovation relies on effective and clearly defined role allocation, yet empirical research in this area remains limited. To address this gap, we construct a cognitive-taxonomy-based framework of trust in AI to describe and explain its interactive mechanisms in human-AI collaboration, particularly its complementary and inhibitive effects. We examine the alignment between trust in AI and different cognitive levels, identifying key drivers through which AI facilitates both lower-order and higher-order cognition. Furthermore, by analyzing the interactive effects of multidimensional trust in AI, we explore its complementary and inhibitive influences. We collected data from finance and business administration interns using surveys and the After-Action Review method and analyzed the data with a gradient descent algorithm. The findings reveal a dual effect of trust in AI on cognition: while functional and emotional trust enhance higher-order cognition, the transparency dimension of cognitive trust inhibits cognitive processes. These insights provide a theoretical foundation for understanding trust in AI in human-AI collaboration and offer practical guidance for university-industry partnerships and knowledge innovation.
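The abstract states that the survey data were analyzed using a gradient descent algorithm but gives no model details. As a minimal illustrative sketch only: the loss function, design matrix `X`, responses `y`, learning rate, and iteration count below are hypothetical placeholders (a least-squares example), not the authors' actual analysis.

```python
import numpy as np


def gradient_descent(X, y, lr=0.1, n_iter=2000):
    """Fit linear weights w minimizing mean squared error ||Xw - y||^2 / n
    via plain gradient descent. All hyperparameters are illustrative."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        # Gradient of the mean squared error with respect to w
        grad = (2.0 / n) * X.T @ (X @ w - y)
        w -= lr * grad
    return w


# Toy usage: recover known weights from noiseless synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w
w_hat = gradient_descent(X, y)
```

With a noiseless linear target and a well-conditioned design matrix, the iterates converge to the true weights; a real analysis would of course involve noise, a chosen loss, and validation.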

Authors

  • Weizheng Jiang
    School of Management, Wuhan University of Science and Technology, Wuhan, China.
  • Dongqin Li
    School of Management, Wuhan Technology and Business University, Wuhan, China.
  • Chun Liu