Czernietzki, Charlotte; Westmattelmann, Daniel; Schewe, Gerhard
Research article in digital collection (conference) | Peer reviewed

Organizations increasingly use AI-based systems to enhance decision-making quality and efficiency. To be accepted, these systems must be trusted, which is challenging given their black-box nature. This study addresses this issue by investigating transparency’s role in building trust in AI-based systems within organizational settings. To ensure generalizability, we collected quantitative data (N = 978) across two scenarios differing in their degree of process automation (automated vs. augmented). Using structural equation modeling and necessary condition analysis, we analyzed the effect of a multidimensional conceptualization of transparency on trust. Our results demonstrate that the individual transparency dimensions not only positively affect trust but are also indispensable for its formation: without specific minimum levels of these dimensions, trust in AI-based systems cannot be established. This study advances the AI adoption literature by examining the transparency-trust relationship from both sufficiency and necessity perspectives, thus guiding strategic AI implementation in organizations.
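The necessity claim in the abstract rests on necessary condition analysis (NCA), which fits a ceiling line above the scatter of condition and outcome and reads off the minimum condition level required for each outcome level. As an illustration only, the following is a minimal Python sketch of one standard NCA ceiling technique (CE-FDH, ceiling envelopment with free disposal hull) together with its effect size and a bottleneck readout. The data, variable names, and 7-point scales are synthetic assumptions for demonstration; this is not the study's instrument, data, or analysis code.

```python
import numpy as np

def ce_fdh_effect_size(x, y):
    """NCA effect size using the CE-FDH ceiling line.

    The ceiling is the non-decreasing step function through the
    upper-left frontier of the scatter (running maximum of y over
    increasing x). Effect size d = empty area above the ceiling
    divided by the scope (the bounding box of the data).
    """
    order = np.argsort(x)
    xs = np.asarray(x, dtype=float)[order]
    ys = np.asarray(y, dtype=float)[order]
    ceiling = np.maximum.accumulate(ys)        # step-function ceiling
    empty = np.sum((ys.max() - ceiling[:-1]) * np.diff(xs))
    scope = (xs[-1] - xs[0]) * (ys.max() - ys.min())
    return empty / scope

def bottleneck(x, y, y_target):
    """Minimum observed x at which the ceiling reaches y_target,
    i.e. the 'necessary level' of the condition for that outcome.
    Returns nan if y_target lies above the ceiling everywhere."""
    order = np.argsort(x)
    xs = np.asarray(x, dtype=float)[order]
    ceiling = np.maximum.accumulate(np.asarray(y, dtype=float)[order])
    idx = np.searchsorted(ceiling, y_target, side="left")
    return xs[idx] if idx < len(xs) else float("nan")

# Synthetic data with a built-in necessity pattern: trust never
# exceeds transparency, so high trust requires high transparency.
rng = np.random.default_rng(42)
transparency = rng.uniform(1.0, 7.0, 500)      # hypothetical 7-point scale
trust = np.clip(transparency * rng.uniform(0.3, 1.0, 500), 1.0, 7.0)

print(f"effect size d = {ce_fdh_effect_size(transparency, trust):.2f}")
print(f"min transparency for trust >= 6: {bottleneck(transparency, trust, 6.0):.2f}")
```

The bottleneck readout mirrors the abstract's "specific minimum levels" claim: for a desired trust level, it returns the lowest transparency level at which that trust level has been observed. The study's actual effect sizes and thresholds come from its own data and SEM/NCA toolchain, which this sketch does not reproduce.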
Czernietzki, Charlotte | Chair of Organization, Human Resource Management and Innovation; Professorship for Innovation, Strategy and Organization (Prof. Foege)
Schewe, Gerhard | Chair of Organization, Human Resource Management and Innovation
Westmattelmann, Daniel | Chair of Organization, Human Resource Management and Innovation; Professorship for Innovation, Strategy and Organization (Prof. Foege)