University of Science and Technology of China, China
As artificial intelligence (AI) systems become increasingly integrated into social and decision-making contexts, understanding how trust is established between humans and AI is essential. This study investigates whether humans and AI differ in how they form and adjust trust based on interactive experience. Using a 50-round trust game, we examined investment behaviours of human participants and ChatGPT-3.5 when paired with either high- or low-trustworthy partners framed as human or AI agents. While humans adjusted trust dynamically, investing more in high-trustworthy and less in low-trustworthy partners over time, AI failed to differentiate partner trustworthiness. Instead, AI consistently preferred human over AI partners, regardless of their behaviour. Notably, humans also demonstrated trust adaptation even when each trial involved a different partner, suggesting generalised social learning beyond specific partner history. In contrast, AI's trust behaviour remained static and unresponsive to contextual changes or unexpected outcomes. These results highlight a fundamental divergence in how humans and AI process social information: humans engage in experience-driven trust calibration, whereas AI responses reflect static patterns that do not update with experience. This misalignment raises critical concerns about the real-world deployment of AI in trust-sensitive roles. Users may mistakenly interpret AI's apparent prosociality as adaptive, leading to misplaced confidence in its decision-making. We conclude that perceived trustworthiness in AI does not imply behavioural reliability, underscoring the need for transparent AI design and regulation in domains where trust is essential.
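To make the iterated trust-game setup concrete, the sketch below simulates a 50-round game against partners with fixed return rates. The endowment, the multiplier applied to investments, the partners' return rates, and the simple "invest more after a profitable round" update rule are all illustrative assumptions and are not taken from the study's protocol; the sketch only illustrates the kind of experience-driven trust calibration described in the abstract.

```python
import random

def simulate_trust_game(return_rate, rounds=50, endowment=10, multiplier=3, seed=0):
    """Simulate an iterated trust game against a partner with a fixed return rate.

    Each round the investor invests some amount (up to `endowment`), the
    invested amount is multiplied by `multiplier`, and the partner returns a
    fraction (`return_rate`) of that multiplied amount. The investor then
    adjusts the next investment up or down depending on whether the return
    covered the investment -- a crude stand-in for trust calibration.
    All parameter values here are illustrative assumptions.
    """
    rng = random.Random(seed)
    investment = endowment / 2  # start from a neutral investment level
    history = []
    for _ in range(rounds):
        returned = return_rate * multiplier * investment
        history.append((investment, returned))
        # Simple reinforcement rule: raise the stake after a profitable round,
        # lower it after a losing one, bounded by [0, endowment].
        if returned > investment:
            investment = min(endowment, investment + rng.uniform(0.5, 1.5))
        else:
            investment = max(0.0, investment - rng.uniform(0.5, 1.5))
    return history

# A "high-trustworthy" partner returns half of the tripled investment;
# a "low-trustworthy" partner returns only a tenth.
high = simulate_trust_game(return_rate=0.5)
low = simulate_trust_game(return_rate=0.1)
print("final investment vs high-trustworthy partner:", round(high[-1][0], 2))
print("final investment vs low-trustworthy partner:", round(low[-1][0], 2))
```

Run over 50 rounds, this adaptive investor converges toward high investments with the cooperative partner and toward low investments with the uncooperative one, mirroring the human pattern the study reports; a static policy (as the abstract attributes to the AI) would invest the same amount regardless of the partner's behaviour.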
Yuzhan Hang is a researcher at the University of Science and Technology of China, focusing on scientific innovation and contributing to advancements in his field through academic research and collaboration.