ABSTRACT In the era of digital intelligence, companies capture ever more data to build user profiles and win consumers over, and AI is a powerful tool for this process. However, such data capture has also raised growing concerns about user privacy and data rights. Recent studies have begun to examine how users react to AI data capture, yet exploration in this area remains limited. This research contributes to the existing literature by investigating the underlying psychological processes and influencing factors that shape how users respond to AI data capture. Across four studies, we find that data capture strategies have a significant negative effect on users' intention to use AI systems, and that a covert capture strategy lowers usage intention more than an overt one. This effect is mediated by psychological ownership, specifically by the perceived-control dimension of psychological ownership rather than by perceived possession. In addition, prevention‐oriented users are more likely to feel deprived of their right to be informed and of control over their data. However, AI explainability can increase users' psychological ownership and usage intention by alleviating their psychological defenses in the process. These findings help advance the resolution of AI data governance issues under digital intelligence empowerment and provide a reference for enterprises seeking to adopt reasonable AI data capture strategies.