Abstract
In today's academic world, some academics, researchers, and students have begun employing Artificial Intelligence (AI) language models such as ChatGPT to complete a variety of academic tasks, including generating ideas, summarising literature, and writing essays. However, the use of ChatGPT in academic settings remains controversial, raising serious concerns about academic integrity and AI-assisted cheating, while scholarly communities still lack clear principles for using such innovations in academia. Accordingly, this study aims to understand the motivations driving academics and researchers to use ChatGPT in their work, and specifically the role of academic integrity in shaping adoption behaviour. Based on 702 responses collected from users of ResearchGate and Academia.edu, we found that ChatGPT usage is positively shaped by time-saving features, electronic word of mouth, academic self-efficacy, academic self-esteem, and perceived stress. In contrast, peer influence and academic integrity had a negative effect on usage. Intriguingly, the interactions of time-saving features, academic self-esteem, and perceived stress with ChatGPT usage, as moderated by academic integrity, were found to be significantly positive. Therefore, we suggest that stakeholders, including academic institutions, publishers, and AI language model developers, work together to establish clear guidelines for the ethical use of AI chatbots in academic work and research.