Politeness
Pragmatics
Linguistics
Computer science
Psychology
Sociology
Human–computer interaction
Cognitive science
Philosophy
Identifier
DOI:10.1080/10494820.2024.2362829
Abstract
Having evolved rapidly, ChatGPT can now generate content that is linguistically accurate and logically sound while sidestepping ethical, social, and legal concerns. This research investigates whether ChatGPT employs different pragmatic strategies when responding to questions of varying (im)politeness. In our experiment, the tool was instructed to answer 200 self-constructed questions spanning four (im)politeness levels, and the 200 responses were collected for linguistic and sentiment analysis. The triangulated data, together with representative examples, show that ChatGPT tends to give shorter and less positive answers to less polite questions, appearing less responsive when confronted with blunt or offensive inquiries. This, to some extent, resembles how human beings react when treated impolitely. A tentative explanation is that, as a large language model, ChatGPT mirrors human interaction across many scenarios and draws on prevalent human communication tendencies. Interacting with ChatGPT is therefore closer to human-society interaction than to human-machine communication in the strict sense. Our research sheds light on what we term "human-machine pragmatics", i.e. how humans can best communicate with computers for optimal informative and affective outcomes. Implications for language education are also discussed.
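The abstract does not reproduce the study's materials or analysis scripts, but the pipeline it describes (prompting the model with questions phrased at several politeness levels, then measuring response length and sentiment) could be sketched roughly as below. The model name, the example politeness phrasings, and the use of NLTK's VADER sentiment scorer are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: the question set, model name, politeness phrasings,
# and sentiment scorer are assumptions, not the authors' actual materials.
from openai import OpenAI
from nltk.sentiment import SentimentIntensityAnalyzer
import nltk

nltk.download("vader_lexicon", quiet=True)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
sia = SentimentIntensityAnalyzer()

# Hypothetical variants of one underlying question at four politeness levels.
POLITENESS_LEVELS = {
    "very_polite": "Could you kindly explain how photosynthesis works? Thank you!",
    "neutral":     "Explain how photosynthesis works.",
    "blunt":       "Just tell me how photosynthesis works already.",
    "offensive":   "You useless bot, explain photosynthesis if you even can.",
}

def collect_response(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send one prompt to the chat model and return the reply text."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

for level, prompt in POLITENESS_LEVELS.items():
    answer = collect_response(prompt)
    n_words = len(answer.split())                         # crude length measure
    sentiment = sia.polarity_scores(answer)["compound"]   # VADER compound score, -1..+1
    print(f"{level:>12}: {n_words} words, sentiment {sentiment:+.3f}")
```

In a full replication one would average these measures over many questions per politeness level and test whether length and sentiment decline as politeness decreases, which is the pattern the study reports.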