Task (project management)
Metacognition
Psychology
Elaboration
Quality (concept)
Cognitive psychology
Resolution (logic)
Originality
Cognition
Mathematics education
Computer science
Artificial intelligence
Social psychology
Creativity
Neuroscience
Economics
Management
Philosophy
Epistemology
Humanities
Authors
Marek Urban, Filip Děchtěrenko, Jiří Lukavský, Veronika Hrabalová, Filip Svacha, Cyril Brom, Kamila Urban
Identifier
DOI:10.1016/j.compedu.2024.105031
Abstract
University students often employ generative artificial intelligence tools such as ChatGPT to resolve ill-defined problem-solving tasks. However, experimental evidence about the effects of ChatGPT on complex problem-solving performance is still missing. In this preregistered experiment, the impact of ChatGPT on performance in a complex creative problem-solving task was investigated in 77 university students solving a task with ChatGPT, in comparison to 68 students solving the task without it. ChatGPT use significantly improved self-efficacy for task resolution (d = 0.65) and enhanced the quality (d = 0.69), elaboration (d = 0.61), and originality (d = 0.55) of solutions. Moreover, participants with ChatGPT assistance perceived the task as easier (d = 0.56) and as requiring less mental effort (d = 0.58). However, use of ChatGPT did not make task resolution more interesting (d = 0.08), and the impact of ChatGPT on metacognitive monitoring accuracy was unclear. Although there were no significant differences in absolute accuracy between students solving the task with and without the assistance of ChatGPT, the absence of a correlation between self-evaluation judgments and performance suggests that participants struggled to calibrate their self-evaluations when using ChatGPT. Notably, the perceived usefulness of ChatGPT appeared to inform self-evaluation judgments, resulting in higher inaccuracy. The implications for hybrid human-AI regulation (HHAIR) theory are discussed. To regulate effectively, students using AI tools should focus on valid metacognitive cues instead of the perceived ease of ChatGPT-assisted problem-solving.