Popularity
Creativity
Generative grammar
Process (computing)
Quality (philosophy)
Sustainability
Critical thinking
Mathematics education
Higher-order thinking
Generative model
Psychology
Order (exchange)
Computer science
Standardized testing
Knowledge management
Pedagogy
Teaching method
Artificial intelligence
Social psychology
Business
Ecology
Philosophy
Epistemology
Finance
Cognitively guided instruction
Biology
Operating system
Authors
Adele Smolansky, Andrew Cram, Corina Raduescu, Sandris Zeivots, Elaine Huber, René F. Kizilcec
Identifier
DOI: 10.1145/3573051.3596191
Abstract
The sudden popularity and availability of generative AI tools, such as ChatGPT that can write compelling essays on any topic, code in various programming languages, and ace standardized tests across domains, raises questions about the sustainability of traditional assessment practices. To seize this opportunity for innovation in assessment practice, we conducted a survey to understand both the educators' and students' perspectives on the issue. We measure and compare attitudes of both stakeholders across various assessment scenarios, building on an established framework for examining the quality of online assessments along six dimensions. Responses from 389 students and 36 educators across two universities indicate moderate usage of generative AI, consensus for which types of assessments are most impacted, and concerns about academic integrity. Educators prefer adapted assessments that assume AI will be used and encourage critical thinking, but students' reaction is mixed, in part due to concerns about a loss of creativity. The findings show the importance of engaging educators and students in assessment reform efforts to focus on the process of learning over its outputs, higher-order thinking, and authentic applications.