Psychology
Artificial Intelligence
Mathematics Education
Computer Science
Natural Language Processing
Linguistics
Philosophy
Authors
Grigorios Tzionis, Gerasimos Antzoulatos, Periklis Papaioannou, Athanasios Mavropoulos, Ilias Gialampoukidis, Marta González Burgos, Stefanos Vrochidis, Ioannis Kompatsiaris, Maro Vlachopoulou
Source
Journal: Lecture Notes in Networks and Systems
Date: 2024-01-01
Pages: 351-362
Identifier
DOI:10.1007/978-3-031-54327-2_36
Abstract
With the increasing prevalence of AI, significant advancements have been made across various domains, such as healthcare, learning, and industry. However, challenges persist in trusting and comprehending the outcomes generated by these technologies. In the language learning domain specifically, teachers face challenges in classifying students' learning capabilities and building appropriate learning paths for them. To address these challenges, the concept of Explainable Artificial Intelligence (XAI) was adopted: a set of processes and methods that allows human users to interpret, understand, and trust the results derived from machine learning models. In this study, we adopt two well-known XAI algorithms, Permutation Feature Importance (PFI) and SHAP, in a proposed Knowledge Generation Model equipped with ML models to derive hidden knowledge. The whole framework has been applied and evaluated on the language learning classification of Spanish tertiary education students acquired from the CEDEL2 database. The analysis concludes that, in terms of explaining black-box models, the model-agnostic SHAP method is the most comprehensive and dominant for visualizing feature interactions and feature importance, and is applicable to any type of data.
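The Permutation Feature Importance (PFI) idea mentioned in the abstract can be illustrated with a minimal sketch: shuffle one feature column at a time and measure how much the model's accuracy drops. This is an illustrative toy, not the paper's implementation; the model, dataset, and helper names here are invented for demonstration.

```python
import random

random.seed(0)

# Toy dataset: the label depends only on feature 0; feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]

def model(x):
    """Hypothetical 'trained' classifier: thresholds feature 0."""
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, n_repeats=10):
    """Mean drop in accuracy when the given feature column is shuffled."""
    base = accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [x[feature] for x in X]
        random.shuffle(col)
        X_perm = [x[:feature] + [v] + x[feature + 1:]
                  for x, v in zip(X, col)]
        drops.append(base - accuracy(X_perm, y))
    return sum(drops) / n_repeats

imp0 = permutation_importance(X, y, 0)  # informative feature: large drop
imp1 = permutation_importance(X, y, 1)  # noise feature: no drop
```

A large importance for feature 0 and near-zero for feature 1 reflects what PFI reports in practice: features the model actually relies on degrade accuracy when permuted, while ignored features do not.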