Machine learning
PsycINFO
Artificial intelligence
Interpretation (philosophy)
Computer science
Relevance (law)
Set (abstract data type)
Artificial neural network
Variety (cybernetics)
Random forest
Variable (mathematics)
Variable
Psychology
Cognitive psychology
MEDLINE
Mathematics
Mathematical analysis
Programming language
Law
Political science
Authors
Mirka Henninger, Rudolf Debelak, Yannick Rothacher, Carolin Strobl
Abstract
In recent years, machine learning methods have become increasingly popular prediction methods in psychology. At the same time, psychological researchers are typically not only interested in making predictions about the dependent variable, but also in learning which predictor variables are relevant, how they influence the dependent variable, and which predictors interact with each other. However, most machine learning methods are not directly interpretable. Interpretation techniques that support researchers in describing how the machine learning technique came to its prediction may be a means to this end. We present a variety of interpretation techniques and illustrate the opportunities they provide for interpreting the results of two widely used black box machine learning methods that serve as our examples: random forests and neural networks. At the same time, we illustrate potential pitfalls and risks of misinterpretation that may occur in certain data settings. We show in which way correlated predictors impact interpretations with regard to the relevance or shape of predictor effects and in which situations interaction effects may or may not be detected. We use simulated didactic examples throughout the article, as well as an empirical data set for illustrating an approach to objectify the interpretation of visualizations. We conclude that, when critically reflected, interpretable machine learning techniques may provide useful tools when describing complex psychological relationships. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
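The abstract does not name the specific interpretation techniques the paper covers, so the following is only a minimal, hypothetical sketch: permutation importance and partial dependence plots stand in as two widely used techniques for interpreting a random forest. The variable names, the data-generating process, and the correlation structure between predictors are illustrative assumptions, not material from the paper.

```python
# Hypothetical illustration: interpreting a random forest when two
# predictors are correlated. Not code from the paper.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

rng = np.random.default_rng(0)
n = 1000

# Two correlated predictors; only x1 truly drives the outcome.
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + rng.normal(scale=0.44, size=n)  # correlated with x1, no own effect
x3 = rng.normal(size=n)                          # independent and irrelevant
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance: the performance drop when one predictor is shuffled.
# With correlated predictors, importance is often split between x1 and x2,
# which can understate the relevance of the truly influential predictor.
imp = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, mean_imp in zip(["x1", "x2", "x3"], imp.importances_mean):
    print(f"{name}: {mean_imp:.3f}")

# Partial dependence: the marginal effect of a predictor on the prediction.
# The two-way panel for (x1, x3) visualizes a potential interaction. With
# correlated predictors, the one-way curves average over unrealistic
# combinations (e.g., high x1 with low x2), which can distort effect shapes.
PartialDependenceDisplay.from_estimator(forest, X, features=[0, 1, (0, 2)])
plt.show()
```

Under these assumptions, the sketch typically assigns noticeable importance to x2 even though it has no effect of its own, mirroring the correlated-predictor pitfall the abstract warns about.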