Interpretability
Computer science
Generalization
Dilemma
Quantum
Artificial neural network
Regularization
Artificial intelligence
Machine learning
Mathematics
Geometry
Quantum mechanics
Physics
Mathematical analysis
Authors
Qian Yang, Xinbiao Wang, Yukai Du, Xingyao Wu, Dacheng Tao
Identifiers
DOI: 10.1109/tnnls.2022.3208313
Abstract
The core of quantum machine learning is to devise quantum models with better trainability and lower generalization error bounds than their classical counterparts, to ensure better reliability and interpretability. Recent studies confirmed that quantum neural networks (QNNs) have the ability to achieve this goal on specific datasets. In this regard, it is of great importance to understand whether these advantages are still preserved on real-world tasks. Through systematic numerical experiments, we empirically observe that current QNNs fail to provide any benefit over classical learning models. Concretely, our results deliver two key messages. First, QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets. Second, the trainability of QNNs is insensitive to regularization techniques, which sharply contrasts with the classical scenario. These empirical results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
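To make the object of study concrete: a QNN of the kind the abstract discusses is a parameterized quantum circuit whose gate angles are trained like neural-network weights. The following is a minimal NumPy sketch, not the paper's code; the two-qubit layout, RY-encoding scheme, and observable are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate (assumed encoding/variational gate)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT entangling gate on 2 qubits (control = qubit 0).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def qnn_expectation(x, params):
    """Encode scalar input x with RY(x) on each qubit, apply one
    variational layer RY(params) + CNOT, and return <Z (x) I>.
    This expectation value would serve as the model's prediction."""
    state = np.zeros(4)
    state[0] = 1.0                                  # start in |00>
    state = np.kron(ry(x), ry(x)) @ state           # data-encoding layer
    state = np.kron(ry(params[0]), ry(params[1])) @ state  # trainable layer
    state = CNOT @ state                            # entangler
    Z = np.diag([1.0, -1.0])
    obs = np.kron(Z, np.eye(2))                     # observable Z on qubit 0
    return float(state @ obs @ state)

# With zero input and zero parameters the circuit is the identity on |00>,
# so the expectation of Z on qubit 0 is exactly 1.
print(qnn_expectation(0.0, [0.0, 0.0]))  # → 1.0
```

In practice, `params` would be optimized by gradient descent on a loss over a dataset; the abstract's findings concern how the capacity and trainability of such circuits scale on real-world data.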