Computer science
Convolutional neural network
Initialization
Quantum
Qubit
Artificial intelligence
Theoretical computer science
Algorithm
Physics
Quantum mechanics
Programming language
Authors
Arthur Pesah, M. Cerezo, Samson Wang, Tyler Volkoff, Andrew T. Sornborger, Patrick J. Coles
Identifier
DOI: 10.1103/PhysRevX.11.041011
Abstract
Quantum neural networks (QNNs) have generated excitement around the possibility of efficiently analyzing quantum data. But this excitement has been tempered by the existence of exponentially vanishing gradients, known as barren plateau landscapes, for many QNN architectures. Recently, quantum convolutional neural networks (QCNNs) have been proposed, involving a sequence of convolutional and pooling layers that reduce the number of qubits while preserving information about relevant data features. In this work, we rigorously analyze the gradient scaling for the parameters in the QCNN architecture. We find that the variance of the gradient vanishes no faster than polynomially, implying that QCNNs do not exhibit barren plateaus. This result provides an analytical guarantee for the trainability of randomly initialized QCNNs, which highlights QCNNs as being trainable under random initialization unlike many other QNN architectures. To derive our results, we introduce a novel graph-based method to analyze expectation values over Haar-distributed unitaries, which will likely be useful in other contexts. Finally, we perform numerical simulations to verify our analytical results.

Received 12 March 2021; Revised 13 July 2021; Accepted 2 August 2021

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Physics Subject Headings (PhySH), Research Areas: Machine learning, Quantum algorithms, Quantum computation, Quantum information
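The barren-plateau diagnostic that the abstract refers to can be illustrated numerically: sample random initializations of a layered parameterized circuit, estimate the gradient of a cost function with the parameter-shift rule, and watch how the gradient variance scales with the number of qubits. The sketch below is a minimal toy in plain NumPy, assuming a generic hardware-efficient ansatz (RY rotations plus a ring of CZ entanglers) rather than the paper's QCNN; function names such as `circuit_state` and `grad_first_param` are illustrative, not the authors' code.

```python
# Toy barren-plateau check: variance of a parameter-shift gradient over
# random initializations of a small layered circuit, for growing qubit counts.
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def apply_single(state, gate, qubit, n):
    """Apply a 1-qubit gate to `qubit` of an n-qubit state vector."""
    op = np.eye(1)
    for q in range(n):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    new = state.copy()
    for i in range(2 ** n):
        if ((i >> (n - 1 - q1)) & 1) and ((i >> (n - 1 - q2)) & 1):
            new[i] *= -1
    return new

def circuit_state(params, n, layers):
    """Layered ansatz: RY rotations followed by a ring of CZ entanglers."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_single(state, ry(params[k]), q, n)
            k += 1
        for q in range(n):
            state = apply_cz(state, q, (q + 1) % n, n)
    return state

def cost(params, n, layers):
    """Expectation value of Z on qubit 0 (a simple local observable)."""
    state = circuit_state(params, n, layers)
    z0 = np.array([1 if ((i >> (n - 1)) & 1) == 0 else -1 for i in range(2 ** n)])
    return float(np.real(np.vdot(state, z0 * state)))

def grad_first_param(params, n, layers):
    """Parameter-shift gradient with respect to the first angle."""
    plus, minus = params.copy(), params.copy()
    plus[0] += np.pi / 2
    minus[0] -= np.pi / 2
    return 0.5 * (cost(plus, n, layers) - cost(minus, n, layers))

rng = np.random.default_rng(0)
layers, samples = 2, 200
for n in (2, 4, 6):
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, layers * n), n, layers)
             for _ in range(samples)]
    print(f"n={n}: Var[dC/dtheta_0] ~ {np.var(grads):.4f}")
```

For generic deep random circuits this variance is expected to shrink exponentially with the qubit count (the barren plateau), whereas the paper's result is that for the QCNN architecture it decays no faster than polynomially.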