Dropout (neural networks)
Artificial neural network
Computer science
Backpropagation
Generalization
Deep learning
Circular ensemble
Algorithm
Massively parallel
Artificial intelligence
Unitary matrix
Mathematics
Machine learning
Mathematical analysis
Parallel computing
Authors
Yong-Liang Xiao,Sikun Li,Guohai Situ,Zhisheng You
Source
Journal: Optics Letters
[The Optical Society]
Date: 2021-09-14
Volume/Issue: 46 (20): 5260-5260
Citations: 10
Abstract
Unitary learning is a backpropagation (BP) method for updating the unitary weights of fully connected deep complex-valued neural networks, satisfying the unitary prior of an active-modulation diffractive deep neural network. However, because the unitary weights in each layer form a square matrix, their learning amounts to small-sample training, which yields an almost useless network with very poor generalization capability. To alleviate this severe over-fitting problem, optical random phase dropout is formulated and designed in this Letter. The equivalence between the unitary forward pass and the diffractive network yields a synthetic mask that seamlessly compounds a computational modulation with a random sampling comb, called dropout. The zero positions of the comb, which follows a Bernoulli distribution, are filled with random phases that slightly deflect part of the transmitted optical rays at each output end, generating statistical inference networks. The gain in generalization comes from the fact that massively parallel full connections over different optical links take part in the training. The random phase comb enters unitary BP in conjugate form, which indicates the significance of optical BP.
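The abstract describes the dropout as a Bernoulli sampling comb whose zero positions are filled with random phases and which is compounded with the computational modulation into a single synthetic mask. The NumPy sketch below is one possible reading of that construction, not the authors' implementation; the names synthetic_dropout_mask, mod_phase, and drop_prob, and the choice of a uniform phase distribution, are assumptions made here purely for illustration.

import numpy as np

def synthetic_dropout_mask(mod_phase, drop_prob=0.2, rng=None):
    # Bernoulli "sampling comb": 1 keeps the computed modulation phase,
    # 0 marks a "zero position" that is filled with a random phase instead.
    rng = np.random.default_rng() if rng is None else rng
    comb = rng.random(mod_phase.shape) >= drop_prob
    rand_phase = rng.uniform(0.0, 2.0 * np.pi, mod_phase.shape)
    # Phase-only synthetic mask: modulation where kept, random phase where dropped.
    return np.exp(1j * np.where(comb, mod_phase, rand_phase))

# Toy usage: perturb a modulation phase layer and apply it to an incident field.
rng = np.random.default_rng(0)
mod_phase = rng.uniform(0.0, 2.0 * np.pi, (64, 64))   # stand-in for a trained phase layer
field = np.ones((64, 64), dtype=complex)              # toy plane-wave input
modulated = field * synthetic_dropout_mask(mod_phase, drop_prob=0.2, rng=rng)

Drawing a fresh comb on each forward pass would produce the ensemble of slightly deflected optical links that the abstract credits for the improved generalization.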