Feature (linguistics)
Capsule
Computer science
Artificial intelligence
Pattern recognition (psychology)
Geology
Paleontology
Philosophy
Linguistics
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-13
Identifiers
DOI:10.1109/tnnls.2024.3443814
Abstract
Both transformer and convolutional neural network (CNN) models require supplementary elements to acquire positional information. To address this issue, we propose a novel orthogonal capsule network (OrthogonalCaps) that preserves location information during lightweight feature learning. The proposed network simplifies complex training processes and enables end-to-end training for object detection tasks. Specifically, there is no need to solve the regression problem of positions and the classification problem of objects separately, nor is there a need to encode the positional information as an additional token, as in transformer models. We generate the next capsule layer via orthogonality-based dynamic routing, which reduces the number of parameters and preserves positional information via its voting mechanism. Moreover, we propose Capsule ReLU as an activation function to avoid the problem of gradient vanishing and to facilitate capsule normalization across various scales, thus empowering OrthogonalCaps to better adapt to objects of diverse scales. The proposed orthogonal capsule network demonstrates accuracy and run-time performance on a par with Faster R-CNN on the VOC dataset, and it outperforms the baseline approach in detecting small-scale samples. The simulation results suggest that the proposed network surpasses other capsule network models in achieving a favorable balance between parameters and accuracy. Furthermore, an ablation experiment indicates that both Capsule ReLU and orthogonality-based dynamic routing play essential roles in enhancing the classification performance. The training code and pretrained models are available at https://github.com/l1ack/OrthogonalCaps.
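The abstract does not give the exact formulation of Capsule ReLU; the minimal sketch below illustrates one plausible interpretation of a norm-based capsule activation: a ReLU-style threshold applied to each capsule vector's length, so short (inactive) capsules are zeroed while longer capsules keep their direction. The function name, the `thresh` parameter, and this specific rescaling rule are assumptions for illustration, not the paper's definition; consult the linked repository for the actual implementation.

```python
import numpy as np

def capsule_relu(capsules, thresh=0.5, eps=1e-8):
    """Hypothetical norm-based activation for capsule vectors (assumed form).

    capsules: array of shape (..., dim), one capsule vector per row.
    Each capsule is rescaled so that its new length is
    max(||v|| - thresh, 0): capsules shorter than `thresh` become
    zero vectors, longer capsules shrink by `thresh` but keep their
    direction, so the gradient stays linear (non-saturating) for
    active capsules across different scales.
    """
    norms = np.linalg.norm(capsules, axis=-1, keepdims=True)   # capsule lengths
    scale = np.maximum(norms - thresh, 0.0) / (norms + eps)    # ReLU on length
    return capsules * scale

# Toy usage: one long capsule survives, one short capsule is suppressed.
caps = np.array([[3.0, 4.0],    # length 5.0 -> rescaled to length 4.5
                 [0.1, 0.0]])   # length 0.1 -> zeroed out
activated = capsule_relu(caps, thresh=0.5)
```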