Computer science
Generalizability theory
Scalability
Artificial neural network
Ensemble learning
Artificial intelligence
Dimensionality reduction
Curse of dimensionality
Deep learning
Ensemble forecasting
Machine learning
Transformer
Algorithm
Pattern recognition (psychology)
Mathematics
Voltage
Statistics
Physics
Quantum mechanics
Database
Authors
Maria Nareklishvili, Marius Geitle
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-12
Identifier
DOI: 10.1109/tnnls.2024.3357621
Abstract
We propose deep ensemble transformers (DETs), a fast, scalable approach for dimensionality reduction problems. This method leverages the power of deep neural networks and employs cascade ensemble techniques as its fundamental feature extraction tool. To handle high-dimensional data, our approach employs a flexible number of intermediate layers sequentially. These layers progressively transform the input data into decision tree predictions. To further enhance prediction performance, the output from the final intermediate layer is fed through a feed-forward neural network architecture for final prediction. We derive an upper bound of the disparity between the generalization error and the empirical error and demonstrate that it converges to zero. This highlights the generalizability of our method to parameter estimation and feature selection problems. In our experimental evaluations, DETs outperform existing models in terms of prediction accuracy, representation learning ability, and computational time. Specifically, the method achieves over 95% accuracy in gene expression data and can be trained on average 50% faster than traditional artificial neural networks (ANNs).
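The abstract describes a cascade architecture in which each intermediate layer augments the input with decision-tree predictions before a final feed-forward network produces the prediction. The sketch below illustrates that generic cascade-ensemble idea (in the style of deep-forest methods) using scikit-learn; it is not the authors' DET implementation, and the class name, layer counts, and hyperparameters are all assumptions for illustration.

```python
# Hedged sketch of a cascade ensemble: each intermediate "layer" fits
# tree ensembles, and their predictions are concatenated with the raw
# features before being passed to the next layer; a feed-forward network
# makes the final prediction. Illustrative only, not the paper's DET code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor


class CascadeSketch:
    def __init__(self, n_layers=2, trees_per_layer=2):
        # One list of tree ensembles per intermediate layer (sizes assumed).
        self.layers = [
            [RandomForestRegressor(n_estimators=50, random_state=10 * i + j)
             for j in range(trees_per_layer)]
            for i in range(n_layers)
        ]
        # Final feed-forward head (architecture assumed).
        self.head = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                 random_state=0)

    def fit(self, X, y):
        feats = X
        for layer in self.layers:
            # Fit each ensemble on the current features, then append its
            # predictions to the raw input for the next layer.
            preds = np.column_stack(
                [est.fit(feats, y).predict(feats) for est in layer])
            feats = np.hstack([X, preds])
        self.head.fit(feats, y)
        return self

    def predict(self, X):
        feats = X
        for layer in self.layers:
            preds = np.column_stack([est.predict(feats) for est in layer])
            feats = np.hstack([X, preds])
        return self.head.predict(feats)
```

Each stage sees the original features plus the previous stage's tree predictions, so later stages can correct earlier residual error, which is the intuition behind cascade feature extraction.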