Random projection
Curse of dimensionality
Projection (relational algebra)
Artificial neural network
Intrinsic dimension
Computer science
Nonlinear dimensionality reduction
Dimension (graph theory)
Function approximation
Dimensionality reduction
Approximation algorithm
Manifold (fluid mechanics)
Artificial intelligence
Data space
Space (punctuation)
Approximation theory
Algorithm
Mathematics
Mathematical analysis
Combinatorics
Engineering
Operating system
Mechanical engineering
Identifiers
DOI:10.1109/ijcnn.2018.8489215
Abstract
Neural networks are often used to approximate functions defined over high-dimensional data spaces (e.g. text data, genomic data, multi-sensor data). Such approximation tasks are usually difficult due to the curse of dimensionality, and improved methods are needed to deal with them effectively and efficiently. Since the data generally resides on a lower-dimensional manifold, various methods have been proposed to first project the data into a lower dimension and then build the neural network approximation over this lower-dimensional projected data space. Here we follow this approach and combine it with the idea of weak learning through the use of random projections of the data. We show that random projection of the data works well and that the approximation errors are smaller than when approximating the functions in the original data space. We explore the random projections with the aim of optimizing this approach.
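The core idea in the abstract — data in a high-dimensional ambient space that actually lies on a lower-dimensional manifold can be randomly projected down before building an approximator — can be illustrated with a minimal sketch. The snippet below is not the paper's method; it is an assumed NumPy illustration using a Gaussian random projection matrix, checking that pairwise distances (the geometry a downstream approximator relies on) are roughly preserved, as the Johnson–Lindenstrauss lemma predicts. All array sizes here are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data that lives on a low-dimensional manifold:
# 200 points in a 1000-dimensional ambient space, driven by 5 latent factors.
latent = rng.normal(size=(200, 5))
lift = rng.normal(size=(5, 1000))
X = latent @ lift                      # shape (200, 1000)

# Gaussian random projection down to k dimensions; the 1/sqrt(k) scaling
# makes squared projected distances unbiased estimates of the originals.
k = 50
R = rng.normal(size=(1000, k)) / np.sqrt(k)
X_proj = X @ R                         # shape (200, 50)

# Check that a pairwise distance is approximately preserved after projection.
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(X_proj[0] - X_proj[1])
ratio = d_proj / d_orig
print(X_proj.shape, round(ratio, 2))
```

A neural network trained on `X_proj` instead of `X` then works over 50 inputs rather than 1000, which is the setup the abstract builds its approximation-error comparison on.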