Dimensionality reduction
Principal component analysis
Isometric mapping (Isomap)
Nonlinear dimensionality reduction
Multidimensional scaling
Curse of dimensionality
Metric (unit)
Diffusion map
Reduction (mathematics)
Artificial intelligence
Nonlinear system
Set (abstract data type)
Degrees of freedom (physics and chemistry)
Handwriting
Computer science
Mathematics
Pattern recognition (psychology)
Machine learning
Physics
Operations management
Geometry
Quantum mechanics
Economics
Programming language
Authors
Josh Tenenbaum, V. de Silva, John Langford
Source
Journal: Science
Publisher: American Association for the Advancement of Science (AAAS)
Date: 2000-12-22
Volume/Issue: 290 (5500): 2319-2323
Citations: 12019
Identifier
DOI: 10.1126/science.290.5500.2319
Abstract
Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs (30,000 auditory nerve fibers or 10^6 optic nerve fibers) a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.
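The abstract describes Isomap's core idea: use local metric information (distances between nearby points) to estimate geodesic distances on the data manifold, then recover a global low-dimensional embedding. A minimal sketch of that pipeline, under the standard reading of the algorithm (k-nearest-neighbor graph, shortest-path geodesics, classical MDS) and using plain numpy rather than the authors' original implementation, might look like this; the function name and parameters here are illustrative, not from the paper:

```python
import numpy as np

def isomap(X, n_neighbors=5, n_components=2):
    """Sketch of Isomap: k-NN graph -> geodesic distances -> classical MDS."""
    n = X.shape[0]
    # Pairwise Euclidean distances between all points.
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # Local metric information: keep only edges to the k nearest neighbors.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        idx = np.argsort(d[i])[1 : n_neighbors + 1]
        G[i, idx] = d[i, idx]
        G[idx, i] = d[i, idx]  # symmetrize the neighborhood graph
    # Approximate geodesic distances via Floyd-Warshall shortest paths.
    for k in range(n):
        G = np.minimum(G, G[:, k : k + 1] + G[k : k + 1, :])
    # Classical MDS on the geodesic distance matrix.
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * H @ (G ** 2) @ H                # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components] # largest eigenvalues first
    return v[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# Usage: points on a circular arc are nonlinear in 2-D but intrinsically 1-D;
# Isomap should unroll them onto a line, preserving their ordering.
t = np.linspace(0.0, np.pi / 2, 20)
X = np.c_[np.cos(t), np.sin(t)]
Y = isomap(X, n_neighbors=2, n_components=1)
```

The shortest-path step is what distinguishes Isomap from plain MDS: distances are measured along the neighborhood graph rather than straight through the ambient space, which is how the method captures nonlinear degrees of freedom.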