Persistent homology
Topological data analysis
Topology (electrical circuits)
Manifold (fluid mechanics)
Differentiable function
Topological manifold
Computer science
Space (punctuation)
Encoding (memory)
Category of topological spaces
Topological space
Construct (Python library)
Pattern recognition (psychology)
Mathematics
Artificial intelligence
Algorithm
Pure mathematics
Combinatorics
Topological tensor product
Engineering
Chemistry
Programming language
Operating system
Gene
Mechanical engineering
Biochemistry
Functional analysis
Authors
Michael Moor, Max Horn, Bastian Rieck, Karsten Borgwardt
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 52
Identifiers
DOI: 10.48550/arxiv.1906.00722
Abstract
We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.
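The abstract describes the topological loss only at a high level. As one way to make the idea concrete, the sketch below is an illustrative approximation rather than the authors' reference implementation: it builds a 0-dimensional topological loss under the assumption that the 0-dimensional persistence pairing of a Vietoris-Rips filtration coincides with the edge set of a minimum spanning tree of the pairwise distance matrix. PyTorch and SciPy are assumed, and the helper names (`pairwise_distances`, `persistence_pairs`, `topological_loss`) are hypothetical.

```python
# Illustrative sketch of a 0-dimensional persistent-homology loss between an
# input batch x and its latent codes z (not the authors' reference code).
import torch
from scipy.sparse.csgraph import minimum_spanning_tree


def pairwise_distances(points: torch.Tensor) -> torch.Tensor:
    """Euclidean distance matrix for a batch of points, shape (n, d) -> (n, n)."""
    return torch.cdist(points, points, p=2)


def persistence_pairs(dist: torch.Tensor):
    """Edge indices (i, j) of a minimum spanning tree of the distance matrix.

    Under the assumption stated above, these edges correspond to the
    0-dimensional persistence pairing of the Vietoris-Rips filtration.
    The combinatorial selection itself is computed on detached values.
    """
    mst = minimum_spanning_tree(dist.detach().cpu().numpy()).tocoo()
    rows = torch.as_tensor(mst.row, dtype=torch.long)
    cols = torch.as_tensor(mst.col, dtype=torch.long)
    return rows, cols


def topological_loss(x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Compare the distances selected by each space's pairing against the other space."""
    dx, dz = pairwise_distances(x), pairwise_distances(z)
    ix, jx = persistence_pairs(dx)  # pairing computed on the input space
    iz, jz = persistence_pairs(dz)  # pairing computed on the latent space
    loss_x = ((dx[ix, jx] - dz[ix, jx]) ** 2).sum()  # input pairing, evaluated in both spaces
    loss_z = ((dz[iz, jz] - dx[iz, jz]) ** 2).sum()  # latent pairing, evaluated in both spaces
    return 0.5 * (loss_x + loss_z)
```

In training, such a term would typically be added to the usual reconstruction objective with a weighting factor, e.g. `loss = reconstruction_loss + lam * topological_loss(x, z)` (where `lam` is a hypothetical hyperparameter). Because the pairing is a detached combinatorial selection, gradients flow only through the selected pairwise distances, which is what allows the topological term to be optimised by gradient descent.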