Information bottleneck method
Bottleneck
Computer science
Artificial intelligence
Popularity
Representation (politics)
Deep neural network
Machine learning
Deep learning
Artificial neural network
Taxonomy (biology)
External data representation
Data science
Information retrieval
Cluster analysis
Embedded system
Psychology
Social psychology
Plant
Politics
Political science
Law
Biology
Authors
Shizhe Hu, Zhengzheng Lou, Xiaoqiang Yan, Yangdong Ye
Identifier
DOI: 10.1109/tpami.2024.3366349
Abstract
This survey is dedicated to the memory of one of the creators of information bottleneck theory, Prof. Naftali Tishby, who passed away at the age of 68 in August 2021. The information bottleneck (IB), an information-theoretic approach to pattern analysis and representation learning, has gained widespread popularity since its introduction in 1999. It provides an elegant balance between data compression and information preservation, and thereby improves prediction or representation ability. This survey summarizes both the theoretical progress and the practical applications of IB over the past 20-plus years, systematically exploring its basic theory, optimization, extended models, and task-oriented algorithms. Existing IB methods are roughly divided into two groups, traditional and deep IB: the former contains IB methods optimized with traditional machine-learning techniques that involve no neural networks, while the latter covers IB methods concerned with the interpretation, optimization, and improvement of deep neural networks (DNNs). Specifically, based on a taxonomy of techniques, traditional IBs are further classified into three categories: Basic, Informative, and Propagating IB; the deep IBs, based on a taxonomy of problem settings, comprise Debate: Understanding DNNs with IB, Optimizing DNNs Using IB, and DNN-based IB methods. Furthermore, some open issues deserving future research are discussed. This survey attempts to draw a more complete picture of IB, from which subsequent studies can benefit.
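As a rough illustration of the compression/preservation balance described above (standard notation, not taken from the survey itself), the classical IB objective seeks a stochastic encoding T of the input X that is as compressed as possible while retaining information about the target Y:

    \min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta\, I(T;Y),

where I(·;·) denotes mutual information and β > 0 weights prediction against compression; a small β favors aggressive compression, while a large β favors preserving everything relevant to Y.

Deep IB methods of the kind grouped above under "DNN-based IB methods" typically optimize a variational surrogate of this objective. The following is a minimal NumPy sketch in the spirit of the deep variational IB, under assumed names and shapes (vib_loss, a diagonal-Gaussian encoder output mu/log_var, a standard-normal prior, and an arbitrary beta); it is an illustration, not the survey's implementation:

import numpy as np

def vib_loss(logits, labels, mu, log_var, beta=1e-3):
    """Variational IB surrogate: cross-entropy (prediction term standing in for
    -I(T;Y)) plus beta * KL(q(t|x) || N(0, I)) (compression term bounding I(X;T))."""
    # Batch cross-entropy from raw logits (n, num_classes) and integer labels (n,).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Closed-form KL between the diagonal Gaussian q(t|x) = N(mu, diag(exp(log_var)))
    # and the standard-normal prior, averaged over the batch.
    kl = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var).sum(axis=1).mean()
    return ce + beta * kl

# Example call with random values standing in for an encoder/classifier output.
rng = np.random.default_rng(0)
loss = vib_loss(rng.normal(size=(8, 10)), rng.integers(0, 10, size=8),
                rng.normal(size=(8, 32)), rng.normal(size=(8, 32)))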