Boltzmann machine
Artificial intelligence
Representation (politics)
Computer science
Set (abstract data type)
Semantics (computer science)
Artificial neural network
Knowledge representation and reasoning
Preference
Relevance (law)
Theoretical computer science
State (computer science)
Natural language processing
Machine learning
Mathematics
Algorithm
Statistics
Politics
Political science
Law
Programming language
Authors
Glenn Blanchette, Anthony Robins
Identifiers
DOI:10.1093/logcom/exac104
Abstract
Information present in any training set of vectors for machine learning can be interpreted in two different ways, either as whole states or as individual atomic units. In this paper, we show that these alternative information distributions are often inherently incongruent within the training set. When learning with a Boltzmann machine, modifications in the network architecture can select one type of distributional information over the other, favouring the activation of either state exemplar or atomic characteristics. This choice of distributional information is of relevance when considering the representation of knowledge in logic. Traditional logic only utilises preference, which is the correlate of whole state exemplar frequency. We propose that knowledge representation derived from atomic characteristic activation frequencies is the correlate of compositional typicality, which currently has limited formal definition or application in logic. Further, we argue by counter-example that any representation of typicality by 'most preferred model semantics' is inadequate. We provide a definition of typicality derived from the probability of characteristic features, based on neural network modelling.
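The abstract's two readings of a training set can be made concrete. The sketch below (not the authors' code; the toy training set is hypothetical) computes whole-state exemplar frequencies, the correlate of preference, alongside per-feature atomic activation frequencies, the proposed correlate of compositional typicality, and shows the two can be incongruent: the state assembled from majority features may never appear in the training set at all.

```python
from collections import Counter

# Hypothetical training set of binary feature vectors (illustration only).
training_set = [
    (1, 1, 0), (1, 1, 0),
    (1, 0, 1), (1, 0, 1),
    (0, 1, 1), (0, 1, 1),
    (0, 0, 0),
]
n = len(training_set)

# Reading 1: whole-state exemplar frequencies (correlate of preference).
state_freq = Counter(training_set)

# Reading 2: atomic characteristic activation frequencies, i.e. how often
# each individual feature is active across the training set.
atomic_freq = [sum(v[i] for v in training_set) / n for i in range(3)]

# State assembled from the majority-active atomic features.
typical_by_atoms = tuple(int(f > 0.5) for f in atomic_freq)

print(atomic_freq)                    # [4/7, 4/7, 4/7] — every feature is
                                      # active in a majority of exemplars
print(typical_by_atoms)               # (1, 1, 1)
print(state_freq[typical_by_atoms])   # 0 — (1, 1, 1) never occurs as a
                                      # whole-state exemplar
```

Here the atomically "typical" state (1, 1, 1) has exemplar frequency zero, so no ranking of whole states by frequency (a 'most preferred model' ordering) can recover it, which is the flavour of incongruence the abstract describes.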