Executable
Computer science
Test (biology)
Set (abstract data type)
Statistics
Data mining
Statistical hypothesis testing
Regression testing
Test apparatus
Algorithm
Mathematics
Software
Artificial intelligence
Programming language
Software development
Paleontology
Software construction
Biology
Authors
Phyllis G. Frankl, Stewart N. Weiss
Abstract
An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than are all-edges adequate test sets. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases as the percentage of definition-use associations or edges covered by it increases. Error exposing ability was shown to be strongly positively correlated to percentage of covered definition-use associations in only four of the nine subjects. Error exposing ability was also shown to be positively correlated to the percentage of covered edges in four different subjects, but the relationship was weaker.
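The logistic regression analysis described in the abstract can be illustrated with a minimal sketch. This is not the authors' code or data: the data below are synthetic, generated under the assumption that the probability of exposing an error rises with the fraction of definition-use associations covered, and the model is fit with a plain gradient-ascent routine on the logistic log-likelihood.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit P(error exposed) = sigmoid(b0 + b1 * coverage) by gradient
    ascent on the log-likelihood. Returns the intercept and slope."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

random.seed(0)
# Synthetic stand-in for one subject program: xs is the fraction of
# definition-use associations covered by a random test set, ys records
# whether that test set exposed the error (1) or not (0).
xs = [random.random() for _ in range(400)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-(-2.0 + 4.0 * x))) else 0
      for x in xs]

b0, b1 = fit_logistic(xs, ys)
# A positive fitted slope b1 corresponds to the paper's finding for the
# strongly correlated subjects: exposure probability rises with coverage.
print(b1 > 0)
```

A per-subject analysis like the paper's would fit this model once per subject program and then test whether the slope coefficient differs significantly from zero.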