Computer science
Scalability
Speedup
Netlist
Parallel computing
Automatic test pattern generation
Multi-core processor
Graphics
Process (computing)
Computer architecture
Artificial intelligence
Embedded system
Programming language
Operating system
Electronic circuit
Electrical engineering
Engineering
Authors
Xiao Lin, Liyang Lai, Huawei Li
Identifier
DOI: 10.1109/itc-asia53059.2021.9808580
Abstract
Static learning is an algorithm for finding additional implicit implications between gates in a netlist. In automatic test pattern generation (ATPG), the learned implications help recognize conflicts and redundancies early and thus greatly improve ATPG performance. Although ATPG can benefit further from multiple runs of incremental or dynamic learning, this is only feasible when the learning process is fast enough. In this paper, we study speeding up static learning through parallelization on a heterogeneous computing platform that includes multi-core microprocessors (CPUs) and graphics processing units (GPUs). We discuss the advantages and limitations of each of these architectures. With their specific features in mind, we propose two different parallelization strategies tailored to multi-core CPUs and GPUs, respectively. The speedup and performance scalability of the two proposed parallel algorithms are analyzed. To the best of our knowledge, this is the first time that parallel static learning has been studied in the literature.
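The abstract summarizes static learning as deriving implicit implications between gates: assign a value to a signal, perform implication across the netlist, and record the contrapositives of the implied values. Each such per-signal task only reads the shared netlist, which is what makes parallelization across CPU cores natural. The following is a minimal, hypothetical sketch of that idea on a toy AND/OR/NOT gate model; the `Gate` structure, the `propagate`, `learn_for_signal`, and `parallel_static_learning` names, and the process-pool partitioning are illustrative assumptions for exposition, not the paper's actual data structures or its CPU/GPU algorithms.

```python
# Sketch only: toy forward-implication model of static learning via contraposition.
from collections import namedtuple
from concurrent.futures import ProcessPoolExecutor

# A gate has a kind ("AND", "OR", "NOT"), a list of input signal names, and one output.
Gate = namedtuple("Gate", ["kind", "inputs", "output"])

def propagate(gates, assignment):
    """Forward-imply gate outputs from the trial assignment until a fixed point.
    Returns the extended assignment, or None if a conflict is detected."""
    values = dict(assignment)
    changed = True
    while changed:
        changed = False
        for g in gates:
            ins = [values.get(i) for i in g.inputs]  # None means unknown
            new = None
            if g.kind == "AND":
                if 0 in ins:
                    new = 0
                elif all(v == 1 for v in ins):
                    new = 1
            elif g.kind == "OR":
                if 1 in ins:
                    new = 1
                elif all(v == 0 for v in ins):
                    new = 0
            elif g.kind == "NOT":
                if ins[0] is not None:
                    new = 1 - ins[0]
            if new is not None:
                old = values.get(g.output)
                if old is None:
                    values[g.output] = new
                    changed = True
                elif old != new:
                    return None  # conflict: trial assignment is inconsistent
    return values

def learn_for_signal(task):
    """One independent learning task: try (sig = val), imply, and record
    the contrapositive of every implied value as a learned implication."""
    gates, sig, val = task
    implied = propagate(gates, {sig: val})
    learned = []
    if implied is not None:
        for other, v in implied.items():
            if other != sig:
                # (sig = val) => (other = v), hence the contrapositive
                # (other = 1 - v) => (sig = 1 - val) is learned.
                learned.append(((other, 1 - v), (sig, 1 - val)))
    return learned

def parallel_static_learning(gates, signals, workers=4):
    """Tasks share only the read-only netlist, so they can be distributed
    across a pool of worker processes (a CPU-style parallelization)."""
    tasks = [(gates, s, v) for s in signals for v in (0, 1)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(learn_for_signal, tasks)
    return [impl for chunk in chunks for impl in chunk]

if __name__ == "__main__":
    # y = AND(a, b); z = NOT(y).  Setting a = 0 forward-implies y = 0 and z = 1,
    # so the learned contrapositives include (y = 1) => (a = 1) and (z = 0) => (a = 1).
    netlist = [Gate("AND", ["a", "b"], "y"), Gate("NOT", ["y"], "z")]
    for impl in parallel_static_learning(netlist, ["a", "b"], workers=2):
        print(impl)
```

This sketch performs forward implication only and keeps every contrapositive; a real static-learning pass also applies backward implication and retains only the indirect implications not derivable directly, and a GPU variant would need a very different, SIMT-friendly data layout. The sketch is meant only to convey why the learning tasks are independently parallelizable.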