Reinforcement learning
Reinforcement
Computer science
Artificial intelligence
Psychology
Social psychology
Authors
Xiaoxiao Liang,Yikang Ouyang,Haoyu Yang,Bei Yu,Yuzhe Ma
Identifiers
DOI:10.1109/tcad.2023.3309745
Abstract
Mask optimization is a vital step in the VLSI manufacturing process at advanced technology nodes. As one of the most representative techniques, optical proximity correction (OPC) is widely applied to enhance printability. Since conventional OPC methods incur prohibitive computational overhead, recent research has applied machine learning techniques for efficient mask optimization. However, existing discriminative learning models rely on a given dataset for supervised training, and generative learning models usually leverage a proxy optimization objective for end-to-end learning, which may limit their feasibility. In this article, we pioneer the introduction of a reinforcement learning (RL) model for mask optimization, which directly optimizes the preferred objective without relying on a differentiable proxy. Extensive experiments show that our method outperforms state-of-the-art solutions, including academic approaches and commercial toolkits.
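To make concrete how an RL formulation can optimize a non-differentiable printability objective directly (rather than through a differentiable proxy), the sketch below shows a generic REINFORCE-style training loop. The segment count, feature dimension, discrete action set, and the `printability_reward` stub are illustrative assumptions only, not the paper's actual agent or lithography model; in a real OPC flow the reward would come from a lithography simulator scoring metrics such as edge placement error or process-variation band.

```python
# Minimal, illustrative REINFORCE sketch of RL-based mask optimization.
# The reward function below is a stand-in stub, NOT the paper's litho model;
# it only demonstrates optimizing a non-differentiable objective directly.
import torch
import torch.nn as nn

N_SEGMENTS = 64   # number of mask edge segments to move (assumed granularity)
N_ACTIONS = 5     # discrete per-segment moves, e.g. {-2, -1, 0, +1, +2} nm
FEAT_DIM = 8      # assumed per-segment feature size

class SegmentPolicy(nn.Module):
    """Maps per-segment features to a distribution over discrete moves."""
    def __init__(self, feat_dim=FEAT_DIM, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, N_ACTIONS),
        )

    def forward(self, feats):  # feats: (N_SEGMENTS, FEAT_DIM)
        return torch.distributions.Categorical(logits=self.net(feats))

def printability_reward(moves):
    """Placeholder for lithography simulation + printability scoring.
    In practice this would call a (non-differentiable) litho simulator;
    here a toy scalar is returned so the loop runs end to end."""
    target = torch.zeros_like(moves, dtype=torch.float32)
    return -(moves.float() - target).abs().mean()  # toy: prefer small moves

policy = SegmentPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
feats = torch.randn(N_SEGMENTS, FEAT_DIM)  # stand-in segment features

for step in range(200):
    dist = policy(feats)
    moves = dist.sample()                # one sampled move per edge segment
    reward = printability_reward(moves)  # non-differentiable objective
    # REINFORCE: scale log-probability of sampled actions by the reward,
    # so no gradient through the reward itself is required.
    loss = -(dist.log_prob(moves).sum() * reward.detach())
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Because only the log-probabilities of the sampled actions are differentiated, the reward can be any black-box score, which is the key difference from generative approaches that back-propagate through a differentiable proxy of the lithography process.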