Reinforcement learning
Computer science
Artificial intelligence
Robustness (evolution)
Fitness function
Hidden Markov model
Artificial neural network
Machine learning
Algorithm
Genetic algorithm
Biochemistry
Gene
Chemistry
Identifier
DOI:10.1016/j.cma.2018.11.026
Abstract
This paper presents a new meta-modeling framework that employs deep reinforcement learning (DRL) to generate mechanical constitutive models for interfaces. The constitutive models are conceptualized as information flow in directed graphs. The process of writing constitutive models is simplified as a sequence of forming graph edges with the goal of maximizing the model score (a function of accuracy, robustness and forward prediction quality). Thus, meta-modeling can be formulated as a Markov decision process with well-defined states, actions, rules, objective functions and rewards. By using neural networks to estimate policies and state values, the computer agent is able to efficiently self-improve the constitutive model it generates through self-play, in the same way AlphaGo Zero (the algorithm that outplayed the world champion in the game of Go) improves its gameplay. Our numerical examples show that this automated meta-modeling framework not only produces models that outperform existing cohesive models on benchmark traction–separation data, but is also capable of detecting hidden mechanisms among micro-structural features and incorporating them in constitutive models to improve forward prediction accuracy, both of which are difficult tasks to perform manually.
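The abstract's central idea is that model generation can be cast as a Markov decision process: the state is the set of graph edges chosen so far, an action adds one directed edge, and the reward is the final model score. The following is a minimal, hypothetical sketch of that formulation; all names (`GraphMDP`, `score_model`, the node labels) are illustrative assumptions, not the paper's actual implementation, and a random policy stands in for the DRL agent.

```python
import random


class GraphMDP:
    """Toy MDP: the state is the set of directed edges chosen so far
    among a fixed set of nodes (a stand-in for the paper's model graph)."""

    def __init__(self, nodes, max_edges):
        self.nodes = nodes
        self.max_edges = max_edges
        self.edges = []

    def actions(self):
        """All directed edges not yet present in the graph."""
        candidates = [(a, b) for a in self.nodes for b in self.nodes if a != b]
        return [e for e in candidates if e not in self.edges]

    def step(self, edge):
        """Add one edge; the episode ends when the edge budget is spent."""
        self.edges.append(edge)
        done = len(self.edges) == self.max_edges
        # Placeholder terminal reward; in the paper this would be the
        # model score (accuracy, robustness, forward prediction quality).
        reward = score_model(self.edges) if done else 0.0
        return reward, done


def score_model(edges):
    """Toy stand-in for evaluating the constitutive model built from edges."""
    target = {("strain", "stress")}
    return float(len(target & set(edges)))


def random_rollout(seed=0):
    """One self-play episode with a random policy (stand-in for the DRL agent)."""
    rng = random.Random(seed)
    mdp = GraphMDP(["strain", "stress", "damage"], max_edges=2)
    total, done = 0.0, False
    while not done:
        reward, done = mdp.step(rng.choice(mdp.actions()))
        total += reward
    return total
```

In the actual framework, `random_rollout` would be replaced by policy/value networks refined through self-play, as in AlphaGo Zero.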