Keywords: Python (programming language), computer science, artificial intelligence, genetic programming, terrain, reinforcement learning, machine learning, programming languages
Authors
Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, Kenneth O. Stanley
Source
Book series: Genetic and Evolutionary Computation
Date: 2023-11-01
Pages: 331-366
Cited by: 20
Identifier
DOI: 10.1007/978-981-99-3814-8_11
Abstract
This chapter pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate likely changes that humans would make. To highlight the breadth of implications of such evolution through large models (ELM), in the main experiment ELM combined with MAP-Elites generates hundreds of thousands of functional examples of Python programs that output working ambulating robots in the Sodarace domain, which the original LLM had never seen in pretraining. These examples then help to bootstrap training a new conditional language model that can output the right walker for a particular terrain. The ability to bootstrap new models that can output appropriate artifacts for a given context in a domain where zero training data was previously available carries implications for open-endedness, deep learning, and reinforcement learning. These implications are explored here in depth in the hope of inspiring new directions of research now opened up by ELM.
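The core loop the abstract describes, an LLM acting as the mutation operator inside a MAP-Elites archive, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the paper's implementation: llm_mutate (here a random constant-perturbing stand-in), the toy evaluate function, and the 20-bin niche descriptor are all hypothetical placeholders. In ELM itself, mutation is performed by a diff-trained code LLM and evaluation runs the generated Sodarace walker in simulation.

```python
import random


def llm_mutate(program: str) -> str:
    """Placeholder for the LLM-driven mutation step.

    In ELM this would prompt a diff-trained code model to propose an
    edited program; here a random perturbation of a numeric constant
    keeps the sketch self-contained and runnable.
    """
    tokens = program.split()
    i = random.randrange(len(tokens))
    try:
        value = float(tokens[i])
        tokens[i] = str(value + random.uniform(-1.0, 1.0))
    except ValueError:
        pass  # token was not a number; leave the program unchanged
    return " ".join(tokens)


def evaluate(program: str) -> tuple[float, int]:
    """Stand-in for running the program and scoring its output.

    Returns (fitness, niche). ELM would simulate the walker the
    program produces; here fitness is closeness of the program's
    constant to 10, and the niche is the constant's integer part,
    clamped into 20 behavior bins.
    """
    value = float(program.split()[-1])
    fitness = -abs(value - 10.0)
    niche = max(0, min(19, int(value)))
    return fitness, niche


def map_elites(seed: str, iterations: int = 2000) -> dict[int, tuple[float, str]]:
    """Minimal MAP-Elites: keep the best program found in each niche."""
    archive: dict[int, tuple[float, str]] = {}
    fitness, niche = evaluate(seed)
    archive[niche] = (fitness, seed)
    for _ in range(iterations):
        # Pick a random elite, mutate it, and keep the child if it
        # beats the current occupant of its niche (or fills a new one).
        _, parent = random.choice(list(archive.values()))
        child = llm_mutate(parent)
        try:
            fitness, niche = evaluate(child)
        except ValueError:
            continue  # discard children that fail to evaluate
        if niche not in archive or fitness > archive[niche][0]:
            archive[niche] = (fitness, child)
    return archive


if __name__ == "__main__":
    elites = map_elites("x = 3.0")
    print(f"filled {len(elites)} of 20 niches")
```

The archive of elites produced by such a loop is what the chapter then uses as training data: the hundreds of thousands of diverse, functional programs it accumulates are exactly the corpus that bootstraps the terrain-conditioned model described in the abstract.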