Computer science
Personalization
Retraining
Asset (computer security)
Pruning
Artificial intelligence
Data science
Machine learning
World Wide Web
Business
Computer security
International trade
Agronomy
Biology
Authors
Alessandra Toniato, Alain C. Vaucher, Marzena Maria Lehmann, Torsten Luksch, Philippe Schwaller, Marco Stenta, Teodoro Laino
Identifier
DOI:10.1021/acs.chemmater.3c01406
Abstract
The world is on the verge of a new industrial revolution, and language models are poised to play a pivotal role in this transformative era. Their ability to offer intelligent insights and forecasts has made them a valuable asset for businesses seeking a competitive advantage. The chemical industry, in particular, can benefit significantly from harnessing their power. As early as 2016, language models were being applied to tasks such as predicting reaction outcomes or retrosynthetic routes. While such models have demonstrated impressive abilities, the lack of publicly available data sets with universal coverage is often the limiting factor for achieving even higher accuracies. This makes it imperative for organizations to incorporate proprietary data sets into their model training processes to improve performance. So far, however, these data sets frequently remain untapped, as there are no established criteria for model customization. In this work, we report a successful methodology for retraining language models on reaction outcome prediction and single-step retrosynthesis tasks using proprietary, nonpublic data sets. We report a considerable boost in accuracy by combining patent and proprietary data in a multidomain learning formulation. This exercise, inspired by a real-world use case, enables us to formulate guidelines that can be adopted in different corporate settings to customize chemical language models easily.