Toward machine learning-enhanced high-throughput experimentation for chemistry

Author
Sarah Callaghan
Source
Journal: Patterns [Elsevier BV]
Volume/issue: 2(3): 100221; Cited by: 7
Identifier
DOI: 10.1016/j.patter.2021.100221
Abstract

High-throughput experimentation in chemistry allows for quick and automated exploration of chemical space to, for example, discover new drugs. Combining machine learning techniques with high-throughput experimentation has the potential to speed up and improve chemical space exploration and optimization.

The Trends in Chemistry February 2021 issue is a special issue on machine learning (ML) for molecules and materials. This special issue is understandably targeted toward the domain science of chemistry, rather than having the data science focus of Patterns, but it does highlight the important ways that machine learning informs, bridges, and aids aspects of the synthesis, discovery, and optimization cycle for new molecules and materials. Performing these tasks has historically been extremely difficult, costly, and/or labor intensive, so the application of machine learning to speed up this process has the potential to drive progress in this field. The guest editors of this special issue are Prof. Rafael Gómez-Bombarelli and Dr. Alexander B. Wiltschko.

The focus of this preview is the article "Toward Machine Learning-Enhanced High-Throughput Experimentation" by Eyke et al. (2021). I chose it as a representative sample from the special issue, as it discusses many of the issues with data that are common across many fields, not just chemical discovery and synthesis.

High-throughput experimentation (HTE) allows many parallel chemistry experiments to be conducted simultaneously and more efficiently by using a variety of automated routine chemical workflows. The resulting experiments are conducted uniformly and more cheaply, and the analysis datasets are generated consistently. This allows the properties of large chemical libraries to be screened quickly and cost efficiently, which is helpful in a field where many experiments are required to make discoveries.

In the chemistry domain, much work has been done on ML-based experimental design tools and automated experimentation platforms. Combining these two methodologies has great potential to speed up and improve chemical space exploration and optimization. This combination also has the advantage of being self-reinforcing: the ML algorithms improve the efficiency with which the platforms can navigate chemical space, and the data collected on the platforms can be fed back into the ML models to improve their performance, although the most effective way of combining the two is still up for debate. The article describes the developments in ML for chemistry that facilitate data processing, experimental design for maximally efficient experimentation, and applications such as synthesis planning. The authors also describe the latest experimental platforms, including advances in platform-level control systems, hardware implementation, and comprehensive data capture and analytics.
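To make the self-reinforcing loop concrete, here is a minimal sketch of such a closed-loop workflow. The run_on_platform function and the random-forest surrogate are illustrative placeholders for this preview, not the authors' system or any specific platform API.

```python
# Minimal sketch of a closed-loop ML + HTE cycle (illustrative only).
# run_on_platform() is a hypothetical stand-in for an automated HTE platform.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def run_on_platform(conditions):
    """Hypothetical placeholder: execute experiments, return measured yields."""
    return np.random.rand(len(conditions))  # replace with real platform calls

candidates = np.random.rand(500, 3)  # candidate reaction conditions, scaled to [0, 1]
X, y = candidates[:8], run_on_platform(candidates[:8])  # small seed dataset

model = RandomForestRegressor(n_estimators=200, random_state=0)
for _ in range(5):                     # a few rounds of the closed loop
    model.fit(X, y)                    # retrain on all data collected so far
    preds = model.predict(candidates)  # score every candidate
    batch = candidates[np.argsort(preds)[-8:]]  # pick the most promising batch
    X = np.vstack([X, batch])                   # feed the new results back in
    y = np.concatenate([y, run_on_platform(batch)])
    # A real loop would also remove already-tested candidates from the pool.
```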
The integration of automated analytical instruments that can generate rich information while preserving throughput, along with ML algorithms capable of automatically processing the data, is a common theme in other physical science domains as well as chemistry. The authors point out that "systems that automatically upload the data to reaction databases and/or export it into standardized formats that can be included in the supplementary information of publications to facilitate later extraction would help overcome the issues with existing data." My feeling is that this is a good first step, and the use of community standards and commonly used and trusted data repositories is essential. I would encourage the community to investigate data sharing and archiving systems in other physical science domains, and also not to relegate the important information about the data to the supplementary information. Data are first-class research objects and are an essential part of ensuring scientific verifiability and reproducibility.

The discussion of automated HTE platforms acknowledges that these platforms tend to be well suited to exploring narrow chemical spaces, although efforts are ongoing to expand those spaces. Many powerful ML models have been reported in the literature for this domain as well, but, unsurprisingly, their accuracy and domain of applicability (DOA) are constrained by the available data. A completely automated synthesis platform depends on access to a model that can readily predict the "best" route to a target compound (where "best" can depend on a wide range of, sometimes conflicting, factors). Existing datasets often suffer from missing information or dataset imbalance, and many need substantial cleaning and curation to be suitable for use with ML techniques. As a result of these issues, existing synthesis planning tools are generally capable of suggesting viable routes but are unable to fully specify synthesis recipes. The article gives specific examples of these issues and describes results familiar to researchers trying to build general-purpose models: models trained on one dataset perform badly on others.

ML models require large amounts of data, so researchers need to use pre-existing data. As is the case with so many experimental domains, the historical data available for chemical ML lack sufficient quality and/or relevance to fulfil objectives of interest. A strategy common across domains is to augment the available literature data to make them better suited to the task, which also means extracting data from the literature in a useful and standardized way. Quantity is not enough, however; data relevance and data quality are also vital aspects that need to be considered. Sometimes the community has no choice but to generate higher-quality data. As we all know, brute-force methods for data generation are inefficient as well as inelegant, and computational models are expensive to run, not only in terms of time but also in terms of carbon and electricity. Efficient experimental design tools, whether they are based on new or pre-existing data, navigate the chemical space and avoid the collection of redundant information. These tools narrow the experiments to be run from the set of all possible experiments in a domain to find the balance between those that are most informative (exploration) and those most likely to be optimal (exploitation).
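As an illustration of the kind of standardized, machine-readable export the authors call for, the snippet below serializes a single reaction record to JSON. The field names are a hypothetical minimal schema chosen for this preview; they are not the Open Reaction Database schema or any format prescribed in the article.

```python
# Hedged sketch: export one reaction record in a standardized, machine-readable form.
# The schema and values below are hypothetical and purely illustrative.
import json

record = {
    "reaction_id": "example-0001",
    "reactants": [{"smiles": "c1ccccc1Br", "equivalents": 1.0}],
    "reagents": [{"name": "Pd catalyst", "role": "catalyst", "mol_percent": 2.0}],
    "solvent": {"smiles": "C1CCOC1", "volume_ml": 1.0},
    "conditions": {"temperature_c": 80, "time_h": 12},
    "outcome": {"product_smiles": "c1ccccc1C#N", "yield_percent": 73.5},
    "analysis": {"method": "LC-MS", "raw_data_uri": "https://example.org/data/0001"},
}

with open("reaction-0001.json", "w") as fh:
    json.dump(record, fh, indent=2)  # ready for deposit in a repository or the SI
```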
The focus on getting good quality data does have the benefit of an increasing community appreciation of the value of comprehensive data capture, aided by new initiatives such as the Open Reaction Database (https://docs.open-reaction-database.org), which aims not only to be a data repository but also to offer guidance on what kinds of data are useful to collect.

As well as a discussion of data collection and quality, the article also outlines methods of merging ML with traditional statistical methods of optimal experimental design for navigation of high-dimensional chemical space. These include the following:

- Traditional design of experiments (DOE) methods for reaction optimization tasks in a small design space involving a small number of primarily continuous variables.
- Bayesian optimization (BO) using a Gaussian process (GP)-based surrogate model to relate the input variables to the objective, although this comes with the computational expense of fitting GPs and optimizing the acquisition function in high dimensions. It is becoming common to perform GP-based BO in a dimensionality-reduced space defined using some sort of autoencoder, such as a variational autoencoder (VAE), or using more traditional dimensionality reduction algorithms like principal-component analysis (PCA), as this allows higher input dimensionality (see the sketch at the end of this section). The combination of BO with generative models has also been a popular area of chemical research in recent years.
- Bayesian neural networks (BNNs), which can also be used to construct the probabilistic surrogate model.
- Traditional neural networks (NNs) and random forests (RFs), which can also be used as surrogate models and are therefore useful in large design spaces with high input dimensionality, even though they are not innately probabilistic. Strategies for uncertainty estimation for NNs and RFs exist, allowing exploration-exploitation experimental design schemes analogous to those deployed for BO.
- Other experimental design strategies, including those based on reinforcement learning and divergence measures.

Critically, the information that is recorded during experimentation directly determines the types of chemical models that can be constructed from the data. Many current HTE platforms for reaction screening achieve increased throughput by initially restricting the analysis to a small set of low-cost observables, and the most promising or interesting results are subsequently investigated in greater detail offline. While this tiered approach has yielded promising results, the information derived from the initial, high-throughput phase lacks enough detail to be useful for most general modeling tasks, so a balance must be found between the resources needed to comprehensively analyze a sample and the throughput needed to navigate large chemical spaces. A promising development for this problem is the use of automated, high-throughput, label-free techniques that can probe reaction chemistry in finer detail than targeted methods, automated at both the instrument and the data-processing levels. Robust control software that is capable of translating model predictions into machine-executable tasks, together with workflows that provide comprehensive analysis of the molecules produced on these platforms, is critical for providing information-rich datasets for ML efforts. Existing platform control networks are powerful but require specialized control-systems knowledge to implement and modify, knowledge that chemistry end-users do not typically have, making this a substantial barrier to entry.
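As referenced in the list above, here is a minimal sketch of one round of GP-based BO over a discrete candidate set, using an upper-confidence-bound acquisition to trade off exploration and exploitation. The scikit-learn surrogate, the random candidate grid, and the kappa value are illustrative choices for this preview, not the specific tools or settings discussed in the article.

```python
# Minimal sketch: one GP-based Bayesian optimization step over a candidate set,
# using an upper confidence bound (UCB) to balance exploration and exploitation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Previously measured reaction conditions (inputs scaled to [0, 1]) and yields.
X_obs = rng.random((12, 4))
y_obs = rng.random(12)  # placeholder for real measured objectives

# Fit the GP surrogate relating conditions to the objective.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_obs, y_obs)

# Score unmeasured candidates: mean (exploitation) + kappa * std (exploration).
candidates = rng.random((1000, 4))
mean, std = gp.predict(candidates, return_std=True)
kappa = 2.0
ucb = mean + kappa * std
next_batch = candidates[np.argsort(ucb)[-8:]]  # conditions to run next on the platform
```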
For ML-enhanced HTE platforms to be broadly accessible, there must be serious consideration of the operational design. As the authors conclude: "The potential to quickly generate tailormade datasets with ML-enhanced HTE represents a promising path toward accurate models with broad capabilities that can be systematically created on demand." This work requires close collaboration between domain researchers and data scientists, but it is an area with a great deal of promise and potential.

Web resources
Open Reaction Database, https://docs.open-reaction-database.org

Reference
1. Eyke, N.S., Koscher, B.A., and Jensen, K.F. (2021). Toward machine learning-enhanced high-throughput experimentation. Trends in Chemistry 3, 120-132.