Reinforcement learning
Trajectory
Drilling
Measurement while drilling
Traverse (surveying)
Computer science
Plan (archaeology)
Drill (exercise)
Process (computing)
Simulation
Artificial intelligence
Engineering
Mechanical engineering
Geology
Paleontology
Physics
Geodesy
Astronomy
Operating system
Authors
Narendra Vishnumolakala, Vivek Kesireddy, Sheelabhadra Dey, Eduardo Gildin, Enrique Z. Losoya
Abstract
The efficiency of modern drilling operations depends on the planning phase to determine possible well trajectories and on the ability of the directional driller to traverse them accurately. Deviations from the planned trajectory while drilling often require updates to the original well plan, involving drilling engineers and rig personnel, which can be time-consuming due to several uncertainties, such as formation tendencies, survey measurement inaccuracy, or estimation errors. To address these challenges, this paper proposes an innovative solution that leverages artificial intelligence (AI) methods, specifically deep reinforcement learning (DRL), to dramatically reduce the need for continuous corrections to the well plan while drilling. In the DRL paradigm, the proposed approach eliminates the need for constant plan adjustments by training a drilling agent to imitate the driller's ability to dynamically adjust the well trajectory in real time based on information from previous drilling logs, well plans, and near-the-bit measurements. This research utilizes a physics-based simulation engine to model a directional drilling environment as a Markov Decision Process (MDP). The MDP underpins an autonomous system that uses geological data models and real-time measurements while drilling (MWD) to train DRL agents to drill directional wellbores that maintain maximum contact with the target formation. The simulator incorporates uncertainties of the real-world drilling environment, such as bit walk, formation properties, and drilling speed, which together inform the actions performed by the drilling agent. Our findings reveal that, using the proposed methodology in a virtual drilling environment, the drilling agent effectively navigates well paths in the presence of uncertainties and successfully tackles challenges such as avoiding excessive tortuosity and doglegs while maximizing contact with the target formation.
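To make the MDP framing concrete, the following is a minimal illustrative sketch of a directional-drilling environment in Python. All names, dynamics, and numeric values (state layout, the 'build'/'drop'/'hold' action set, the target depth window, the tortuosity penalty) are assumptions for illustration, not the paper's actual simulator:

```python
import random

class DrillingMDPSketch:
    """Toy directional-drilling MDP (illustrative only, not the paper's engine).

    State:   (measured depth, true vertical depth, inclination in degrees).
    Actions: steer 'build' (+1 deg), 'drop' (-1 deg), or 'hold' the inclination.
    Reward:  positive while the bit stays inside an assumed target-formation
             depth band, with a small steering penalty to discourage
             excessive tortuosity and doglegs.
    """

    STEP_MD = 10.0                  # measured-depth advance per step (ft)
    TARGET_TVD = (5000.0, 5050.0)   # assumed pay-zone depth window (ft)

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # Start near-horizontal, slightly above the assumed pay zone.
        self.md, self.tvd, self.inc = 0.0, 4900.0, 90.0
        return (self.md, self.tvd, self.inc)

    def step(self, action):
        # Stochastic "bit walk" models formation-induced drift in inclination.
        walk = self.rng.gauss(0.0, 0.3)
        d_inc = {"build": 1.0, "drop": -1.0, "hold": 0.0}[action]
        self.inc += d_inc + walk
        # 90 deg inclination drills flat; deviation from 90 changes TVD.
        self.tvd += self.STEP_MD * (90.0 - self.inc) / 57.3
        self.md += self.STEP_MD
        in_zone = self.TARGET_TVD[0] <= self.tvd <= self.TARGET_TVD[1]
        reward = (1.0 if in_zone else -1.0) - 0.1 * abs(d_inc)
        done = self.md >= 3000.0
        return (self.md, self.tvd, self.inc), reward, done
```

A DRL agent trained against such an interface observes the state tuple, chooses a steering action each step, and is rewarded for maximizing contact with the target band while keeping the path smooth.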
Furthermore, the use of domain randomization during training enabled the RL agents to exhibit exceptional generalizability to a wide range of drilling scenarios: test sites were drawn at random from a set of sites unseen during training, demonstrating the agent's ability to adapt and reach the target formation even when the initial well plan is inaccurate, with a 90% success rate. This self-correcting approach demonstrates the potential for automated, proactive, self-contained steering operations with minimal human involvement. The developed simulation framework is a pioneering approach to enhancing real-time adjustment of drilling well paths using reinforcement learning and to optimizing drilling operations. It is a first-of-its-kind method that augments drillers' ability to navigate drilling uncertainties, maximizes pay zone contact, and paves the way for robust, scalable, and practical autonomous drilling systems.
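Domain randomization of the kind described above can be sketched as sampling environment parameters from broad ranges for each training episode, so the policy never overfits to one site. The parameter names and ranges below are hypothetical, chosen only to illustrate the technique:

```python
import random

def make_randomized_scenario(rng):
    """Sample one training scenario. Domain randomization draws environment
    parameters (illustrative names and ranges, not the paper's actual set)
    from wide distributions so the learned policy generalizes to unseen sites."""
    return {
        "bit_walk_std_deg": rng.uniform(0.1, 0.6),       # formation-induced drift
        "target_top_tvd_ft": rng.uniform(4950.0, 5100.0),
        "target_thickness_ft": rng.uniform(30.0, 80.0),
        "rop_ft_per_hr": rng.uniform(50.0, 200.0),       # rate of penetration
    }

def training_scenarios(n, seed=42):
    """Build a reproducible list of n randomized drilling scenarios."""
    rng = random.Random(seed)
    return [make_randomized_scenario(rng) for _ in range(n)]
```

Evaluating the trained agent on scenarios generated from a held-out seed then mirrors the paper's test on unseen drilling sites.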