Automated Wellhead Monitoring Using Deep Learning from Multimodal Imaging
Wellhead
Computer Science
Artificial Intelligence
Segmentation
Computer Vision
Image Segmentation
Simulation
Engineering
Petroleum Engineering
Authors
Weichang Li, Yong Ma, Damian San Roman Alerigi
Identifier
DOI: 10.2523/iptc-23632-ms
Abstract
Wellhead growth caused by temperature and pressure effects during production can have severe consequences, including well-integrity failure and surface-equipment damage, sometimes culminating in catastrophic incidents with major safety risks and economic losses. It may also lead to unintended emissions when pipe connections are damaged. This work develops multimodal imaging and computer-vision-based methods for automated wellhead equipment health monitoring, notably the detection and quantification of wellhead displacement or growth. Wellhead equipment is imaged at the well site using optical and, where available, hyperspectral cameras. The captured wellhead imagery or video is then fed into a computer vision system that uses machine learning techniques to determine the wellhead health condition, such as the amount of displacement or growth. First, a set of sample wellhead images is labeled with segmentation annotations or bounding boxes. The labeled samples are then randomly partitioned into training, validation, and testing subsets according to a chosen ratio. We then construct semantic segmentation and object detection models, train them on the training and validation subsets, and apply them to the testing set for performance assessment. The trained models can then be applied to new well-site imagery from permanent monitoring to extract the wellhead equipment. The extracted wellhead equipment image is compared with the baseline wellhead image and dimensions for growth detection and quantification. This removes interference from background objects, ambient lighting variations, and other non-equipment-related conditions. We collected over 4,000 well-site images containing wellhead equipment, of which we labeled a subset of 1,200 samples, randomly partitioned into 900 training, 150 validation, and 150 testing samples.
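The random train/validation/test partition described above (900/150/150 out of 1,200 labeled samples) can be sketched as follows. This is a minimal illustration, not code from the paper; the function name and the fixed seed are assumptions for reproducibility.

```python
import random

def partition_samples(sample_ids, n_train=900, n_val=150, n_test=150, seed=0):
    """Randomly partition labeled sample IDs into train/val/test subsets.

    The split sizes default to the 900/150/150 partition used in this work.
    """
    ids = list(sample_ids)
    assert len(ids) >= n_train + n_val + n_test, "not enough labeled samples"
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

train, val, test = partition_samples(range(1200))
print(len(train), len(val), len(test))  # 900 150 150
```

Shuffling with a seeded `random.Random` instance (rather than the global generator) keeps the partition reproducible without affecting other randomness in the pipeline.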
After training and validating the Mask R-CNN model on the training and validation samples, respectively, the model is applied to the testing samples. The training and validation performance in terms of Intersection over Union (IoU) reaches 89% and 78%, respectively, and the test performance achieves 75% IoU. The segmented wellhead equipment image is then compared with the baseline; after 2D cross-registration, we achieve highly accurate prediction of displacement. This computer-vision, image-driven approach to wellhead displacement prediction has a significant advantage over traditional thermo-stress model-based approaches in that it can detect displacement in real time with high accuracy.
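The two quantitative steps above, mask IoU scoring and 2D registration of the segmented equipment against the baseline image, can be sketched with NumPy alone. This is an illustrative assumption, not the paper's implementation: the paper does not specify its registration method, and here translation is estimated via FFT-based phase correlation, which recovers integer pixel shifts.

```python
import numpy as np

def mask_iou(pred, target):
    """Intersection over Union between two binary segmentation masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def estimate_shift(baseline, current):
    """Estimate the (dy, dx) pixel translation of `current` relative to
    `baseline` via phase correlation (normalized cross-power spectrum)."""
    F1 = np.fft.fft2(baseline)
    F2 = np.fft.fft2(current)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep only phase information
    corr = np.fft.ifft2(cross).real         # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                          # wrap to signed displacements
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Hypothetical usage: a synthetic image displaced by 3 px down, 2 px right.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
moved = np.roll(base, (3, 2), axis=(0, 1))
print(estimate_shift(base, moved))  # (3, 2)
```

In practice the segmentation mask would first isolate the wellhead equipment so that background objects and lighting changes do not bias the correlation peak; subpixel refinement (e.g., interpolating around the peak) would be needed for fine growth quantification.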