Abstract
In 2012, "Prometheus," the prequel to the "Alien" movie franchise, was released. In the now-infamous C-section scene, archeologist Elizabeth Shaw, played by Noomi Rapace, uses a MedPod 720i to extract a rapidly growing squid-like creature from her abdomen after unknowingly being used in an experiment. The scene beautifully depicted the interface of image-guided surgery, artificial intelligence (AI), and robotic surgery at its pinnacle. For every surgeon who saw this scene, a glimpse of what the future of surgery holds was on proud display.

Image-guided surgery is not novel. We have long used transabdominal ultrasound to guide dilation and curettage, hysteroscopic lysis of adhesions, and uterine septum repairs. Oocyte retrieval has matured from a laparoscopically directed procedure to a transvaginal ultrasound-guided procedure that is safer and more efficient. Even embryo transfer has progressed from a blind procedure to one that is now guided by transabdominal ultrasound. Laparoscopic approaches to fibroid treatment now incorporate laparoscopic ultrasound to localize fibroids intraoperatively that would otherwise be missed because of the lack of haptic feedback.

All surgeons rely on preoperative imaging to help prepare them for surgery. For example, many surgeons who specialize in fibroid surgery rely on ultrasonography, sonohysterography, or magnetic resonance imaging to help them decide on the best surgical approach, gauge potential blood loss, and determine the anticipated number of fibroids that will be removed. Better imaging quality and radiologic protocols are making magnetic resonance imaging an integral part of diagnostic testing for patients with suspected endometriosis. Now, with easier and cheaper access to 3-dimensional (3D) printers, images can be converted to 3D models that let the surgeon practice their approach in the simulation laboratory before performing the surgery (1). In addition, more companies have invested in augmented reality, in which surgeons use virtual reality headsets and preoperative images to create a simulated environment in which they can rehearse their movements and surgical approach. In their video, Mercorio et al. (2) in Italy have beautifully demonstrated a developing technology in which 3D images of the fibroid uterus, generated from preoperative imaging with each fibroid color coded, are overlaid on the actual operative image during robotic surgery. The 3D images guide the surgeon to each fibroid, and as fibroids are removed from the actual uterus, they disappear from the 3D image. The belief is that this can facilitate efficient removal of all the fibroids without reliance on haptic feedback, which is limited in traditional laparoscopy and nonexistent in robotic surgery. The current limitation of this specific technology is that it still requires a second person to manipulate the 3D image overlay.

Many companies are working on enhanced graphics and better real-time analytics to provide in-depth insights as the surgeon progresses through the surgery. Further refining this approach is the introduction of machine learning, deep learning, and computer vision to help create semiautonomous actions that can guide a surgeon as they operate on complex pathology. Training deep learning models requires an enormous number of images to help convolutional neural networks decipher the data (3). These data are now being stored in many institutions as surgeons capture videos of their surgeries, which are saved in the medical record system. These videos must be de-identified, annotated, and eventually shared with the AI research community if image-based surgery is to advance.

What can this information do? Imagine that you are operating on a patient with stage IV endometriosis. As you begin to enter the retroperitoneum on the left side, an image overlay starts to delineate the anatomic structures for you: the ureter is highlighted, along with the internal iliac artery, the uterine artery, and the obturator nerve. Warnings sound as you come too close to the rectum adhered in the cul-de-sac. During robotic surgery, the robot attenuates your movements when needed and offers analytics and advice on how to approach certain pathology. The possibilities are endless.

Robotic autonomy has always been a driving factor in surgical robot development. Even with the rapid development of AI, it is unlikely that we will have a MedPod 720i in our surgical suites, performing surgeries, oocyte retrievals, or embryo transfers, any time soon. However, embracing the technology highlighted in videos such as this is how the field will progress toward the inevitable intertwining of image-guided surgery, AI, and minimally invasive surgery.

References
1. Pugliese L, Marconi S, Negrello E, Mauri V, Peri A, Gallo V, et al. The clinical use of 3D printing in surgery. Updates Surg 2018;70:381–8.
2. Mercorio A, Zizolfi B, Barbuto S, Danzi R, Di Spiezio Sardo A, Moawad G, et al. 3D imaging reconstruction and laparoscopic robotic surgery: a winning combination for a complex case of multiple myomectomy. Fertil Steril 2023;120:202–4.
3. Gumbs AA, Frigerio I, Spolverato G, Croner R, Illanes A, Chouillard E, et al. Artificial intelligence surgery: how do we get to autonomous actions in surgery? Sensors (Basel) 2021;21:1–18.