Work Package 4: Intraoperative data fusion

Professor Tom Vercauteren, Professor Danail Stoyanov, Professor David Hawkes, Professor Adrien Desjardins

About this work package

Figure: Visualisation interface for intraoperative guidance, showing the direct video feed from the fetoscope alongside the extended field-of-view mosaic and pre-operative MRI images.

This work package focuses on providing enhanced real-time feedback during fetal therapy by extending the capabilities of fetoscopic imaging and combining direct vision with preoperatively acquired information on maternal and other relevant structures. Accurate optical distortion calibration and compensation are essential for this: radial distortion not only hampers accurate imaging with existing endoscopic technology during fetal surgery, but also makes the acquired images less suitable for image computing applications such as real-time mosaicking. Once distortion is corrected, computer vision techniques can expand the available field of view by stitching adjacent image frames together into a larger mosaic. This work package explores different solutions for reliable fetoscope calibration and real-time mosaicking.
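To make the stitching step concrete, the sketch below chains pairwise homographies between adjacent frames and composites them onto a common canvas using OpenCV. It is a minimal illustration, not the pipeline used in this work package; the `frames` variable (a list of undistorted grayscale fetoscopic frames), the canvas size and the offset are all assumptions for the example.

```python
import cv2
import numpy as np

def pairwise_homography(prev_gray, curr_gray):
    """Estimate the homography mapping the current frame into the previous one."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# `frames` is assumed to be a list of undistorted, grayscale fetoscopic frames.
canvas_size = (2000, 2000)  # illustrative mosaic canvas (width, height)
offset = np.array([[1, 0, 800], [0, 1, 800], [0, 0, 1]], dtype=np.float64)
mosaic = cv2.warpPerspective(frames[0], offset, canvas_size)
H_to_first = np.eye(3)

for prev, curr in zip(frames[:-1], frames[1:]):
    H = pairwise_homography(prev, curr)
    H_to_first = H_to_first @ H                       # chain curr -> first frame
    warped = cv2.warpPerspective(curr, offset @ H_to_first, canvas_size)
    mosaic = np.maximum(mosaic, warped)               # naive blending, for illustration only
```

Naive sequential chaining like this accumulates drift over long sequences, which is precisely why the publications below combine visual registration with electromagnetic tracking or long-range frame retrieval.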

Work package tasks

Imaging systems and probe calibration

Fetoscopy is a minimally invasive procedure that allows observation and intervention within the amniotic sac during pregnancy. The fetoscope is inserted through the uterus and is immersed in amniotic fluid, which strongly influences image formation because of refraction at the lens-fluid interface, governed by the optical properties of the amniotic medium. Accurate calibration is therefore critical for vision-based methods that provide image guidance and real-time information from the surgical site. Calibration consists of recording images of a target with a known geometric pattern in order to estimate the optical properties of the camera. We have explored two ways of achieving effective pre-calibration of fetoscopes. The first uses a computer vision method to calculate fluid-immersed camera parameters from a dry calibration, compensating for the optical properties of amniotic fluid as well as for radial distortion. The second is a calibration target for fluid-immersed endoscopes that allows sterility-preserving optical distortion calibration within a few minutes. The target can be used in combination with endocal, a lightweight, cross-platform GUI application for optical distortion calibration and real-time display of the distortion-corrected endoscopic video stream.
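The sketch below shows the standard target-based calibration workflow that this description refers to, implemented with OpenCV. It is an illustrative example rather than the endocal implementation or the fluid-immersed method above; the checkerboard dimensions and the `calibration_frames` and `live_frame` variables are assumptions.

```python
import cv2
import numpy as np

# Checkerboard geometry (inner corners) and square size -- illustrative values only.
PATTERN = (9, 6)
SQUARE_MM = 2.0

# Known 3D corner positions of the planar target (z = 0).
object_corners = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
object_corners[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

object_points, image_points, image_size = [], [], None

# `calibration_frames` is assumed to be an iterable of frames showing the target.
for frame in calibration_frames:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    object_points.append(object_corners)
    image_points.append(corners)
    image_size = gray.shape[::-1]

# Estimate the intrinsic matrix K and the radial/tangential distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(object_points, image_points, image_size, None, None)
print(f"RMS reprojection error: {rms:.3f} px")

# `live_frame` is assumed to be a subsequently captured frame from the same scope.
undistorted = cv2.undistort(live_frame, K, dist)
```

The estimated coefficients can then be applied to every incoming frame, which is essentially what a live distortion-corrected video display does.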

Fetoscopic surgical vision
Figure 1. Mosaic obtained by probabilistic visual and electromagnetic data fusion on three different datasets: (a) synthetic, (b) phantom-based and (c) ex vivo human placenta images.

This task focuses on extracting higher-level information from the surgical site using the fetoscopic video. Applying methods for detection, tracking and structure reconstruction, we build mosaics from fetoscopic videos to expand the field of view while coping with tissue deformation, low image quality, sudden jerky movements of the scope and instruments, and physiological motion of the anatomy. Automatic detection of placental wall vessels, other visible structures and instruments, combined with predictive tracking algorithms, is used to enhance the visualisation of important structures during the procedure. Methods from this task also provide control signals for automated control strategies for the robotic surgical tools and scope.
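As a simple illustration of predictive tracking, the sketch below maintains a constant-velocity Kalman filter for a single 2D point (for example, a detected vessel landmark or instrument tip), so that a position estimate is still available when the detector fails on a poor frame. This is a generic example under those assumptions, not the tracking method used in this work package.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter for one tracked 2D point.
kf = cv2.KalmanFilter(4, 2)  # state: [x, y, vx, vy], measurement: [x, y]
dt = 1.0                     # one frame per step (illustrative)
kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                [0, 1, 0, dt],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

def track(detection):
    """Predict the point position; correct with a detection when one is available."""
    predicted = kf.predict()[:2].ravel()
    if detection is not None:                  # detector may fail on occluded or blurred frames
        kf.correct(np.float32(detection).reshape(2, 1))
    return predicted
```

The same predicted positions could, in principle, serve as the control signals mentioned above, since they remain smooth and available even across short detection dropouts.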

Success stories

Key publications

Deep learning-based fetoscopic mosaicking for field-of-view expansion. Bano, S., Vasconcelos, F., Tella-Amo, M., Dwyer, G., Gruijthuijsen, C., Vander Poorten, E., Vercauteren, T., Ourselin, S., Deprest, J. and Stoyanov, D. (2020). International Journal of Computer Assisted Radiology and Surgery, pp. 1-10. Video presentation.

FetNet: a recurrent convolutional network for occlusion identification in fetoscopic videos. Bano, S., Vasconcelos, F., Vander Poorten, E., Vercauteren, T., Ourselin, S., Deprest, J. and Stoyanov, D. (2020). International Journal of Computer Assisted Radiology and Surgery. Video presentation.

Refractive Two-View Reconstruction for Underwater 3D Vision. Chadebecq, F., Vasconcelos, F., Lacher, R., Maneas, E., Desjardins, A., Ourselin, S., Vercauteren, T. and Stoyanov, D. (2019). International Journal of Computer Vision, pp. 1-17. doi: 10.1007/s11263-019-01218-9

Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy. Tella-Amo, M., Peter, L., Shakir, D. I., Deprest, J., Stoyanov, D., Iglesias, J. E., Vercauteren, T. and Ourselin, S. (2018). Journal of Medical Imaging, 5(2), 021217. doi: 10.1117/1.JMI.5.2.021217

Retrieval and registration of long-range overlapping frames for scalable mosaicking of in vivo fetoscopy. Peter, L., Tella-Amo, M., Shakir, D. I. et al. (2018). International Journal of Computer Assisted Radiology and Surgery, 13, 713-720. doi: 10.1007/s11548-018-1728-4