About this work package
Visualisation interface for intraoperative guidance, showing direct video feed from the fetoscope alongside the extended field-of-view mosaic and pre-operative MRI images.
This work package focuses on providing enhanced real-time feedback during fetal therapy by extending the capabilities of fetoscopic imaging and combining direct vision with preoperatively acquired information on maternal and other relevant structures. Achieving this requires accurate optical distortion calibration and compensation to mitigate effects such as radial distortion, which not only hampers accurate imaging with existing endoscopic technology during fetal surgery, but also makes the acquired images less suitable for valuable image computing applications such as real-time mosaicking. A range of computer vision techniques can then be used to expand the available field of view by stitching adjacent image frames together into a larger mosaic. This work package explores different solutions for reliable fetoscope calibration and real-time mosaicking.
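As an illustration of the stitching step, the sketch below estimates the planar homography relating two overlapping frames from point correspondences using the direct linear transform (DLT). The correspondences and the ground-truth homography here are synthetic stand-ins for matched fetoscopic image features; a real pipeline would also need robust outlier rejection.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (DLT, >= 4 points)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null space of the stacked constraints gives H up to scale.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp(H, pts):
    """Apply H to (N, 2) points, including the projective divide."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Synthetic check: recover a known homography from four correspondences.
H_true = np.array([[1.1, 0.02, 5.0], [-0.03, 0.95, -2.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
dst = warp(H_true, src)
H_est = fit_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-5))
```

Once the homography between two frames is known, one frame can be warped into the other's coordinate system and the pair blended into a single larger image, which is the basic building block of a mosaic.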
Work package tasks
Imaging systems and probe calibration
Fetoscopy is a minimally invasive procedure that allows observation and intervention within the amniotic sac during pregnancy. The fetoscope is inserted through the uterus and is immersed in amniotic fluid. The fluid strongly influences the image formation process because of refraction at the interface of the fetoscopic lens, which depends on the optical properties of the amniotic medium. Accurate calibration is critical for vision-based methods that provide image-guided surgery and real-time information from the surgical site. Calibration consists of recording images of a target with a known geometric pattern in order to estimate the optical properties of the camera. We have explored two ways to achieve effective pre-calibration of fetoscopes. The first uses a computer vision method to calculate fluid-immersed camera parameters, compensating for the optical properties of amniotic fluid as well as radial distortion effects based on a dry calibration. The second is a calibration target for use with fluid-immersed endoscopes that allows for sterility-preserving optical distortion calibration of endoscopes within a few minutes. The target can be used in combination with endocal, a cross-platform, lightweight, compact GUI application for optical distortion calibration and display of the live distortion-corrected endoscopic video stream in real time.
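To illustrate the kind of distortion compensation involved, the sketch below applies a two-coefficient radial (Brown) distortion model to normalised image points and inverts it by fixed-point iteration, which is a common way to undistort points once the coefficients have been calibrated. The coefficient values are hypothetical placeholders, not results from an actual fetoscope calibration.

```python
import numpy as np

# Hypothetical radial distortion coefficients (k1, k2) of the Brown model;
# real values would come from a calibration of the fetoscope in fluid.
K1, K2 = -0.28, 0.09

def distort(pts):
    """Apply radial distortion to normalised image points of shape (N, 2)."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1.0 + K1 * r2 + K2 * r2**2)

def undistort(pts, iters=30):
    """Invert the radial model by fixed-point iteration on the radius."""
    und = pts.copy()
    for _ in range(iters):
        r2 = np.sum(und**2, axis=1, keepdims=True)
        und = pts / (1.0 + K1 * r2 + K2 * r2**2)
    return und

pts = np.array([[0.1, 0.2], [-0.3, 0.4], [0.25, -0.15]])
recovered = undistort(distort(pts))
print(np.allclose(recovered, pts, atol=1e-8))  # round-trip recovers the points
```

The same inversion, applied per pixel, is what produces a distortion-corrected video stream such as the one displayed by endocal.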
Fetoscopic surgical vision
This task focuses on extracting higher-level information from the surgical site using the fetoscopic video. By applying methods for detection, tracking and structure reconstruction, we build mosaics from fetoscopic videos to expand the field of view while coping with tissue deformation, low image quality, sudden jerky movements of the scope and devices, and physiological motion of the anatomy. Automatic detection of placental wall vessels and other visible structures and instruments, together with predictive tracking algorithms, is used to enhance the visualisation of important structures during the procedure. Methods from this task also provide control signals for automated control strategies for the robotic surgical tools and scope.
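The core bookkeeping behind sequential mosaicking can be sketched as chaining pairwise frame-to-frame transforms into a common reference frame. The pairwise homographies below are hypothetical pure translations standing in for transforms estimated from matched image features; real sequences also need drift correction, which is the focus of the fusion and long-range registration work cited below.

```python
import numpy as np

# Hypothetical per-pair homographies: H_pair[i] maps points in frame i+1
# into frame i. In practice these would be estimated from matched features
# between consecutive fetoscopic frames.
H_pair = [
    np.array([[1.0, 0.0, 12.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]]),
    np.array([[1.0, 0.0,  9.0], [0.0, 1.0,  4.0], [0.0, 0.0, 1.0]]),
    np.array([[1.0, 0.0, -5.0], [0.0, 1.0,  7.0], [0.0, 0.0, 1.0]]),
]

# Chain them so H_to_ref[k] maps frame k into the first frame's coordinates:
# H_to_ref[k] = H_pair[0] @ H_pair[1] @ ... @ H_pair[k-1].
H_to_ref = [np.eye(3)]
for H in H_pair:
    H_to_ref.append(H_to_ref[-1] @ H)

# A point seen at (50, 40) in frame 3 lands on the mosaic canvas at the
# accumulated offset (16, 8), i.e. at (66, 48).
p = H_to_ref[3] @ np.array([50.0, 40.0, 1.0])
print(p[:2] / p[2])
```

Because each new transform is multiplied onto the previous chain, small per-pair estimation errors accumulate over long sequences, which is why drift-free mosaicking requires either long-range frame matching or fusion with an external sensor.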
- The paper titled ‘Deep Learning-based Fetoscopic Mosaicking for Field-of-View Expansion’, published in the MICCAI2019 IJCARS Special Issue, received the best paper award at the MICCAI2020 Conference on 7 October 2020. This paper reports a novel approach to fetoscopic video mosaicking developed by the GIFT-Surg team. Watch a video presentation of the research and read the news story on the UCL website.
- On 4 August 2020, Dr Sophia Bano was an invited speaker at the Artificial Intelligence in Surgery – Wellcome / EPSRC Centre for Interventional and Surgical Sciences UCL mini-symposium. Her talk was entitled ‘Mosaicking using Deep Learning in Fetoscopic Surgery’. Read about the event.
- The fetoscopy video dataset was released along with the paper titled ‘Deep Placental Vessel Segmentation for Fetoscopic Mosaicking’ published at MICCAI2020. This is the first publicly available dataset of in vivo fetoscopic videos with placental vessel annotations, acquired through the collaboration between GIFT-Surg clinical investigators at partner hospitals. Watch a video presentation of the research and download the dataset from the UCL website.
Deep learning-based fetoscopic mosaicking for field-of-view expansion. Bano, S., Vasconcelos, F., Tella-Amo, M., Dwyer, G., Gruijthuijsen, C., Vander Poorten, E., Vercauteren, T., Ourselin, S., Deprest, J. and Stoyanov, D. (2020). International Journal of Computer Assisted Radiology and Surgery, pp. 1–10. Video presentation.
FetNet: a recurrent convolutional network for occlusion identification in fetoscopic videos. Bano, S., Vasconcelos, F., Vander Poorten, E., Vercauteren, T., Ourselin, S., Deprest, J. and Stoyanov, D. (2020). International Journal of Computer Assisted Radiology and Surgery. Video presentation.
Refractive Two-View Reconstruction for Underwater 3D Vision. Chadebecq, F., Vasconcelos, F., Lacher, R., Maneas, E., Desjardins, A., Ourselin, S., Vercauteren, T. and Stoyanov, D. (2019). International Journal of Computer Vision, pp.1-17. doi: 10.1007/s11263-019-01218-9
Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy. Tella-Amo, M., Peter, L., Shakir, D. I., Deprest, J., Stoyanov, D., Iglesias, J. E., Vercauteren, T., Ourselin, S. (2018). Journal of Medical Imaging, 5(2) 021217. doi: 10.1117/1.JMI.5.2.021217
Retrieval and registration of long-range overlapping frames for scalable mosaicking of in vivo fetoscopy. Peter, L., Tella-Amo, M., Shakir, D.I. et al. (2018). International Journal of Computer Assisted Radiology and Surgery, 13, 713–720. doi: 10.1007/s11548-018-1728-4