Contrastive learning based method for X-ray and CT registration under surgical equipment occlusion

Comput Biol Med. 2024 Sep;180:108946. doi: 10.1016/j.compbiomed.2024.108946. Epub 2024 Aug 5.

Abstract

Deep learning-based 3D/2D registration techniques for surgical navigation have achieved excellent results. However, these methods suffer poor accuracy when surgical equipment occludes the X-ray image. We designed a contrastive learning method that treats occluded and unoccluded X-rays as positive samples, maximizing the similarity between positive pairs to reduce interference from occlusion. The registration model combines a Transformer with residual connections (ResTrans), which enhances its long-sequence mapping capability; together with the contrastive learning strategy, ResTrans can adaptively retrieve valid features over the global range to maintain performance under occlusion. Further, a learning-based region-of-interest (RoI) fine-tuning method is designed to refine residual misalignment. We conducted experiments on occluded X-rays containing different surgical devices. The results show that ResTrans achieves a mean target registration error (mTRE) of 3.25 mm with a running time of 1.59 s. Compared with state-of-the-art (SOTA) 3D/2D registration methods, our method offers better performance on occluded 3D/2D registration tasks.
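The abstract's core idea, treating an occluded X-ray and its unoccluded counterpart as a positive pair and maximizing their embedding similarity, can be illustrated with an InfoNCE-style contrastive loss. This is a minimal sketch only: the paper's exact loss, encoder, and hyperparameters (e.g., the `temperature` value) are not given in the abstract, so every name and choice below is an assumption.

```python
import numpy as np

def info_nce_loss(z_occ, z_clean, temperature=0.1):
    """InfoNCE-style contrastive loss (hypothetical sketch, not the paper's
    exact formulation): each occluded embedding's positive is its own
    unoccluded counterpart; other batch samples act as negatives."""
    # L2-normalize so dot products become cosine similarities
    z_occ = z_occ / np.linalg.norm(z_occ, axis=1, keepdims=True)
    z_clean = z_clean / np.linalg.norm(z_clean, axis=1, keepdims=True)
    sim = z_occ @ z_clean.T / temperature          # (B, B) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal (sample i pairs with sample i)
    return -np.mean(np.diag(log_prob))

# Toy check: embeddings of occluded images that stay close to their
# unoccluded counterparts yield a lower loss than unrelated embeddings.
rng = np.random.default_rng(0)
clean = rng.normal(size=(4, 8))
occluded = clean + 0.05 * rng.normal(size=(4, 8))  # mild occlusion perturbation
loss_paired = info_nce_loss(occluded, clean)
loss_random = info_nce_loss(rng.normal(size=(4, 8)), clean)
```

Minimizing such a loss pushes the encoder to produce occlusion-invariant features, which is the property the abstract credits for ResTrans's robustness when surgical equipment blocks parts of the X-ray.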

Keywords: 3D/2D registration; Contrastive learning; RoI fine-tune; Surgical imaging; Transformer.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Deep Learning
  • Humans
  • Imaging, Three-Dimensional / methods
  • Surgery, Computer-Assisted / methods
  • Tomography, X-Ray Computed* / methods
  • X-Rays