Segmentation Guided Crossing Dual Decoding Generative Adversarial Network for Synthesizing Contrast-Enhanced Computed Tomography Images

IEEE J Biomed Health Inform. 2024 May 20:PP. doi: 10.1109/JBHI.2024.3403199. Online ahead of print.

Abstract

Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. Using generative models to synthesize CE-CT images from non-contrast CT images offers a promising alternative. However, existing image synthesis models tend to overlook critical regions, which inevitably reduces their effectiveness in downstream tasks. To overcome this challenge, we propose an innovative CE-CT image synthesis model called the Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN employs a crossing dual-decoding generator comprising an attention decoder and an improved transformation decoder. The attention decoder highlights critical regions within the abdominal cavity, while the improved transformation decoder synthesizes the CE-CT images. The two decoders are interconnected through a crossing technique so that each enhances the other's capabilities. Furthermore, we employ a multi-task learning strategy to guide the generator to focus on the lesion area. We evaluate the proposed SGCDD-GAN on an in-house CE-CT dataset. In both CE-CT image synthesis tasks, namely synthesizing arterial (ART) phase images and portal venous (PV) phase images, the SGCDD-GAN achieves superior SSIM, PSNR, MSE, and PCC scores over both the entire image and the liver region. Moreover, CE-CT images synthesized by the SGCDD-GAN achieve accuracy rates of 82.68%, 94.11%, and 94.11% in a deep-learning-based FLL classification task, along with a pilot assessment conducted by two radiologists.
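To make the crossing dual-decoding idea concrete, below is a minimal PyTorch sketch, not the authors' released code. It assumes a U-Net-style shared encoder feeding two decoders that exchange features at every scale: the attention decoder's features are concatenated into the transformation decoder (and vice versa), and the final attention map re-weights the synthesis features. All names (SharedEncoder, CrossingDualDecoderG), channel sizes, and the concatenation-based fusion scheme are illustrative assumptions not specified in the abstract.

```python
# Hypothetical sketch of a crossing dual-decoding generator; layer sizes and
# the fusion scheme are assumptions, not details confirmed by the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class SharedEncoder(nn.Module):
    """U-Net-style encoder shared by both decoders."""
    def __init__(self, in_ch=1, chs=(32, 64, 128)):
        super().__init__()
        self.blocks = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.blocks.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skips = []
        for blk in self.blocks:
            x = blk(x)
            skips.append(x)
            x = self.pool(x)
        return x, skips

class CrossingDualDecoderG(nn.Module):
    """Shared encoder plus two decoders that cross-feed features at each scale."""
    def __init__(self, chs=(32, 64, 128)):
        super().__init__()
        self.enc = SharedEncoder(1, chs)
        self.bottleneck = conv_block(chs[-1], chs[-1] * 2)
        self.up_t, self.up_a = nn.ModuleList(), nn.ModuleList()  # transformation / attention
        self.dec_t, self.dec_a = nn.ModuleList(), nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.up_t.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.up_a.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            # Each decoder sees its own upsampled features, the encoder skip,
            # and the other decoder's features at the same scale (the "crossing").
            self.dec_t.append(conv_block(c * 3, c))
            self.dec_a.append(conv_block(c * 3, c))
            prev = c
        self.out_img = nn.Conv2d(chs[0], 1, 1)   # synthesized CE-CT slice
        self.out_att = nn.Conv2d(chs[0], 1, 1)   # attention / saliency map

    def forward(self, x):
        z, skips = self.enc(x)
        t = a = self.bottleneck(z)
        for up_t, up_a, dec_t, dec_a, skip in zip(
                self.up_t, self.up_a, self.dec_t, self.dec_a, reversed(skips)):
            t, a = up_t(t), up_a(a)
            t_new = dec_t(torch.cat([t, skip, a], dim=1))  # synthesis branch uses attention features
            a_new = dec_a(torch.cat([a, skip, t], dim=1))  # attention branch uses synthesis features
            t, a = t_new, a_new
        att = torch.sigmoid(self.out_att(a))
        img = torch.tanh(self.out_img(t * (1 + att)))      # attention map re-weights synthesis
        return img, att

if __name__ == "__main__":
    g = CrossingDualDecoderG()
    nc_ct = torch.randn(1, 1, 128, 128)           # one non-contrast CT slice
    fake_cect, attention = g(nc_ct)
    print(fake_cect.shape, attention.shape)        # both torch.Size([1, 1, 128, 128])
```

Returning the attention map alongside the synthesized image is what makes the multi-task setup straightforward in this sketch: the image output can be trained with adversarial plus reconstruction losses against the real ART/PV phase, while the attention output can be supervised with a segmentation-style loss on lesion or liver masks, which is one plausible reading of the segmentation guidance described in the abstract.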