Deep learning-based projection synthesis for low-dose cone-beam computed tomography imaging in image-guided radiotherapy

Quant Imaging Med Surg. 2024 Jan 3;14(1):231-250. doi: 10.21037/qims-23-759. Epub 2023 Nov 24.

Abstract

Background: The imaging dose delivered by cone-beam computed tomography (CBCT) in image-guided radiotherapy (IGRT) can adversely affect patient health. To improve the quality of sparse-view, low-dose CBCT images, a projection synthesis convolutional neural network (SynCNN) model is proposed.

Methods: This retrospective, single-center study included 223 patients diagnosed with brain tumours at Beijing Cancer Hospital. The proposed SynCNN model estimated two pairs of orthogonal, direction-separable spatial kernels and used them, via local convolution, to synthesize the missing projection between two neighboring sparse-view projections. The SynCNN model was trained on 150 real patients to learn patterns for inter-view projection synthesis. CBCT data from 30 real patients were used to validate the SynCNN, while data from a phantom and 43 real patients were used to test it externally. Sparse-view projection datasets with 1/2, 1/4, and 1/8 of the original sampling rate were simulated, and the corresponding full-view projection datasets were restored with the SynCNN model. Tomographic images were then reconstructed using the Feldkamp-Davis-Kress (FDK) algorithm. The root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) metrics were measured in both the projection and image domains. Five experts were invited to blindly grade image quality for 40 randomly selected evaluation groups using a four-level rubric, where a score of 2 or higher was considered acceptable. The running time of the SynCNN model was recorded. The SynCNN model was directly compared with three other methods on the 1/4 sparse-view reconstructions.
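The core synthesis step described above, estimating per-pixel separable kernels and applying them to the two neighboring projections via local convolution, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, array shapes, and the assumption that each 2-D kernel is the outer product of a predicted vertical and horizontal 1-D kernel are ours; in the actual model the kernels would be predicted by the trained network.

```python
import numpy as np

def synthesize_midview(p0, p1, kv0, kh0, kv1, kh1):
    """Synthesize the in-between projection from two neighbors using
    per-pixel separable kernels (local/adaptive convolution).

    p0, p1   : (H, W) neighboring sparse-view projections
    kv*, kh* : (H, W, K) per-pixel vertical / horizontal 1-D kernels
               (one pair per input projection; hypothetical layout)
    """
    H, W, K = kv0.shape
    r = K // 2
    out = np.zeros((H, W))
    # edge-pad so the K x K neighborhood is defined at the borders
    p0p = np.pad(p0, r, mode="edge")
    p1p = np.pad(p1, r, mode="edge")
    for y in range(H):
        for x in range(W):
            patch0 = p0p[y:y + K, x:x + K]
            patch1 = p1p[y:y + K, x:x + K]
            # direction-separable kernel: outer product of the two 1-D kernels
            k0 = np.outer(kv0[y, x], kh0[y, x])
            k1 = np.outer(kv1[y, x], kh1[y, x])
            out[y, x] = (patch0 * k0).sum() + (patch1 * k1).sum()
    return out
```

With kernels that reduce to a centered impulse weighted by 0.5 for each input, the synthesized view is simply the average of the two neighbors, which is a useful sanity check; the learned kernels instead adapt to local motion and intensity patterns between views.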

Results: The phantom and patient studies showed that the missing projections were accurately synthesized. In the image domain, for the phantom study, images reconstructed with SynCNN synthesis exhibited significantly improved quality over those reconstructed from sparse-view projections, with lower RMSE and higher PSNR and SSIM values. For the patient study, between the results with and without SynCNN synthesis, the average RMSE decreased by 3.4×10⁻⁴, 10.3×10⁻⁴, and 21.7×10⁻⁴; the average PSNR increased by 3.4, 6.6, and 9.4 dB; and the average SSIM increased by 5.2×10⁻², 18.9×10⁻², and 33.9×10⁻², for the 1/2, 1/4, and 1/8 sparse-view reconstructions, respectively. In the expert subjective evaluation, both the median scores and acceptance rates of the images with SynCNN synthesis were higher than those of images reconstructed from sparse-view projections. The model took less than 0.01 s to synthesize an inter-view projection. Compared with the three other methods, the SynCNN model obtained the best scores on all three metrics in both domains.
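For reference, the RMSE and PSNR figures above can be computed as sketched below. This is a generic illustration, not the authors' evaluation code; SSIM involves local luminance, contrast, and structure statistics and is typically computed with an established implementation (e.g. scikit-image's `structural_similarity`), so it is omitted here.

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between a reference and a test image."""
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB, given the image dynamic range."""
    e = rmse(ref, img)
    if e == 0.0:
        return float("inf")
    return float(20.0 * np.log10(data_range / e))
```

Note that PSNR depends on the assumed dynamic range (`data_range`), so reported dB values are only comparable when the same normalization is used for all reconstructions.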

Conclusions: The proposed SynCNN model effectively improves the quality of sparse-view CBCT images at a low time cost. With the SynCNN model, the CBCT imaging dose in IGRT could potentially be reduced.

Keywords: Cone-beam computed tomography (CBCT); deep learning (DL); low-dose; projection synthesis; sparse-view.