Med Image Anal. 2022 Apr;77:102372.
doi: 10.1016/j.media.2022.102372. Epub 2022 Jan 29.

Novel-view X-ray projection synthesis through geometry-integrated deep learning

Liyue Shen et al. Med Image Anal. 2022 Apr.

Abstract

X-ray imaging is a widely used approach to viewing the internal structure of a subject for clinical diagnosis, image-guided intervention, and decision-making. X-ray projections acquired at different view angles provide complementary information about the patient's anatomy and are required for stereoscopic or volumetric imaging of the subject. In practice, obtaining multiple-view projections inevitably increases the radiation dose and complicates the clinical workflow. Here we investigate a strategy for obtaining the X-ray projection image at a novel view angle from a given projection image at a specific view angle, alleviating the need for an actual projection measurement. Specifically, a Deep Learning-based Geometry-Integrated Projection Synthesis (DL-GIPS) framework is proposed for the generation of novel-view X-ray projections. The deep learning model extracts geometry and texture features from a source-view projection and then applies a geometry transformation to the geometry features to accommodate the change of view angle. In the final stage, the X-ray projection at the target view is synthesized from the transformed geometry features and the shared texture features via an image generator. The feasibility and potential impact of the proposed DL-GIPS model are demonstrated using lung imaging cases. The proposed strategy generalizes to synthesizing multiple projections from multiple input views and potentially provides a new paradigm for stereoscopic and volumetric imaging with substantially reduced data-acquisition effort.
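As a rough sketch of the pipeline described above (this is a hypothetical stand-in, not the authors' network; every function name below is invented for illustration), the synthesis can be viewed as a composition of feature extraction, geometry transformation, and generation:

```python
# Conceptual sketch of the DL-GIPS stages described in the abstract.
# Each function is a placeholder for a learned network component.

def encode_geometry(projection):
    # Stand-in for the geometry feature encoder (a CNN in practice).
    return {"kind": "geometry", "data": list(projection)}

def encode_texture(projection):
    # Stand-in for the texture feature encoder; these features are
    # shared between the source and target views.
    return {"kind": "texture", "data": list(projection)}

def transform_geometry(geom, source_angle, target_angle):
    # Stand-in for backward projection -> 3D feature refinement ->
    # forward projection at the target view angle.
    return {"kind": "geometry", "data": list(geom["data"]),
            "angle": target_angle}

def generate_projection(geom, tex):
    # Stand-in for the image generator fusing both feature streams.
    return [g + t for g, t in zip(geom["data"], tex["data"])]

def dl_gips(source_projection, source_angle, target_angle):
    geom = encode_geometry(source_projection)
    tex = encode_texture(source_projection)
    geom_t = transform_geometry(geom, source_angle, target_angle)
    return generate_projection(geom_t, tex)
```

With these toy stand-ins, `dl_gips([1.0, 2.0], 0, 90)` returns `[2.0, 4.0]`; in the actual framework each stage is a trained network and the inputs are 2D projection images rather than short lists.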

Keywords: Geometry-integrated deep learning; Projection view synthesis; X-ray imaging.


Conflict of interest statement

Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Fig. 1.
Sketch of the X-ray projection imaging procedure. The X-ray beam penetrates the patient's body and projects onto the detector plane. When the X-ray source is located at different positions, projections at different view angles are obtained. The projections along the anterior-posterior (AP) and lateral (LT) directions are shown in the figure.
Fig. 2.
Illustration of the proposed Deep Learning-based Geometry-Integrated Projection Synthesis (DL-GIPS) framework. The pipeline contains texture and geometry feature encoders, a projection transformation module, and an image generator. The projection transformation comprises the geometric operations of backward and forward projection, based on the X-ray imaging physics model, and a 3D feature refinement model.
Fig. 3.
Illustration of the geometry transformation with backward projection (blue) and forward projection (purple). The back projector maps the pixel intensities of the source-view image back to the corresponding voxels in the 3D volume according to the cone-beam geometry of the physical model. When the X-ray source rotates to the target view angle, the forward projection operator integrates along each projection line and projects onto the detector plane.
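The backward/forward projection pair in this caption can be illustrated with a deliberately simplified toy: parallel-beam geometry, axis-aligned views, and a uniform volume, whereas the paper operates on learned feature volumes under cone-beam geometry. The volume size `N` and all helper names below are assumptions made purely for illustration.

```python
# Toy parallel-beam illustration of Fig. 3: forward projection
# integrates along rays; back projection smears detector values
# back along the same rays (unfiltered).

N = 4  # side length of the toy 3D volume (illustrative choice)

def forward_project(volume, axis):
    """Sum voxel values along one axis, a stand-in for ray integration."""
    proj = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            for k in range(N):
                if axis == 0:          # "AP" view: integrate over i
                    proj[j][k] += volume[i][j][k]
                else:                  # "lateral" view: integrate over j
                    proj[i][k] += volume[i][j][k]
    return proj

def back_project(proj, axis):
    """Distribute each detector pixel uniformly back along its ray."""
    vol = [[[0.0] * N for _ in range(N)] for _ in range(N)]
    for i in range(N):
        for j in range(N):
            for k in range(N):
                vol[i][j][k] = (proj[j][k] if axis == 0 else proj[i][k]) / N
    return vol

# Uniform test volume: every ray integral equals N.
volume = [[[1.0] * N for _ in range(N)] for _ in range(N)]
ap_view = forward_project(volume, 0)             # source-view projection
vol_estimate = back_project(ap_view, 0)          # backward projection
lateral_view = forward_project(vol_estimate, 1)  # forward projection at new view
```

For this uniform volume the round trip is exact (`lateral_view[0][0]` equals `4.0`); for general volumes the unfiltered round trip blurs structure, which is why DL-GIPS refines the back-projected 3D features with a learned network before re-projecting.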
Fig. 4.
Results of synthesizing the lateral projection from the AP projection. Each row shows the results for one testing sample. Regions of interest are zoomed in for a clearer comparison of structural details. The columns are the input projection, the UNet-synthesized projection, the ReMIC-synthesized projection, the DL-GIPS-synthesized projection, and the ground-truth projection, respectively. (Red arrows highlight differences among the images.)
Fig. 5.
Results of synthesizing the AP projection from the lateral projection. Each row shows the results for one testing sample. Regions of interest are zoomed in for a clearer comparison of structural details. The columns are the input projection, the UNet-synthesized projection, the ReMIC-synthesized projection, the DL-GIPS-synthesized projection, and the ground-truth projection, respectively. (Red arrows highlight differences among the images.)
Fig. 6.
Results of synthesizing projections at view angles of 30 and 60 degrees from the AP and lateral projections. Each row shows the results for one testing sample. Regions of interest are zoomed in for a clearer comparison of structural details. The columns are the input projections, the UNet-synthesized projections, the ReMIC-synthesized projections, the DL-GIPS-synthesized projections, and the ground-truth projections, respectively. Note that the model outputs the two target projections at 30 and 60 degrees simultaneously. (Red arrows highlight differences among the images.)
Fig. 7.
Feature map visualization of the features introduced in Fig. 2. The first and second rows show the geometry features extracted from the AP and LT views, respectively. The final row shows the texture features extracted from the source view.
Fig. 8.
Qualitative results of the ablation study for synthesizing the LT projection from the AP projection. Each row shows the results for one testing sample. The columns are the projection synthesized by DL-GIPS without adversarial loss, the DL-GIPS-synthesized projection, and the ground-truth projection.
