Generation of synthetic PET images of synaptic density and amyloid from 18F-FDG images using deep learning

Med Phys. 2021 Sep;48(9):5115-5129. doi: 10.1002/mp.15073. Epub 2021 Jul 27.

Abstract

Purpose: Positron emission tomography (PET) imaging with various tracers is increasingly used in Alzheimer's disease (AD) studies. However, access to PET scans using new or less-available tracers, which often require sophisticated synthesis and short half-life isotopes, may be very limited. It is therefore of great interest in AD research to assess the feasibility of generating synthetic PET images of less-available tracers from the PET image of a more common tracer, in particular 18F-FDG.

Methods: We implemented advanced deep learning methods using the U-Net model to predict 11C-UCB-J PET images of synaptic vesicle protein 2A (SV2A), a surrogate of synaptic density, from 18F-FDG PET data. Dynamic 18F-FDG and 11C-UCB-J scans were performed in 21 participants with normal cognition (CN) and 33 participants with Alzheimer's disease (AD). The cerebellum was used as the reference region for both tracers. For 11C-UCB-J image prediction, four network models were trained and tested: 1) 18F-FDG SUV ratio (SUVR) to 11C-UCB-J SUVR, 2) 18F-FDG Ki ratio to 11C-UCB-J SUVR, 3) 18F-FDG SUVR to 11C-UCB-J distribution volume ratio (DVR), and 4) 18F-FDG Ki ratio to 11C-UCB-J DVR. The normalized root mean square error (NRMSE), structural similarity index (SSIM), and Pearson's correlation coefficient were calculated to evaluate overall image prediction accuracy. Mean bias across various brain ROIs and correlation plots between predicted and true images were used to assess ROI-based prediction accuracy. Following a similar training and evaluation strategy, an 18F-FDG SUVR to 11C-PiB SUVR network was also trained and tested for 11C-PiB static image prediction.
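The three image-level metrics named above can be sketched in Python using NumPy, SciPy, and scikit-image. The arrays here are random stand-ins for true and predicted SUVR volumes, and the NRMSE normalization by the true image's dynamic range is one common convention (the abstract does not specify which normalization was used):

```python
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
# Stand-in volumes: a "true" SUVR image and a noisy "prediction"
true_img = rng.random((64, 64, 64))
pred_img = true_img + 0.05 * rng.standard_normal((64, 64, 64))

# NRMSE: root mean square error normalized by the true image's range
rmse = np.sqrt(np.mean((pred_img - true_img) ** 2))
nrmse = rmse / (true_img.max() - true_img.min())

# SSIM computed over the full 3D volume
ssim = structural_similarity(
    true_img, pred_img, data_range=true_img.max() - true_img.min()
)

# Pearson's correlation between flattened voxel values
r, _ = pearsonr(true_img.ravel(), pred_img.ravel())
```

In practice these metrics would typically be restricted to a brain mask rather than the whole array, so that background voxels do not inflate the correlation.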

Results: All four network models produced satisfactory 11C-UCB-J static and parametric images. For 11C-UCB-J SUVR prediction, the mean ROI bias was -0.3% ± 7.4% for the AD group and -0.5% ± 7.3% for the CN group with 18F-FDG SUVR as the input, and -0.7% ± 8.1% for the AD group and -1.3% ± 7.0% for the CN group with the 18F-FDG Ki ratio as the input. For 11C-UCB-J DVR prediction, the mean ROI bias was -1.3% ± 7.5% for the AD group and -2.0% ± 6.9% for the CN group with 18F-FDG SUVR as the input, and -0.7% ± 9.0% for the AD group and -1.7% ± 7.8% for the CN group with the 18F-FDG Ki ratio as the input. For 11C-PiB SUVR image prediction, which appears to be a more challenging task, incorporating additional diagnostic information into the network was needed to keep the bias below 5% for most ROIs.
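The ROI bias figures above follow the usual definition of percent bias of the mean value within a region. A minimal sketch, with a hypothetical cubic ROI mask and synthetic volumes standing in for real data:

```python
import numpy as np

def roi_percent_bias(pred_img, true_img, roi_mask):
    """Percent bias of the mean predicted value within one ROI."""
    pred_mean = pred_img[roi_mask].mean()
    true_mean = true_img[roi_mask].mean()
    return 100.0 * (pred_mean - true_mean) / true_mean

rng = np.random.default_rng(1)
true_img = rng.uniform(0.5, 2.0, size=(32, 32, 32))  # stand-in SUVR volume
pred_img = true_img * (1 + 0.02 * rng.standard_normal(true_img.shape))

# Hypothetical ROI: an 8x8x8 cube; real ROIs would come from an atlas
mask = np.zeros(true_img.shape, dtype=bool)
mask[8:16, 8:16, 8:16] = True

bias = roi_percent_bias(pred_img, true_img, mask)
```

The reported "mean ± SD" values would then be the mean and standard deviation of such per-ROI (or per-subject, per-ROI) biases within each diagnostic group.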

Conclusions: It is feasible to use 3D U-Net-based methods to generate synthetic 11C-UCB-J PET images from 18F-FDG images with reasonable prediction accuracy. It is also possible to predict 11C-PiB SUVR images from 18F-FDG images, though the incorporation of additional non-imaging information is needed.

Keywords: brain PET; deep learning; image processing; multi-tracer; parametric image.

MeSH terms

  • Alzheimer Disease* / diagnostic imaging
  • Aniline Compounds
  • Brain
  • Deep Learning*
  • Fluorodeoxyglucose F18
  • Humans
  • Positron-Emission Tomography

Substances

  • Aniline Compounds
  • Fluorodeoxyglucose F18