Subject motion in whole-body dynamic PET introduces inter-frame mismatch and seriously degrades parametric imaging. Traditional non-rigid registration methods are generally computationally intensive and time-consuming. Deep learning approaches are promising for achieving high accuracy at fast speed, but have not yet been investigated with consideration of tracer distribution changes or in the whole-body scope. In this work, we developed an unsupervised, automatic, deep learning-based framework to correct inter-frame body motion. The motion estimation network is a convolutional neural network combined with a convolutional long short-term memory layer, fully utilizing dynamic temporal features and spatial information. Our dataset contains 27 subjects, each undergoing a 90-min whole-body dynamic FDG PET scan. In motion simulation studies and a 9-fold cross-validation on the human-subject dataset, compared with both traditional and deep learning baselines, the proposed network achieved the lowest motion prediction error, produced superior qualitative and quantitative spatial alignment between parametric Ki and Vb images, and significantly reduced parametric fitting error. We also showed that the proposed motion correction method can benefit downstream analysis of the estimated parametric images, improving the ability to distinguish malignant from benign hypermetabolic regions of interest. Once trained, motion estimation inference with the proposed network was around 460 times faster than with the conventional registration baseline, showing its potential for straightforward adoption in clinical settings.
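The abstract does not specify implementation details. As a rough illustration of the described architecture, a convolutional motion estimation network combined with a convolutional long short-term memory (ConvLSTM) layer that carries temporal context across dynamic frames, a minimal PyTorch sketch follows. All names, layer sizes, the 2-D setting (the paper operates on 3-D whole-body volumes), and the dense displacement-field output are assumptions for brevity, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell: the gates are computed with convolutions,
    so the recurrent state keeps spatial structure (hypothetical sizes)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates (input, forget, output, candidate).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class MotionEstimator(nn.Module):
    """Sketch: CNN encoder plus a ConvLSTM over the dynamic frames,
    regressing a dense displacement field for each frame relative to
    a common reference frame (an assumed output parameterization)."""
    def __init__(self, hid_ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, hid_ch, 3, padding=1), nn.ReLU(),
        )
        self.lstm = ConvLSTMCell(hid_ch, hid_ch)
        self.flow = nn.Conv2d(hid_ch, 2, 3, padding=1)  # 2-D flow for brevity

    def forward(self, frames, reference):
        # frames: (T, B, 1, H, W); reference: (B, 1, H, W)
        B, _, H, W = reference.shape
        h = frames.new_zeros(B, self.lstm.hid_ch, H, W)
        c = torch.zeros_like(h)
        flows = []
        for x in frames:  # iterate over time, carrying temporal context in (h, c)
            feat = self.encode(torch.cat([x, reference], dim=1))
            h, c = self.lstm(feat, (h, c))
            flows.append(self.flow(h))
        return torch.stack(flows)  # (T, B, 2, H, W)

net = MotionEstimator()
frames = torch.randn(10, 1, 1, 64, 64)   # 10 dynamic frames, batch of 1
reference = torch.randn(1, 1, 64, 64)
print(net(frames, reference).shape)      # torch.Size([10, 1, 2, 64, 64])
```

Consistent with the unsupervised setting described, such a network would typically be trained by warping each frame with its predicted field (e.g., via grid sampling) and minimizing an image-similarity loss against the reference plus a smoothness penalty on the field; these loss choices are likewise assumptions, not details taken from the abstract.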
Keywords: Convolutional network; Long short-term memory; Motion correction; Parametric imaging; Whole-body dynamic PET.
Copyright © 2022. Published by Elsevier B.V.