Objectives: Although automated glioma segmentation holds promise for objective assessment of tumor biology and response, its routine clinical use is impaired by missing sequences, for example, due to motion artifacts. The aim of our study was to develop and validate a generative adversarial network for synthesizing missing sequences to allow for robust automated segmentation.
Materials and methods: Our model was trained on data from The Cancer Imaging Archive (n = 238 WHO grade II-IV gliomas) to synthesize either missing FLAIR, T2-weighted, T1-weighted (T1w), or contrast-enhanced T1w images from the available sequences, using a novel tumor-targeting loss to improve synthesis of tumor areas. We validated performance on a test set drawn from both the REMBRANDT repository and our local institution (n = 68 WHO grade II-IV gliomas), using not only qualitative image appearance metrics but also segmentation performance with state-of-the-art segmentation models. Segmentation of synthetic images was compared with 2 commonly used strategies for handling missing input data: entering a blank mask or copying an existing sequence. A sketch of one plausible form of the tumor-targeting loss follows below.
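The abstract does not spell out the tumor-targeting loss, so the following is a minimal sketch of one plausible reading: a global reconstruction term plus an up-weighted term restricted to the tumor mask. The function name, the tumor_weight parameter, and the masked-L1 formulation are illustrative assumptions, not the authors' implementation.

```python
import torch

def tumor_targeting_l1(fake: torch.Tensor,
                       real: torch.Tensor,
                       tumor_mask: torch.Tensor,
                       tumor_weight: float = 5.0) -> torch.Tensor:
    """Hypothetical tumor-targeting reconstruction loss: a global L1
    term plus an up-weighted L1 term over tumor voxels only, so the
    generator is penalized more for synthesis errors inside the tumor.
    (A sketch under stated assumptions, not the paper's exact loss.)"""
    voxel_l1 = torch.abs(fake - real)
    global_term = voxel_l1.mean()
    # Mean absolute error restricted to tumor voxels; the small epsilon
    # guards against division by zero when the mask is empty.
    tumor_term = (voxel_l1 * tumor_mask).sum() / (tumor_mask.sum() + 1e-8)
    return global_term + tumor_weight * tumor_term
```

In a GAN training loop, a term like this would be added to the generator's usual adversarial loss. The two conventional baselines compared in the study amount to feeding torch.zeros_like(real) (a blank input) or a copy of another available sequence in place of the missing one.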
Results: Across tumor areas and missing sequences, synthetic images generally outperformed both conventional approaches, particularly when FLAIR was missing. Here, for edema and whole tumor segmentation, synthetic images improved the Dice score, a common metric for evaluating segmentation performance, by 12% and 11%, respectively, over the best conventional method. No method was able to reliably replace missing contrast-enhanced T1w images.
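For readers unfamiliar with the metric, the Dice score between a predicted segmentation \(P\) and a ground-truth segmentation \(G\) is defined as

\[
\mathrm{Dice}(P, G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert},
\]

which ranges from 0 (no overlap) to 1 (perfect overlap).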
Discussion: Replacing missing nonenhanced magnetic resonance sequences with synthetic images significantly improves segmentation quality over most conventional approaches. The model is freely available and facilitates more widespread adoption of automated segmentation in routine clinical practice, where missing sequences are common.