Multi-Modal Deep Guided Filtering for Comprehensible Medical Image Processing

IEEE Trans Med Imaging. 2020 May;39(5):1703-1711. doi: 10.1109/TMI.2019.2955184. Epub 2019 Nov 22.

Abstract

Deep learning-based image processing is capable of creating highly appealing results. However, it is still widely considered a "black-box" transformation. In medical imaging, this lack of comprehensibility of the results is a sensitive issue. The integration of known operators into the deep learning environment has proven to be advantageous for the comprehensibility and reliability of the computations. Consequently, we propose the use of the locally linear guided filter in combination with a learned guidance map for general-purpose medical image processing. The output images are processed only by the guided filter, while the guidance map can be trained to be task-optimal in an end-to-end fashion. We investigate the performance on two popular tasks: image super-resolution and denoising. The evaluation is conducted on pairs of multi-modal magnetic resonance imaging datasets and cross-modal computed tomography and magnetic resonance imaging datasets. For both tasks, the proposed approach is on par with state-of-the-art approaches. Additionally, we show that the input image's content is almost unchanged after processing, which is not the case for conventional deep learning approaches. Furthermore, the proposed pipeline offers increased robustness against degraded input as well as adversarial attacks.
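To illustrate the idea described in the abstract, the following is a minimal sketch in PyTorch, assuming a differentiable box-filter-based guided filter and a small convolutional network that produces the guidance map from the multi-modal input. The network architecture, channel sizes, and variable names here are illustrative assumptions, not the authors' implementation; only the locally linear guided filter equations follow the standard formulation.

    # Minimal sketch: differentiable guided filter with a learned guidance map.
    # The GuidanceNet below is a hypothetical stand-in for the guidance network
    # described in the paper, not its actual architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def box_filter(x: torch.Tensor, radius: int) -> torch.Tensor:
        """Local mean over a (2*radius+1)^2 window, same spatial size as input."""
        k = 2 * radius + 1
        return F.avg_pool2d(x, kernel_size=k, stride=1, padding=radius,
                            count_include_pad=False)

    def guided_filter(guide: torch.Tensor, src: torch.Tensor,
                      radius: int = 4, eps: float = 1e-4) -> torch.Tensor:
        """Locally linear guided filter: q = mean(a) * guide + mean(b)."""
        mean_g = box_filter(guide, radius)
        mean_s = box_filter(src, radius)
        cov_gs = box_filter(guide * src, radius) - mean_g * mean_s
        var_g = box_filter(guide * guide, radius) - mean_g * mean_g
        a = cov_gs / (var_g + eps)   # local linear coefficients
        b = mean_s - a * mean_g
        return box_filter(a, radius) * guide + box_filter(b, radius)

    class GuidanceNet(nn.Module):
        """Hypothetical small CNN mapping the multi-modal input (e.g. a degraded
        MR slice plus a registered second modality) to a guidance map."""
        def __init__(self, in_channels: int = 2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(16, 1, 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    if __name__ == "__main__":
        net = GuidanceNet(in_channels=2)
        degraded = torch.rand(1, 1, 64, 64)        # e.g. noisy MR slice
        other_modality = torch.rand(1, 1, 64, 64)  # e.g. registered CT slice
        guidance = net(torch.cat([degraded, other_modality], dim=1))
        # Only the guided filter touches the output image; the guidance map is
        # the component that would be trained end-to-end against a task loss.
        output = guided_filter(guidance, degraded)
        print(output.shape)  # torch.Size([1, 1, 64, 64])

Because the output is constrained to a locally linear transform of the guidance, the image content passes through an interpretable operator, which is the source of the comprehensibility and robustness claims above.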

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Image Processing, Computer-Assisted*
  • Magnetic Resonance Imaging
  • Reproducibility of Results
  • Tomography, X-Ray Computed*