Robust Multi-Focus Image Fusion Using Multi-Task Sparse Representation and Spatial Context

IEEE Trans Image Process. 2016 May;25(5):2045-58. doi: 10.1109/TIP.2016.2524212. Epub 2016 Feb 3.

Abstract

We present a novel fusion method based on a multi-task robust sparse representation (MRSR) model and spatial context information to address the fusion of multi-focus gray-level images with misregistration. First, we introduce a robust sparse representation (RSR) model that replaces the conventional least-squares reconstruction error with a sparse reconstruction error. We then propose a multi-task extension of the RSR model, namely the MRSR model, and apply it to multi-focus image fusion by using the detail information of each image patch and its spatial neighbors to collaboratively determine the focused and defocused regions of the input images. To this end, we formulate the extraction of detail from multiple image patches as a joint multi-task sparsity pursuit based on the MRSR model. Experimental results demonstrate that the proposed algorithm is competitive with the current state of the art and superior to several approaches based on traditional sparse representation when the input images are misregistered.
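The abstract does not state the underlying optimization problems explicitly; the following is a minimal sketch of how an RSR-style model and a multi-task extension are commonly written, assuming an l1-penalized sparse error term (e or E) and an l2,1 joint-sparsity norm coupling the patch tasks. These penalty choices are assumptions for illustration, not necessarily the exact formulation used in the paper.

% Single-task RSR sketch: the least-squares data term is replaced by a
% sparse error e, which can absorb residuals caused by misregistration.
\min_{\alpha,\, e}\ \|\alpha\|_1 + \lambda\,\|e\|_1
\quad \text{s.t.} \quad x = D\alpha + e

% Multi-task MRSR sketch: the patches X = [x_1, \dots, x_K] of a spatial
% neighborhood are coded jointly over the dictionary D; the l_{2,1} norm
% encourages a shared sparsity pattern across tasks, and E collects the
% per-patch sparse errors.
\min_{A,\, E}\ \|A\|_{2,1} + \lambda\,\|E\|_1
\quad \text{s.t.} \quad X = DA + E

Here D is a patch dictionary, x (respectively, the columns of X) denotes a vectorized image patch and its spatial neighbors, and lambda balances coding sparsity against the sparse reconstruction error.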

Publication types

  • Research Support, Non-U.S. Gov't