Adverse weather conditions such as fog degrade image quality and impair the performance of deep learning-based image processing algorithms, yet advanced driver assistance systems (ADASs) urgently require clear imagery and large-field-of-view perception in foggy environments. Existing image dehazing methods rarely account for the non-uniform, dense distribution of fog particles, leading to severe attenuation of background information. Image stitching faces further challenges: owing to the low-brightness, low-texture characteristics of ADAS scenes and differences between sensors, feature points are difficult to extract and match, and stitching quality suffers. To address these issues, this study proposes a non-uniform dehazing method based on Deformable Convolution v4 (DCNv4), designing a DCNv4-based Transformer-like network that achieves long-range dependency modeling and adaptive spatial aggregation, combined with a lightweight Retinex-inspired Transformer for color correction and structure refinement. In addition, a multi-plane scale constraint module is introduced on top of the LightGlue feature matching network to improve matching accuracy and the precision of homography matrix estimation, and an adaptive fusion stitching method is adopted to eliminate artifacts and transition zones. Experimental results show that the proposed method effectively improves feature matching accuracy and homography estimation precision, achieving Peak Signal-to-Noise Ratios (PSNRs) of 22.78 dB and 24.34 dB on the NH-HAZE and BRAS datasets, respectively, surpassing existing methods. The proposed approach thus provides a reliable environmental perception solution for autonomous driving in foggy environments, and its effectiveness and practicality are verified.
Keywords: deep learning; deformable convolution; feature matching; image dehazing; image stitching.
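The homography estimation step mentioned in the abstract can be illustrated with a minimal direct linear transform (DLT) sketch. This is a generic illustration only, not the paper's multi-plane scale constraint module or the LightGlue matching pipeline; the function name `estimate_homography` and the synthetic correspondences are invented for the example.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 point correspondences via DLT.

    src, dst: (N, 2) arrays of matched pixel coordinates.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

# Synthetic check: recover a known homography from projected points.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.05, 0.9, -3.0],
                   [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 100],
                [100, 100], [50, 25], [25, 75]], dtype=float)
homog = np.hstack([src, np.ones((len(src), 1))])
proj = homog @ H_true.T
dst = proj[:, :2] / proj[:, 2:3]

H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # prints True (noise-free data)
```

In practice, feature matches from low-texture foggy scenes contain outliers, so a robust estimator such as RANSAC (e.g., OpenCV's `cv2.findHomography` with `cv2.RANSAC`) is typically wrapped around this linear solve rather than using plain DLT directly.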