Weather Degradation Perception-based Dynamically Adaptive Image Restoration[J]. Chinese Journal of Engineering. DOI: 10.13374/j.issn2095-9389.2025.07.16.002

Weather Degradation Perception-based Dynamically Adaptive Image Restoration

In complex weather conditions such as rain, haze, snow, and low illumination, captured images and videos often suffer severe quality degradation. This not only diminishes visual perception but also significantly reduces the utility and reliability of visual information: such degradations impair human understanding of scenes and, more critically, pose substantial challenges to intelligent vision systems, such as autonomous driving, surveillance, and robotic perception, which rely heavily on high-quality visual input for accurate decision-making. Robust and generalizable image restoration techniques are therefore urgently needed to ensure reliable visual understanding under adverse environmental conditions.

Recently, all-in-one image restoration approaches have attracted growing attention for their ability to handle various degradation types within a single unified framework, reducing model redundancy and improving generalization under diverse and complex conditions. Existing all-in-one restoration approaches fall broadly into two categories. The first employs a unified network architecture to process multiple degradation types simultaneously; although architecturally simple, these methods typically use fixed parameters, limiting their adaptability to the dynamically changing degradation patterns of real-world weather. The second adopts prompt-based learning mechanisms for degradation-aware restoration; while more flexible, such methods often fail to model the complex, nonlinear relationships among degradation types, severity levels, and content-specific features.

To overcome these limitations, we propose DegRestorNet, a novel all-in-one dynamically adaptive image restoration network tailored to complex and mixed weather scenarios. DegRestorNet introduces a weather degradation-aware mechanism that generates multidimensional scene descriptors capturing both degradation types and severities.
These descriptors guide the restoration strategy by dynamically adjusting the convolutional operations in real time, enabling the model to adapt flexibly to degradations such as rain streaks, fog, low light, and snowfall.

DegRestorNet consists of two major modules: the Degradation-Aware Scene Descriptor Generator (DASDG) and the Dynamic Convolution-based Degradation-Adaptive Restoration Network (DARN). DASDG contains two sub-modules: a degradation type recognition module, which identifies the degradation types present (e.g., rain, haze, snow, and low illumination), and a degradation severity estimation module, which quantifies the severity of each type. Their outputs are fused into a unified degradation-aware scene descriptor, establishing a hierarchical representation of the degradation characteristics through a decoupled, layered parsing mechanism. This descriptor offers fine-grained prior guidance for the restoration process, improving both restoration quality and accuracy.

DARN adopts an encoder-decoder architecture. In the encoder, a cross-attention mechanism integrates scene descriptor semantics with visual features at the feature extraction stage. In the decoder, a degradation-aware dynamic convolutional Transformer module adaptively generates convolution kernels and attention weights conditioned on the scene descriptor, allowing the network to adapt dynamically to varied degradation scenarios throughout restoration.

Finally, comprehensive experimental evaluations are conducted on a complex weather degradation dataset named DSD, constructed from the RAISE dataset and synthesized using the CDD image generation strategy. Compared with state-of-the-art image restoration methods, DegRestorNet achieves superior performance in restoring images affected by rain, haze, snow, and low-light conditions, while significantly reducing the number of model parameters.
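As a rough illustration of how a degradation-aware scene descriptor of the DASDG kind might be assembled, the sketch below fuses per-type probabilities (from a type-recognition head) with per-type severity estimates (from a severity head). The four-type taxonomy, the sigmoid severity parameterization, and the concatenation-plus-product fusion rule are all assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

DEGRADATION_TYPES = ["rain", "haze", "snow", "low_light"]  # assumed taxonomy

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_scene_descriptor(type_logits, severity_logits):
    """Fuse type-recognition and severity-estimation outputs into one
    descriptor: the first half holds type probabilities, the second half
    couples each type's probability with its estimated severity
    (a hypothetical fusion rule)."""
    type_probs = softmax(np.asarray(type_logits, dtype=float))
    severities = sigmoid(np.asarray(severity_logits, dtype=float))
    return np.concatenate([type_probs, type_probs * severities])

# Example: logits favouring haze, with a moderate haze severity estimate
desc = build_scene_descriptor([0.1, 2.0, -1.0, 0.3], [0.0, 1.5, -2.0, -0.5])
```

Decoupling type from severity in this way keeps the two estimation problems separable while still producing a single vector that downstream layers can condition on.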
Ablation studies further verify the effectiveness of the proposed degradation-aware scene descriptor and the dynamic convolutional architecture, demonstrating the superiority and practical applicability of our method in real-world scenarios.
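One common way to realize descriptor-conditioned dynamic convolution is attention over K candidate "expert" kernels, with the attention weights generated from the conditioning vector. The mixture-of-kernels formulation below (restricted to 1×1 kernels for brevity) is a generic sketch of that idea; the paper's dynamic convolutional Transformer module may generate kernels differently:

```python
import numpy as np

def dynamic_conv1x1(feat, descriptor, experts, W_attn):
    """Aggregate K expert 1x1 kernels with descriptor-conditioned
    attention, then apply the aggregated kernel to the feature map.
    feat: (C_in, H, W); descriptor: (D,); experts: (K, C_out, C_in);
    W_attn: (K, D) linear map from descriptor to expert logits."""
    logits = W_attn @ descriptor                      # (K,) expert scores
    e = np.exp(logits - logits.max())
    attn = e / e.sum()                                # softmax over experts
    kernel = np.tensordot(attn, experts, axes=1)      # (C_out, C_in) mixture
    c_in, h, w = feat.shape
    # A 1x1 convolution is a per-pixel linear map over channels
    return (kernel @ feat.reshape(c_in, -1)).reshape(-1, h, w)

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))     # toy feature map
desc = rng.standard_normal(8)             # toy scene descriptor
experts = rng.standard_normal((3, 16, 4)) # K=3 candidate kernels
W_attn = rng.standard_normal((3, 8))
out = dynamic_conv1x1(feat, desc, experts, W_attn)
```

Because the kernel is a convex combination of the experts, the effective weights change with every input's descriptor at negligible parameter cost, which is the property that lets a single restoration network specialize per degradation.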