Citation: Robust target detection method against weather interference based on multi-source sensor fusion[J]. Chinese Journal of Engineering. DOI: 10.13374/j.issn2095-9389.2025.09.30.003

Robust target detection method against weather interference based on multi-source sensor fusion

Robust object detection under adverse weather conditions remains a pressing challenge in autonomous driving and intelligent transportation, as single-sensor systems are prone to performance degradation in rain, fog, or snow. To address this issue, we propose SeparateFusion, a novel multi-sensor fusion framework that integrates 4D millimeter-wave radar and LiDAR data through a deep neural network. By exploiting radar’s resilience to weather interference and LiDAR’s high spatial resolution, SeparateFusion delivers accurate and stable perception across diverse environments. The architecture comprises two key modules: the Geometry–Semantic Enhancement (GSE) encoder for early 3D fusion and the BEV Feature Enhancement Module (BMM) for 2D feature refinement.

In the first stage, LiDAR and radar point clouds are independently projected into a shared pillar grid, ensuring spatial alignment. The GSE encoder enhances the geometric and semantic information of each modality separately: geometric features capture structural layouts from point coordinates, while semantic features encode attributes such as intensity, Doppler velocity, and reflectivity. Pillar-level features are then extracted from the enhanced points, enabling early-stage multi-modal fusion that aligns the two modalities while preserving their modality-specific advantages.

In the second stage, the fused features are transformed into a bird’s-eye view (BEV) representation. The BMM module processes this representation with the MambaMixer structure to capture both local and long-range dependencies in the spatial domain, and a gating mechanism suppresses redundant or noisy signals so that the network focuses on discriminative information for detection. This two-stage design balances fine-grained geometry–semantic modeling in 3D space with high-level spatial reasoning in BEV space, contributing to strong robustness against weather-related degradation.

Extensive experiments on the View-of-Delft (VoD) dataset show that our method consistently outperforms both state-of-the-art single-sensor detectors and existing multi-sensor fusion approaches, achieving 70.8% mean Average Precision (mAP) across the entire test area and 85.46% within the driving corridor, with notable gains in both global and lane-focused detection scenarios. Additional evaluations on a fog-simulation dataset confirm that SeparateFusion maintains clear advantages over previous methods in low-visibility conditions, indicating strong generalization capability. Ablation studies further validate the contributions of the GSE encoder and the BMM module: removing either component results in a significant drop in detection accuracy, highlighting the complementary nature of early 3D geometry–semantic enhancement and later-stage BEV feature gating.

In summary, SeparateFusion introduces a structured two-stage approach to fusing radar and LiDAR data, combining early geometry–semantic enhancement with later-stage BEV refinement and adaptive gating. The method achieves significant improvements over both strong single-sensor detectors and existing fusion-based methods under challenging weather, providing a promising foundation for next-generation all-weather intelligent perception systems that must operate reliably in safety-critical scenarios.
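The first-stage idea described above can be pictured with a minimal PyTorch sketch. This is not the authors' implementation: the module names (PillarBranchEncoder, SeparatePillarFusion), feature dimensions, MLP sizes, and the choice of max-pooling are assumptions made purely for illustration of separate geometric/semantic encoding followed by early pillar-level fusion.

import torch
import torch.nn as nn


class PillarBranchEncoder(nn.Module):
    # Hypothetical per-modality encoder: separate MLPs for geometric coordinates
    # and semantic attributes, then max-pooling into the shared pillar grid.
    def __init__(self, geo_dim, sem_dim, out_dim):
        super().__init__()
        self.geo_mlp = nn.Sequential(nn.Linear(geo_dim, out_dim), nn.ReLU())
        self.sem_mlp = nn.Sequential(nn.Linear(sem_dim, out_dim), nn.ReLU())

    def forward(self, geo, sem, pillar_idx, num_pillars):
        # geo: (N, geo_dim) point coordinates; sem: (N, sem_dim) attributes such as
        # intensity (LiDAR) or Doppler velocity / reflectivity (radar);
        # pillar_idx: (N,) index of the pillar each point falls into.
        point_feat = self.geo_mlp(geo) + self.sem_mlp(sem)            # (N, out_dim)
        pillars = point_feat.new_zeros(num_pillars, point_feat.shape[1])
        # Max-pool point features into their pillars; empty pillars stay zero.
        pillars.index_reduce_(0, pillar_idx, point_feat, "amax", include_self=False)
        return pillars                                                # (P, out_dim)


class SeparatePillarFusion(nn.Module):
    # Early 3D fusion: encode LiDAR and radar separately on the same pillar grid,
    # then concatenate the per-pillar features of the two modalities.
    def __init__(self, out_dim=64):
        super().__init__()
        self.lidar_enc = PillarBranchEncoder(geo_dim=3, sem_dim=1, out_dim=out_dim)  # xyz + intensity
        self.radar_enc = PillarBranchEncoder(geo_dim=3, sem_dim=2, out_dim=out_dim)  # xyz + Doppler, RCS

    def forward(self, lidar_pts, radar_pts, lidar_idx, radar_idx, num_pillars):
        lid = self.lidar_enc(lidar_pts[:, :3], lidar_pts[:, 3:], lidar_idx, num_pillars)
        rad = self.radar_enc(radar_pts[:, :3], radar_pts[:, 3:], radar_idx, num_pillars)
        return torch.cat([lid, rad], dim=-1)       # (P, 2 * out_dim) fused pillar features


if __name__ == "__main__":
    fusion = SeparatePillarFusion(out_dim=64)
    lidar = torch.randn(1000, 4)                   # x, y, z, intensity
    radar = torch.randn(200, 5)                    # x, y, z, Doppler velocity, RCS
    num_pillars = 256                              # cells in the shared pillar grid
    fused = fusion(lidar, radar,
                   torch.randint(0, num_pillars, (1000,)),
                   torch.randint(0, num_pillars, (200,)),
                   num_pillars)
    print(fused.shape)                             # torch.Size([256, 128])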
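The second-stage gating idea can likewise be sketched in a few lines. Again, this is an assumption-laden illustration rather than the BMM implementation: the MambaMixer block is replaced by a plain 3×3 convolution so the example stays self-contained, and the class name GatedBEVRefine is hypothetical. The point is only the gating pattern, in which a learned sigmoid gate re-weights the mixed BEV features so that redundant or noisy responses can be attenuated before detection.

import torch
import torch.nn as nn


class GatedBEVRefine(nn.Module):
    # Hypothetical gated BEV refinement: a spatial mixing step (a simple 3x3
    # convolution standing in for the MambaMixer, which additionally captures
    # long-range context) followed by a sigmoid gate that decides, per location
    # and channel, how much of the mixed feature to keep.
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, bev):                        # bev: (B, C, H, W) fused BEV map
        mixed = self.mix(bev)                      # local spatial mixing
        g = self.gate(bev)                         # gate values in (0, 1)
        return bev + g * mixed                     # gated residual refinement


if __name__ == "__main__":
    bev = torch.randn(2, 128, 200, 200)            # BEV grid built from the fused pillars
    out = GatedBEVRefine(128)(bev)
    print(out.shape)                               # torch.Size([2, 128, 200, 200])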
