WANG Ruilin, WANG Li, HE Yingbo. Image and event fusion method based on wavelet and dynamic complementary filtering[J]. Chinese Journal of Engineering, 2024, 46(11): 2076-2084. DOI: 10.13374/j.issn2095-9389.2024.01.23.004

Image and event fusion method based on wavelet and dynamic complementary filtering

This study investigates the fusion of data from event cameras and traditional frame cameras, introducing a fusion approach designed to enhance image quality under complex lighting conditions. Event cameras are a novel class of vision sensors known for their high temporal resolution and minimal power consumption; however, their output is often degraded by noise and feature loss. Conversely, traditional frame cameras offer high spatial resolution but struggle to capture fast-moving scenes or scenes with a wide dynamic range. To address these challenges, the study proposes a method that combines the discrete wavelet transform with dynamic-gain complementary filtering to fuse image and event data. The process begins by evaluating the exposure level of each incoming image frame, using image entropy as the metric. The discrete wavelet transform then separates the high- and low-frequency components of the event stream and the frame image data, and a dynamic-gain complementary filter blends the two sources. Because the filter adaptively balances the contribution of each data source, reconstruction quality remains stable under varying conditions. By leveraging the high-frequency temporal information from event cameras and the high-resolution spatial information from frame cameras, the proposed method aims to overcome the limitations inherent in each type of sensor: the fusion mitigates the noise and feature loss in event camera data while improving the capture of high-speed motion and scenes with large brightness variations. The efficacy of this fusion approach was tested on the HDR Hybrid Event-Frame Dataset, which covers high-dynamic-range scenes and complex lighting environments from real-world scenarios. The experimental results showed a notable improvement in image quality over traditional image reconstruction methods: the proposed approach achieved a mean squared error of 0.0199, a structural similarity index measure of 0.90, and a Q-score of 6.07. These results validate the effectiveness of the proposed fusion method under challenging conditions and highlight the potential of integrating disparate types of visual data to achieve superior reconstruction outcomes.
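
The following is a minimal sketch of the pipeline described above, written against NumPy and PyWavelets. The function names, the sigmoid gain mapping, and the assumption that the event stream has already been rasterized into a frame-aligned intensity image are illustrative choices, not the authors' reference implementation.

import numpy as np
import pywt

def image_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, used to score exposure."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def dynamic_gain(entropy, well_exposed=7.0, steepness=1.5):
    """Map frame entropy to a gain in (0, 1). A well-exposed frame (high
    entropy) pulls the filter toward the frame camera; a poorly exposed
    frame shifts weight to the event data. The sigmoid is an assumption."""
    return 1.0 / (1.0 + np.exp(-steepness * (entropy - well_exposed)))

def fuse(frame, event_image, wavelet='haar'):
    """Fuse a frame image with an intensity image derived from events.

    Both inputs are float arrays in [0, 1] with the same shape. Each is
    split into low-/high-frequency sub-bands with a single-level 2-D DWT,
    the sub-bands are blended with complementary weights, and the result
    is transformed back to the image domain.
    """
    g = dynamic_gain(image_entropy(frame))

    fA, (fH, fV, fD) = pywt.dwt2(frame, wavelet)        # frame sub-bands
    eA, (eH, eV, eD) = pywt.dwt2(event_image, wavelet)  # event sub-bands

    # Complementary filtering: weight g on the frame, (1 - g) on the
    # events, applied to the approximation band and all detail bands.
    cA = g * fA + (1 - g) * eA
    cH = g * fH + (1 - g) * eH
    cV = g * fV + (1 - g) * eV
    cD = g * fD + (1 - g) * eD

    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

In this sketch, frame would be a normalized grayscale capture and event_image an intensity image accumulated from the event stream over the same exposure window; the entropy-driven gain is what makes the complementary filter "dynamic", letting the fusion lean on the frame camera's spatial detail when exposure is good and on the event data when the frame is over- or under-exposed.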