• Source journal of The Engineering Index (EI)
  • Chinese core journal (comprehensive science, engineering, agriculture, and medicine category)
  • Statistical source journal for Chinese scientific and technical papers
  • Source journal of the Chinese Science Citation Database


Defocus spread effect elimination method in multiple multi-focus image fusion for microscopic images

YIN Xiang, MA Bo-yuan, BAN Xiao-juan, HUANG Hai-you, WANG Yu, LI Song-yan

Citation: YIN Xiang, MA Bo-yuan, BAN Xiao-juan, HUANG Hai-you, WANG Yu, LI Song-yan. Defocus spread effect elimination method in multiple multi-focus image fusion for microscopic images[J]. Chinese Journal of Engineering, 2021, 43(9): 1174-1181. doi: 10.13374/j.issn2095-9389.2021.01.12.002


doi: 10.13374/j.issn2095-9389.2021.01.12.002
Funding: This work was supported by the Hainan Provincial Financial Science and Technology Plan Project (ZDYF2019009), the National Natural Science Foundation of China (6210020684, 61873299), the Fundamental Research Funds for the Central Universities (00007467), and the Foshan Science and Technology Innovation Special Fund (BK21BF002, BK19AE034, BK20AF001)
Corresponding author, E-mail: hejohejo@126.com

  • Chinese Library Classification number: TP391

  • Abstract: Multi-focus image fusion is an important branch of computer vision. It aims to use image processing techniques to fuse the respective in-focus regions of multiple images of the same scene, each focused on a different target, and finally obtain an all-in-focus image. With the breakthrough of machine learning theory represented by deep learning, convolutional neural networks have been widely applied to multi-focus image fusion. However, most methods focus only on improving the network structure while adopting a simple one-by-one serial fusion scheme, which lowers the efficiency of multi-image fusion; moreover, the defocus spread effect that arises during fusion severely degrades the quality of the fusion result. To address these problems, in the application scenario of microscopic imaging analysis, this paper proposes a maximum spatial frequency in feature map (MSFIFM) fusion strategy. By adding a post-processing module to a convolutional neural network based on unsupervised learning, the strategy avoids the redundant feature extraction of one-by-one serial fusion; experiments show that it significantly improves the efficiency of multi-focus fusion of multiple images. In addition, a rectification strategy is proposed that, while preserving fusion efficiency, effectively alleviates the influence of the defocus spread effect on the quality of the fused image.
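The fusion decision at the core of the MSFIFM strategy can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the helper names (`spatial_frequency`, `msfifm_decision`, `fuse`) and the window size `ksize` are ours, and in the paper the maps fed to the decision come from an unsupervised CNN's features rather than being handed in directly.

```python
import numpy as np

def spatial_frequency(feat, ksize=5):
    """Per-pixel spatial frequency of a 2-D map: root-mean of squared
    horizontal and vertical gradients over a local window."""
    grad = np.zeros_like(feat)
    grad[:, 1:] += (feat[:, 1:] - feat[:, :-1]) ** 2   # row (horizontal) gradients
    grad[1:, :] += (feat[1:, :] - feat[:-1, :]) ** 2   # column (vertical) gradients
    pad = ksize // 2
    padded = np.pad(grad, pad, mode="reflect")
    win = np.zeros_like(feat)
    for dy in range(ksize):                            # box-filter the squared gradients
        for dx in range(ksize):
            win += padded[dy:dy + feat.shape[0], dx:dx + feat.shape[1]]
    return np.sqrt(win / (ksize * ksize))

def msfifm_decision(feature_maps):
    """One pass over N feature maps: index of the map with the maximal
    spatial frequency at every pixel (no pairwise serial fusion)."""
    sf = np.stack([spatial_frequency(f) for f in feature_maps])  # (N, H, W)
    return np.argmax(sf, axis=0)                                 # (H, W)

def fuse(images, feature_maps):
    """All-in-focus sketch: take each pixel from the winning source image."""
    decision = msfifm_decision(feature_maps)
    stack = np.stack(images)                                     # (N, H, W)
    return np.take_along_axis(stack, decision[None], axis=0)[0]
```

Because all N spatial-frequency maps are compared in a single argmax, feature extraction runs once per source image instead of once per pairwise fusion step, which is where the speed-up reported in Table 1 comes from.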

     


    Figure  1.  Flow chart of multiple multi-focus image fusion in a microscopic imaging scene (The red arrow in the figure shows the defocus spread effect. The yellow dotted line box in the fusion result is the enlarged local area, which is convenient for readers to view)


    Figure  2.  Network structure and implementation process of this method: (a) Network structure; (b) two fusion strategies (the left side is the one-by-one serial fusion strategy, and the right side is the MSFIFM strategy)


    Figure  3.  Flow chart of rectification strategy for the defocus spread effect in the microscopic imaging scene


    Figure  4.  Visualization of fusion results of chip1, chip2, and chip3 with different fusion algorithms

    Table 1. Average time comparison between the MSFIFM and one-by-one serial fusion strategies

    | Image size | Average time of MSFIFM strategy/s | Average time of one-by-one fusion strategy/s | Execution efficiency increase/% |
    | --- | --- | --- | --- |
    | 900×600 | 0.1397 | 0.2645 | 47.18 |
    | 600×400 | 0.0732 | 0.1351 | 45.83 |
    | 300×200 | 0.0265 | 0.0391 | 32.08 |
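The last column of Table 1 is consistent with measuring the time saved relative to the serial strategy's time. A quick check of the first row (the helper below is ours; the other rows land slightly off the published percentages, presumably because those were computed from unrounded timings):

```python
def efficiency_increase(t_msfifm, t_serial):
    """Time saved by MSFIFM relative to one-by-one serial fusion, in percent."""
    return (t_serial - t_msfifm) / t_serial * 100

# 900x600 row of Table 1
print(round(efficiency_increase(0.1397, 0.2645), 2))  # 47.18
```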

    Table 2. Average fusion time comparison among CNN Fuse, MS-Lap, and our method

    | Image name | Average time of MSFIFM + rectification strategy/s | Average time of CNN Fuse/s | Average time of MS-Lap/s |
    | --- | --- | --- | --- |
    | Chip1 | 3.9248 | 336.3321 | 96.2325 |
    | Chip2 | 0.4126 | 72.4707 | 1.7137 |
    | Chip3 | 1.5518 | 347.4140 | 95.9874 |
Publication history
  • Received: 2021-01-12
  • Available online: 2021-03-01
  • Issue published: 2021-09-18
