Infrared and visible image fusion based on BEMD and W-transform
  • English title: Infrared and visible image fusion based on BEMD and W-transform
  • Authors: Gong Rui; Wang Xiaochun
  • Affiliation: College of Sciences, Beijing Forestry University
  • Keywords: infrared image; visible image; W-transform; bidimensional empirical mode decomposition (BEMD); W-BEMD multi-scale decomposition method; image fusion
  • Journal: Journal of Image and Graphics (中国图象图形学报); journal code: ZGTB
  • Publication date: 2019-06-16
  • Year: 2019; issue: Vol. 24, No. 278 (Issue 06)
  • Funding: National Natural Science Foundation of China (61571046)
  • Language: Chinese
  • Record ID: ZGTB201906014
  • Pages: 145-157 (13 pages)
  • CN: 11-3758/TB
Abstract
Objective: To address the shortcomings of traditional multi-scale-transform-based image fusion algorithms, an infrared and visible image fusion algorithm based on the W-transform and bidimensional empirical mode decomposition (BEMD) is proposed. Method: First, to extract high-frequency image information more effectively and suppress the mode-mixing phenomenon in BEMD, a new multi-scale decomposition algorithm based on the W-transform and BEMD (W-BEMD for short) is proposed. Then, W-BEMD is used to decompose the source images in a pyramid fashion, yielding high-frequency components WIMFs and a residual component WR. Next, the corresponding WIMF components of the source images are fused with a selection-and-weighting rule based on local region variance, and the corresponding WR components with a selection-and-weighting rule based on local region energy, giving the W-BEMD decomposition of the fused image. Finally, the fused image is obtained by the inverse W-BEMD transform. The main idea of the W-BEMD algorithm is to use the W-transform to recursively extract the high-frequency content that remains in the low-frequency component at each level of the BEMD decomposition and superimpose it onto the corresponding high-frequency component, achieving a more effective multi-scale image decomposition. Result: Comparative experiments show that the fused images obtained by the proposed method have better visual quality, with both prominent infrared targets and clear visible-light background details, and also show a clear advantage on three objective evaluation indices: average gradient (AG), spatial frequency (SF), and mutual information (MI). Conclusion: A new infrared and visible image fusion algorithm is proposed. Experimental results show that it achieves good fusion performance and is more effective at preserving detail information from the visible image and highlighting target information from the infrared image.
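The W-BEMD idea described above (recursively pulling the high frequencies left behind in each level's low-frequency component back into that level's high-frequency component, with reconstruction by summation) can be sketched in a few lines of numpy. Here a box-filter smoothing stands in for BEMD's envelope-mean sifting, and a second smoothing pass stands in for the W-transform's low/high split; both stand-ins are illustrative assumptions, not the paper's actual operators:

```python
import numpy as np

def box_mean(a, r=2):
    """Local mean over a (2r+1) x (2r+1) box, edge-padded."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros_like(a, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def w_bemd(img, levels=3):
    """Pyramid-style decomposition: at each level, split off a
    high-frequency component (IMF stand-in), then use a second
    smoothing pass (W-transform stand-in) to extract the high
    frequencies still left in the low-frequency component and
    fold them into that level's high-frequency component (WIMF)."""
    residue = img.astype(float)
    wimfs = []
    for _ in range(levels):
        low = box_mean(residue)    # BEMD sifting stand-in
        imf = residue - low        # high-frequency component
        low2 = box_mean(low)       # W-transform low band stand-in
        leak = low - low2          # high frequencies left in the low band
        wimfs.append(imf + leak)   # WIMF: IMF plus recovered detail
        residue = low2             # residual component WR after this level
    return wimfs, residue

# The sums telescope, so summing all WIMFs and WR recovers the input.
img = np.random.rand(32, 32)
wimfs, wr = w_bemd(img)
assert np.allclose(sum(wimfs) + wr, img)
```

Because each WIMF is simply the difference of successive smoothed versions of the image, the "inverse W-BEMD transform" mentioned in the abstract reduces to adding the fused WIMFs to the fused residue.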
        Objective Infrared and visible image fusion is an important problem in the field of image fusion and has been widely applied in military, security, and surveillance areas. Infrared imaging is based on the thermal radiation of the scene and is not susceptible to weather and illumination, but infrared images are rather blurry as a whole, with low spatial resolution and contrast. In contrast, visible imaging is based on the reflection of visible light. Visible images have higher spatial resolution, clear texture, and abundant detail, but they are vulnerable to interference from illumination and climatic conditions. Infrared and visible images of the same scene therefore differ considerably and carry complementary information. Because of this redundancy and complementarity, image fusion can describe a scene accurately by effectively combining the target characteristics of the infrared image with the scene details of the visible image. Multi-scale techniques, including the wavelet transform and multi-scale geometric decompositions, are widely used in image fusion. Empirical mode decomposition (EMD) and the W-transform are two such tools. EMD is a fully data-driven time-frequency analysis method that adaptively decomposes signals into intrinsic mode functions (IMFs) and has proven highly effective for analyzing non-stationary data. The W-transform is a new orthogonal transform with strong decomposition and reconstruction capability for both continuous and discontinuous information, and it can characterize local variations of images effectively. In view of the deficiencies of traditional multi-scale-transform-based image fusion algorithms, this study proposes a new infrared and visible image fusion method based on the W-transform and bidimensional empirical mode decomposition (BEMD). Method The proposed method operates on registered infrared and visible images with the same spatial resolution.
To suppress the mode-mixing phenomenon in BEMD, a new decomposition method called W-BEMD, based on BEMD and the W-transform, is proposed. The main idea of W-BEMD is to apply the W-transform to the low-frequency component at each level of the BEMD decomposition and superimpose the extracted high-frequency content onto the IMF of the same decomposition level. W-BEMD is thus an improved BEMD that extracts high-frequency information more effectively and suppresses the frequency-aliasing effect in BEMD. W-BEMD is then applied to infrared and visible image fusion. First, the registered infrared and visible images of the same scene are decomposed by W-BEMD into high-frequency components WIMFs and a residual component WR. Second, the corresponding WIMFs of the same decomposition level are fused with a weighted-average rule based on local area variance to obtain the fused WIMF images, whereas a weighted-average strategy based on local area energy is adopted for fusing the residual components WR. Finally, the fused image is generated by adding the fused WIMF images and the fused residual component. Result Decomposition experiments are conducted to evaluate the effect of W-BEMD; they show that the high-frequency part produced by W-BEMD contains more complete edge information than that produced by BEMD. Simulation experiments on four groups of infrared and visible images are conducted to verify the superiority and validity of the proposed fusion method. Three objective evaluation indices, namely, average gradient, spatial frequency, and mutual information, are employed to evaluate the fusion results quantitatively. The results show that the proposed method outperforms the five compared methods in terms of both objective assessment and subjective visual quality.
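The two fusion rules above can be sketched as per-pixel weighted averaging, driven by local region variance for the high-frequency WIMFs and by local region energy for the residual WR. The window radius and the box-filter local statistics here are illustrative assumptions; the paper's exact window sizes and selection-versus-weighting thresholds are not reproduced:

```python
import numpy as np

def box_mean(a, r=2):
    """Local mean over a (2r+1) x (2r+1) box, edge-padded."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros_like(a, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def fuse_wimf(a, b, r=2):
    """High-frequency rule: weight each pixel by local variance,
    so the source with stronger local detail dominates."""
    va = box_mean(a * a, r) - box_mean(a, r) ** 2
    vb = box_mean(b * b, r) - box_mean(b, r) ** 2
    w = va / (va + vb + 1e-12)
    return w * a + (1.0 - w) * b

def fuse_residue(a, b, r=2):
    """Residue rule: weight each pixel by local energy,
    favoring the source with stronger local intensity."""
    ea = box_mean(a * a, r)
    eb = box_mean(b * b, r)
    w = ea / (ea + eb + 1e-12)
    return w * a + (1.0 - w) * b
```

Applying `fuse_wimf` level by level and `fuse_residue` to the residues, then summing the fused components, yields the fused image described in the abstract.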
Visually, the proposed method not only preserves the rich scene information of the visible image but also effectively highlights the hot-target information of the infrared image. Its fused results have high contrast, rich edge details, and prominent targets, and they are clearly better than the results generated by the five compared methods. Objectively, the proposed algorithm achieves the best average gradient and spatial frequency and is superior to the compared algorithms in mutual information in nearly all cases. Conclusion A new fusion method for infrared and visible images based on BEMD and the W-transform is proposed. According to the characteristics of the W-BEMD decomposition of the source images, different fusion rules are designed for different frequency bands. Four groups of infrared and visible images are employed for performance evaluation. Analysis shows that, compared with the other algorithms, the proposed algorithm is more effective in preserving details of the visible images and highlighting target information of the infrared images.
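The three objective indices used above have standard definitions and can be computed directly from the fused image; a compact numpy version follows (the bin count for the mutual-information histogram is a common but assumed choice):

```python
import numpy as np

def avg_gradient(img):
    """Average gradient (AG): mean magnitude of the forward
    differences, a measure of overall sharpness."""
    img = img.astype(float)
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(img):
    """Spatial frequency (SF): root of the summed squared
    row-frequency and column-frequency terms."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))

def mutual_info(a, b, bins=64):
    """Mutual information (MI) between two images via a joint
    histogram; for fusion, MI(F, A) + MI(F, B) is reported."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```

Higher AG and SF indicate sharper, more detailed fused images, while higher MI indicates that the fused image retains more information from the sources.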