Infrared and visible image fusion using modified spatial frequency-based clustered dictionary

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 24, No. 2, pp. 575-589
Main Authors: Budhiraja, Sumit; Sharma, Rajat; Agrawal, Sunil; Sohi, Balwinder S.
Format: Journal Article
Language: English
Published: London: Springer London, 01.05.2021 (Springer Nature B.V.)
ISSN: 1433-7541, 1433-755X
Online Access: Full text
Description
Abstract: Infrared and visible image fusion is an active area of research, as it yields a fused image with better scene information and sharper features. Efficient fusion of images from multisensor sources remains a challenge. In this paper, an efficient image fusion method based on sparse representation with a clustered dictionary is proposed for infrared and visible images. First, the edge information of the visible image is enhanced using a guided filter. To extract more edge information, a modified spatial frequency measure is used to generate a clustered dictionary from the source images. Then, the non-subsampled contourlet transform (NSCT) is applied to decompose the source images into low-frequency and high-frequency sub-bands. The low-frequency sub-bands are fused using sparse coding, and the high-frequency sub-bands are fused using the max-absolute rule. The final fused image is obtained by applying the inverse NSCT. Subjective and objective evaluations show that the proposed method outperforms conventional image fusion methods.
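To make two of the building blocks named in the abstract concrete, here is a minimal NumPy sketch (not the authors' implementation): the classical spatial frequency measure on which the clustered dictionary is built, and the max-absolute rule used to fuse the high-frequency sub-bands. The paper's exact modified spatial frequency, the guided filter, the sparse coding step, and the NSCT are omitted; the function names are illustrative.

```python
import numpy as np

def spatial_frequency(patch):
    """Classical spatial frequency of an image patch:
    SF = sqrt(RF^2 + CF^2), where RF/CF are the RMS of the
    horizontal/vertical first differences. The paper's *modified*
    spatial frequency is a variant of this activity measure
    (commonly adding diagonal difference terms), not reproduced here.
    """
    p = patch.astype(np.float64)
    d_row = np.diff(p, axis=1)                 # horizontal differences
    d_col = np.diff(p, axis=0)                 # vertical differences
    rf = np.sqrt(np.sum(d_row ** 2) / p.size)  # row frequency
    cf = np.sqrt(np.sum(d_col ** 2) / p.size)  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_high_freq(band_a, band_b):
    """Max-absolute rule for high-frequency sub-bands: at every
    position keep the coefficient with the larger magnitude."""
    return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)

# Toy usage: patches with more edge activity score a higher SF,
# which is the kind of criterion used to cluster patches when
# building the dictionary.
flat = np.full((8, 8), 0.5)
edgy = np.tile([0.0, 1.0], (8, 4))
assert spatial_frequency(edgy) > spatial_frequency(flat)
```

Ranking patches by an activity measure such as spatial frequency lets dictionary learning concentrate on edge-rich patches, which is why the abstract ties the clustered dictionary to edge extraction.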
DOI: 10.1007/s10044-020-00919-z