Very deep fully convolutional encoder–decoder network based on wavelet transform for art image fusion in cloud computing environment

Bibliographic details
Published in: Evolving Systems, Vol. 14, No. 2, pp. 281–293
Authors: Chen, Tong; Yang, Juan
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.04.2023
ISSN:1868-6478, 1868-6486
Online access: Full text
Description
Abstract: Big-data video images in the cloud computing environment carry large amounts of information; a single scene is typically covered by many images, none of which describes it sufficiently on its own. Traditional image fusion algorithms suffer from defects such as poor quality, low resolution, and information loss in the fused image. We therefore propose a very deep fully convolutional encoder–decoder network based on the wavelet transform for art image fusion in the cloud computing environment. The network builds on VGG-Net and comprises an encoder sub-network and a decoder sub-network. The images to be fused are decomposed by the wavelet transform into low-frequency and high-frequency sub-images at different scale spaces, and separate fusion schemes are given for the low-frequency and the high-frequency sub-band coefficients. Taking the structural similarity of the images before and after fusion as the objective, and introducing a weight factor for local image information, a loss function tailored to the final fusion is defined, so that the fused image retains the effective information of the different input images. Compared with other state-of-the-art image fusion methods, the proposed method achieves significant improvement in both subjective visual experience and objective quantitative indexes.
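The paper's deep encoder–decoder network and its exact fusion rules are not reproduced in this record. As a minimal sketch of the general wavelet-fusion scheme the abstract describes (decompose both inputs, fuse low- and high-frequency sub-bands with separate rules, invert the transform), the following NumPy example uses a single-level Haar wavelet with two common baseline rules: averaging for the low-frequency sub-band and maximum-absolute selection for the high-frequency sub-bands. The choice of Haar and of these fusion rules is an assumption for illustration, not the authors' method.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar transform: returns (LL, (LH, HL, HH))."""
    a = img[0::2, :] + img[1::2, :]          # row sums
    d = img[0::2, :] - img[1::2, :]          # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 4.0     # low-low (approximation)
    LH = (a[:, 0::2] - a[:, 1::2]) / 4.0     # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 4.0     # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 4.0     # diagonal detail
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    LH, HL, HH = bands
    a0, a1 = LL + LH, LL - LH                # undo column step
    d0, d1 = HL + HH, HL - HH
    h, w = LL.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = a0 + d0                # undo row step
    out[1::2, 0::2] = a0 - d0
    out[0::2, 1::2] = a1 + d1
    out[1::2, 1::2] = a1 - d1
    return out

def wavelet_fuse(img_a, img_b):
    """Fuse two same-sized images: average LL, max-abs for details."""
    LLa, Ha = haar_dwt2(img_a)
    LLb, Hb = haar_dwt2(img_b)
    LL = 0.5 * (LLa + LLb)                   # low-frequency rule: mean
    H = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # high-frequency rule:
              for x, y in zip(Ha, Hb))                # keep stronger detail
    return haar_idwt2(LL, H)
```

In the paper, the hand-crafted fusion rules above are replaced by a learned encoder–decoder trained with an SSIM-oriented loss, so this sketch only marks where that network would plug in.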
DOI:10.1007/s12530-022-09457-x