Very deep fully convolutional encoder–decoder network based on wavelet transform for art image fusion in cloud computing environment


Bibliographic Details
Published in: Evolving Systems, Vol. 14, No. 2, pp. 281-293
Main Authors: Chen, Tong; Yang, Juan
Format: Journal Article
Language:English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.04.2023
ISSN:1868-6478, 1868-6486
Description
Summary: Big-data video images in the cloud computing environment carry a large amount of information. The same scene is often captured in many images, yet no single image describes it sufficiently, and traditional image fusion algorithms suffer from defects such as poor quality, low resolution, and information loss in the fused image. We therefore propose a very deep fully convolutional encoder–decoder network based on the wavelet transform for art image fusion in the cloud computing environment. The network builds on VGG-Net and comprises an encoder sub-network and a decoder sub-network. The images to be fused are decomposed by the wavelet transform into low-frequency and high-frequency sub-images at different scale spaces, and separate fusion schemes are given for the low-frequency and high-frequency sub-band coefficients. Taking the structural similarity between the images before and after fusion as the objective, and introducing a weight factor for local information in the image, a loss function tailored to the final fusion is defined, so that the fused image accounts for the effective information of the different input images. Compared with other state-of-the-art image fusion methods, the proposed method achieves significant improvement in both subjective visual experience and objective quantitative indexes.
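The wavelet-decomposition stage described in the abstract can be sketched as follows. This is a minimal single-level Haar illustration in plain NumPy: the inputs are split into a low-frequency (LL) sub-band and three high-frequency (LH, HL, HH) sub-bands, the low-frequency bands are averaged, and the larger-magnitude coefficient is kept in each high-frequency band. The averaging and max-absolute fusion rules are common baseline choices assumed here, not necessarily the paper's exact schemes; the paper's learned encoder–decoder and SSIM-based loss are omitted from this sketch.

```python
import numpy as np

def haar_dwt2(img):
    # Single-level 2D Haar decomposition into LL, LH, HL, HH sub-bands.
    # Assumes even height and width.
    a = img[0::2, 0::2]  # top-left pixels of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    LL = (a + b + c + d) / 4.0  # low-frequency approximation
    LH = (a + b - c - d) / 4.0  # horizontal detail
    HL = (a - b + c - d) / 4.0  # vertical detail
    HH = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    # Exact inverse of haar_dwt2: rebuild each 2x2 block from the sub-bands.
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w), dtype=LL.dtype)
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def wavelet_fuse(img1, img2):
    # Decompose both inputs, average the low-frequency bands, and keep the
    # larger-magnitude coefficient in each high-frequency band, then invert.
    s1, s2 = haar_dwt2(img1), haar_dwt2(img2)
    LL = (s1[0] + s2[0]) / 2.0
    highs = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)
             for h1, h2 in zip(s1[1:], s2[1:])]
    return haar_idwt2(LL, *highs)
```

Because the Haar transform above is perfectly invertible, fusing an image with itself returns the original image, which makes the fusion rules easy to sanity-check before swapping in multi-scale decompositions or learned fusion weights.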
DOI:10.1007/s12530-022-09457-x