HidingGAN: High Capacity Information Hiding with Generative Adversarial Network

Bibliographic Details
Published in: Computer Graphics Forum Vol. 38; no. 7; pp. 393–401
Main Authors: Wang, Zihan, Gao, Neng, Wang, Xin, Xiang, Ji, Zha, Daren, Li, Linghui
Format: Journal Article
Language:English
Published: Oxford: Blackwell Publishing Ltd, 01.10.2019
ISSN:0167-7055, 1467-8659
Description
Summary: Image steganography is the technique of hiding secret information within images and is an important research direction in the security field. Benefiting from the rapid development of deep neural networks, many steganographic algorithms based on deep learning have been proposed. However, two problems remain: most existing methods are limited to small image sizes and low information capacity. In this paper, to address these problems, we propose a high‐capacity image steganographic model named HidingGAN. The proposed model uses a new secret‐information preprocessing method and an Inception‐ResNet block to better integrate the secret information with image features. Meanwhile, we introduce generative adversarial networks and a perceptual loss to keep the statistical characteristics of cover images and stego images consistent in a high‐dimensional feature space, thereby improving undetectability. In this way, our model achieves higher imperceptibility, security, and capacity. Experimental results show that HidingGAN reaches a capacity of 4 bits per pixel (bpp) at 256 × 256 pixels, improving over the previous best result of 0.4 bpp at 32 × 32 pixels.
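To make the abstract's capacity figures concrete, the sketch below shows a classical least‐significant‐bit (LSB) embedding at 1 bpp and computes the payloads implied by the reported numbers. This is not HidingGAN's learned embedding, only an illustration of what "bits per pixel" means for an image of a given size; all function names here are hypothetical.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide up to 1 bit per pixel in the least significant bit of a grayscale cover."""
    flat = cover.flatten().astype(np.uint8)  # flatten() copies, so the cover is untouched
    assert bits.size <= flat.size, "payload exceeds 1 bpp capacity"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # clear LSB, then write secret bit
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits hidden bits from the stego image."""
    return stego.flatten()[:n_bits] & 1

def payload_bits(bpp: float, width: int, height: int) -> int:
    """Total payload in bits for a given capacity (bpp) and image size."""
    return int(bpp * width * height)

# Round-trip check on a random cover and payload.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
secret = rng.integers(0, 2, size=1000, dtype=np.uint8)
stego = embed_lsb(cover, secret)
assert np.array_equal(extract_lsb(stego, secret.size), secret)

print(payload_bits(4, 256, 256))   # HidingGAN's reported setting: 262144 bits (32 KiB)
print(payload_bits(0.4, 32, 32))   # previous best reported: 409 bits
```

The comparison makes the scale of the claimed improvement visible: 4 bpp at 256 × 256 carries roughly 640× more payload than 0.4 bpp at 32 × 32.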
DOI:10.1111/cgf.13846