Attention-Empowered Residual Autoencoder for End-to-End Communication Systems


Bibliographic Details
Published in: IEEE Communications Letters, Vol. 27, no. 4, p. 1
Main Authors: Lu, Min; Zhou, Bin; Bu, Zhiyong
Format: Journal Article
Language:English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.04.2023
ISSN: 1089-7798, 1558-2558
Description
Summary: Channel autoencoders use neural networks to represent and optimize conventional block-based communication systems from an end-to-end perspective. The existing deep fully connected autoencoder can only handle fixed-length data, so it must be retrained whenever the length of the input bit sequence changes. A convolutional neural network (CNN)-based autoencoder accepts arbitrary input lengths and is therefore widely adopted, but it has two limitations: 1) an error floor in high signal-to-noise ratio (SNR) regions, and 2) poor performance against interference. We therefore first adopt a residual attention module to enhance the representation ability of the autoencoder, where channel and spatial attention focus on finding fine-grained clues that separate signal from noise. We further propose an adaptive data-flow merging scheme for the demapper to cope with dynamic environments. Simulation results show that the proposed method achieves significant gains in both coded and uncoded transmission scenarios, and that it is robust and generalizable against the error floor and burst interference compared with a conventional system.
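To illustrate the kind of module the abstract describes, the following is a minimal NumPy sketch of a channel-plus-spatial attention block with a residual connection (in the style of CBAM-like attention over a 1-D feature map of shape channels × length). The reduction ratio, the randomly initialized MLP weights, and the sigmoid gate used in place of a learned convolution are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention(x, reduction=2):
    # x: feature map of shape (C, L); squeeze over length, re-weight per channel.
    c, _ = x.shape
    avg = x.mean(axis=1)                      # (C,) average-pooled descriptor
    mx = x.max(axis=1)                        # (C,) max-pooled descriptor
    # Shared two-layer MLP; weights are random stand-ins for trained parameters.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # ReLU hidden layer
    scale = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))   # sigmoid gate in (0, 1)
    return x * scale[:, None]                 # broadcast per-channel weights

def spatial_attention(x):
    # Pool across channels, then gate each position along the sequence.
    avg = x.mean(axis=0)                      # (L,)
    mx = x.max(axis=0)                        # (L,)
    scale = 1.0 / (1.0 + np.exp(-(avg + mx)))  # stand-in for a learned conv
    return x * scale[None, :]

def residual_attention_block(x):
    # Attention-refined features are added back to the input (residual path),
    # so the block can fall back to identity if attention contributes little.
    return x + spatial_attention(channel_attention(x))
```

The residual path is what keeps such a block safe to stack: the attention branch only has to learn a refinement on top of the identity mapping, rather than the full transformation.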
DOI: 10.1109/LCOMM.2023.3242281