A Low-Complexity Neural Normalized Min-Sum LDPC Decoding Algorithm using Tensor-Train Decomposition
Saved in:
| Published in: | IEEE Communications Letters, Vol. 26, No. 12, p. 1 |
|---|---|
| Main authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 01.12.2022 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects: | |
| ISSN: | 1089-7798, 1558-2558 |
| Online access: | Full text |
| Abstract: | Compared with traditional low-density parity-check (LDPC) decoding algorithms, the current model-driven deep learning (DL)-based LDPC decoding algorithms face the disadvantage of high computational complexity. Based on the Neural Normalized Min-Sum (NNMS) algorithm, we propose a low-complexity model-driven DL-based LDPC decoding algorithm using Tensor-Train (TT) decomposition and syndrome loss function, called TT-NNMS+ algorithm. Our experiments show that the proposed TT-NNMS+ algorithm is more competitive than the NNMS algorithm in terms of bit error rate (BER) performance, memory requirement and computational complexity. |
|---|---|
| DOI: | 10.1109/LCOMM.2022.3207506 |
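The abstract attributes the reduction in memory and computational complexity to Tensor-Train (TT) decomposition of the decoder's learned weights. As an illustrative sketch only (not the paper's implementation; the function names, the use of numpy, and the example tensor shape are all assumptions), the standard TT-SVD procedure that factors a multi-way weight tensor into a chain of small cores can be written as:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    # Standard TT-SVD: sweep left-to-right, splitting off one 3-way core per
    # tensor mode via a truncated SVD. Illustrative sketch, not the paper's code.
    shape = tensor.shape
    d = len(shape)
    cores = []
    r_prev = 1
    C = tensor
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, S.size)            # truncate to the chosen TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = np.diag(S[:r]) @ Vt[:r]          # carry the remainder to the next mode
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Contract the TT cores back into a full tensor (e.g. to check the error).
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

With a small `max_rank`, the stored parameter count, the sum of the core sizes `r_{k-1} * n_k * r_k`, can be far below the full tensor's `prod(n_k)` entries, which is the kind of memory saving the abstract refers to; the BER cost of the truncation is what the paper's experiments evaluate.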