Optimizing Write Fidelity of MRAMs by Alternating Water-Filling Algorithm

Bibliographic Details
Published in: IEEE Transactions on Communications, Vol. 70, No. 9, pp. 5825-5836
Main Authors: Kim, Yongjune, Jeon, Yoocharn, Choi, Hyeokjin, Guyot, Cyril, Cassuto, Yuval
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2022
ISSN: 0090-6778, 1558-0857
Description
Summary: Magnetic random-access memory (MRAM) is a promising memory technology due to its high density, non-volatility, and high endurance. However, achieving high memory fidelity incurs high write-energy costs, which should be reduced for large-scale deployment of MRAMs. In this paper, we formulate a biconvex optimization problem to optimize write fidelity under energy and latency constraints. The basic idea is to allocate non-uniform write pulses depending on the importance of each bit position. The fidelity measure we consider is mean squared error (MSE), for which we optimize the write pulses via alternating convex search (ACS). By casting the MRAM write operation as communication over parallel channels, we derive analytic solutions and propose an alternating water-filling algorithm that is computationally more efficient than the original ACS while producing identical solutions. Since the formulated biconvex problem is non-convex, neither the original ACS nor the proposed algorithm guarantees global optimality; however, the MSEs obtained by the proposed algorithm are comparable to those obtained by sophisticated global nonlinear programming solvers. Furthermore, we prove that our algorithm reduces the MSE exponentially in the number of bits per word. For an 8-bit accessed word, the proposed algorithm reduces the MSE by a factor of 21. We also evaluate classification on the MNIST dataset, assuming that the model parameters of deep neural networks are stored in MRAMs. The numerical results show that the optimized write pulses achieve a 40% write-energy reduction at the same classification accuracy.
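
The summary refers to the classic water-filling allocation over parallel channels, which the paper's alternating variant builds on. For reference only, the following is a minimal sketch of that textbook allocation, not the paper's alternating algorithm; the function name, the bisection iteration count, and the example noise levels are illustrative assumptions.

```python
import numpy as np

def water_filling(noise, budget, iters=60):
    """Textbook water-filling over parallel channels (illustrative sketch).

    Each channel i receives power p_i = max(0, mu - noise_i), with the
    water level mu found by bisection so that sum(p_i) equals the budget.
    """
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + budget  # mu is bracketed in [lo, hi]
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > budget:
            hi = mu  # water level too high: allocation exceeds the budget
        else:
            lo = mu  # water level feasible: raise it
    return np.maximum(lo - noise, 0.0)

if __name__ == "__main__":
    # Hypothetical per-channel noise levels; lower-noise channels get more power.
    pulses = water_filling([0.1, 0.2, 0.4, 0.8], budget=2.0)
    print(pulses, pulses.sum())
```

Per the summary, the paper's setting treats the bit positions of a stored word as parallel channels and weights them by their contribution to the MSE, so more significant bits attract larger write pulses.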
DOI: 10.1109/TCOMM.2022.3190868