XBarNet: Computationally Efficient Memristor Crossbar Model Using Convolutional Autoencoder

Bibliographic Details
Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 41, No. 12, pp. 5489–5500
Main Authors: Zhang, Yuhang; He, Guanghui; Wang, Guoxing; Li, Yongfu
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2022
ISSN: 0278-0070, 1937-4151
Description
Summary: The design and verification of memristor crossbar circuits and systems demand computationally efficient models. Solving a memristor crossbar with a conventional device-level memristor model in a circuit simulator such as SPICE (simulation program with integrated circuit emphasis) is time-consuming. Hence, we propose a neural network-based memristor crossbar modeling method, XBarNet. By transforming memristor crossbar modeling into pixel-to-pixel regression, XBarNet avoids the iterative procedure of the conventional SPICE method, which significantly accelerates the runtime. Meanwhile, XBarNet models the interconnect resistance and the nonlinear I-V characteristics of memristor crossbars, which minimizes the simulation errors. We first propose a feature extraction method to bridge a memristor crossbar circuit and a neural network. Then, a network based on the convolutional autoencoder architecture is developed, and the filter pruning technique is applied to XBarNet to reduce the runtime computational cost. Experimental results show that the proposed XBarNet achieves over 78× runtime speedup and 1.7× memory reduction with only 0.28% relative error compared to the SPICE simulator.
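The pixel-to-pixel regression idea maps naturally onto an encoder-decoder CNN. Below is a minimal PyTorch sketch of a convolutional autoencoder for this kind of regression; the two-channel per-cell feature encoding, layer widths, and loss are illustrative assumptions, not the authors' exact XBarNet architecture.

# A minimal sketch of a convolutional autoencoder for pixel-to-pixel regression,
# in the spirit of XBarNet. The feature encoding (conductance + input-voltage
# channels), layer widths, and loss are illustrative assumptions, not the
# authors' exact architecture.
import torch
import torch.nn as nn

class CrossbarAutoencoder(nn.Module):
    def __init__(self, in_channels: int = 2, base: int = 16):
        super().__init__()
        # Encoder: downsample the crossbar "image" (rows x columns) into a
        # compact latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the original crossbar resolution and
        # regress one value per cell (e.g., a cell current or node voltage).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Illustrative usage: a batch of 64x64 crossbars with two input channels,
# e.g., per-cell conductance and the applied row voltage broadcast per row.
model = CrossbarAutoencoder()
features = torch.randn(8, 2, 64, 64)
target = torch.randn(8, 1, 64, 64)
loss = nn.functional.mse_loss(model(features), target)
loss.backward()

On top of such a network, the filter pruning step mentioned in the summary would correspond to removing low-importance convolutional filters after training to cut inference-time computation and memory.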
DOI: 10.1109/TCAD.2022.3163895