Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges
| Published in: | Frontiers in Neuroscience, Vol. 14, p. 634 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Lausanne: Frontiers Research Foundation, 24.06.2020 (Frontiers Media S.A.) |
| ISSN: | 1662-453X, 1662-4548 |
| Summary: | The two possible pathways towards artificial intelligence – (i) neuroscience-oriented neuromorphic computing (such as spiking neural networks, SNNs) and (ii) computer-science-driven machine learning (such as deep learning) – differ widely in their fundamental formalism and coding schemes (Pei et al., 2019). Deviating from the traditional deep learning approach of relying on neuronal models with static nonlinearities, SNNs attempt to capture brain-like features such as computation using spikes. This holds the promise of improving the energy efficiency of computing platforms. To achieve much higher areal and energy efficiency than today's hardware implementations of SNNs, we need to go beyond the traditional route of relying on CMOS-based digital or mixed-signal neuronal circuits and the segregation of computation and memory under the von Neumann architecture. Recently, ferroelectric field-effect transistors (FeFETs) have been explored as a promising alternative for building neuromorphic hardware by exploiting their non-volatile nature and rich polarization switching dynamics. In this work, we propose an all-FeFET-based SNN hardware that allows low-power spike-based information processing and co-localized memory and computing (a.k.a. in-memory computing). We experimentally demonstrate the essential neuronal and synaptic dynamics in a 28 nm high-K metal-gate FeFET technology. Furthermore, drawing inspiration from the traditional machine learning approach of optimizing a cost function to adjust the synaptic weights, we implement a surrogate gradient learning algorithm on our SNN platform that allows us to perform supervised learning on the MNIST dataset. As such, we provide a pathway towards building energy-efficient neuromorphic hardware that can support traditional machine learning algorithms. Finally, we undertake synergistic device-algorithm co-design by accounting for the impacts of device-level variation (stochasticity) and the limited bit precision of on-chip synaptic weights (available analog states) on the classification accuracy. |
| Bibliography: | Edited by: Kaushik Roy, Purdue University, United States. Reviewed by: Guoqi Li, Tsinghua University, China; Lyes Khacef, Université Côte d'Azur, France. This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience. |
| DOI: | 10.3389/fnins.2020.00634 |
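The summary above mentions two technical ingredients of the paper: surrogate gradient learning (training through the non-differentiable spike function by substituting a smooth surrogate derivative) and the limited bit precision of on-chip synaptic weights. The following is a minimal, self-contained sketch of both ideas, not the authors' code: the network size, constants, fast-sigmoid surrogate, uniform quantizer, and synthetic spike data are all illustrative assumptions standing in for the paper's FeFET hardware and MNIST setup.

```python
# Illustrative sketch (NOT the authors' implementation): one layer of leaky
# integrate-and-fire (LIF) neurons trained with a surrogate gradient, with
# weights uniformly quantized to mimic a finite number of FeFET analog states.
import numpy as np

rng = np.random.default_rng(0)

T, N_IN, N_OUT = 50, 100, 10     # time steps, inputs, outputs (assumed sizes)
BETA, V_TH = 0.9, 1.0            # membrane decay and firing threshold (assumed)

def surrogate_grad(v, slope=10.0):
    """Fast-sigmoid surrogate for the spike derivative (SuperSpike-style):
    dS/dv ~ 1 / (1 + slope * |v - V_TH|)^2."""
    return 1.0 / (1.0 + slope * np.abs(v - V_TH)) ** 2

def quantize(w, n_bits=4):
    """Uniform quantization to 2**n_bits levels, standing in for the limited
    number of programmable synaptic conductance states."""
    levels = 2 ** n_bits - 1
    w_max = np.max(np.abs(w)) + 1e-12
    return np.round(w / w_max * levels) / levels * w_max

def lif_forward(spikes_in, w):
    """Run LIF dynamics for T steps; return output spikes and membrane traces."""
    v = np.zeros(N_OUT)
    out_spikes, v_trace = [], []
    for t in range(T):
        v = BETA * v + w.T @ spikes_in[t]   # leaky integration of synaptic input
        s = (v >= V_TH).astype(float)       # non-differentiable spike function
        v_trace.append(v.copy())
        v = v * (1.0 - s)                   # reset membrane after a spike
        out_spikes.append(s)
    return np.array(out_spikes), np.array(v_trace)

# Synthetic Poisson spike trains and a one-hot target rate (MNIST stand-in).
x = (rng.random((T, N_IN)) < 0.3).astype(float)
target_rate = np.eye(N_OUT)[3]
w = 0.1 * rng.standard_normal((N_IN, N_OUT))

for step in range(200):
    out, v_trace = lif_forward(x, quantize(w))
    rate = out.mean(axis=0)                 # rate-coded readout
    err = rate - target_rate                # dLoss/drate for an MSE loss
    # Surrogate-gradient update: route the error through the surrogate
    # derivative at each time step (a crude one-layer approximation of BPTT).
    grad = np.zeros_like(w)
    for t in range(T):
        grad += np.outer(x[t], err * surrogate_grad(v_trace[t])) / T
    w -= 0.5 * grad

print("final output rates:", np.round(rate, 2))
```

Lowering `n_bits` in the sketch coarsens the available weight levels, which is a rough software analog of the paper's device-algorithm co-design question of how few analog FeFET states the classifier can tolerate before accuracy degrades.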