Details
Presenter: Yisong Kuang
Affiliation: Peking University
Abstract
Spiking neural networks (SNNs) are expected to be more energy-efficient than existing artificial neural networks (ANNs). To better exploit the temporal sparsity of spikes and the spatial sparsity of weights in SNNs, this paper presents a sparse SNN accelerator. It adopts a novel self-adaptive spike compression and decompression mechanism that adapts to different input spike sparsity levels, together with on-chip compressed weight storage and processing. We implement the octa-core design on an FPGA. The results demonstrate a peak performance of 35.84 GSOPs/s, which at 90% weight sparsity is equivalent to 358.4 GSOPs/s in a dense SNN accelerator.
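The dense-equivalent figure follows directly from the weight sparsity: at 90% sparsity only 10% of the synaptic operations a dense accelerator would perform are actually executed, so the measured sparse rate maps to a 10x higher dense-equivalent rate. A minimal sketch of that arithmetic, with an illustrative compressed (CSR-style) weight row; the function names and storage layout are assumptions for illustration, not the paper's actual design:

```python
def dense_equivalent_gsops(sparse_gsops, weight_sparsity):
    # With weight sparsity s, a sparse accelerator performs only a
    # fraction (1 - s) of the synaptic operations of a dense one, so
    # its measured rate corresponds to a 1/(1 - s) dense-equivalent rate.
    return sparse_gsops / (1.0 - weight_sparsity)

# 35.84 GSOPs/s measured at 90% weight sparsity.
print(round(dense_equivalent_gsops(35.84, 0.90), 2))  # -> 358.4

# Illustrative compressed weight row: only nonzero weights and their
# presynaptic indices are stored, so zero weights cost no storage or work.
values = [0.5, -1.2, 0.8]   # nonzero synaptic weights
indices = [3, 17, 42]       # presynaptic neuron ids for each weight

def accumulate(spiking, values, indices):
    # Accumulate membrane input only over stored (nonzero) synapses
    # whose presynaptic neuron spiked in this timestep.
    return sum(w for w, i in zip(values, indices) if i in spiking)

total = accumulate({17, 42}, values, indices)  # -1.2 + 0.8
```

Exploiting both sparsities multiplies: compressed weights skip zero synapses, while spike compression means only active presynaptic neurons trigger any work at all.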