Video s3
    Details
    Presenter(s)
    Yisong Kuang (Peking University)
    Author(s)
    Yisong Kuang (Peking University)
    Xiaoxin Cui (Peking University)
    Chenglong Zou (Peking University)
    Yi Zhong (Peking University)
    Zhenhui Dai (Peking University)
    Zilin Wang (Institute of Microelectronics, Peking University)
    Kefei Liu (Peking University)
    Dunshan Yu (Peking University)
    Yuan Wang (Peking University)
    Abstract

    Spiking neural networks (SNNs) are expected to be more energy-efficient than conventional artificial neural networks (ANNs). To better exploit the temporal sparsity of spikes and the spatial sparsity of weights in SNNs, this paper presents a sparse SNN accelerator. It adopts a novel self-adaptive spike compression and decompression mechanism that adjusts to different input spike sparsities, together with on-chip compressed weight storage and processing. We implement the octa-core design on an FPGA. The results demonstrate a peak performance of 35.84 GSOPs/s, equivalent to 358.4 GSOPs/s for a dense SNN accelerator at 90% weight sparsity.
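    The abstract's core idea, that work should scale with spike activity and non-zero weights rather than layer size, can be illustrated in software. The sketch below is not the paper's hardware design; it is a minimal, hypothetical Python model of event-driven accumulation over CSR-compressed weights, where only the rows of neurons that actually spiked are fetched (the function names and layout are illustrative assumptions).

    ```python
    import numpy as np

    def to_csr(dense):
        """Compress a dense weight matrix row-wise into (values, col_idx, row_ptr),
        analogous to storing only non-zero synapses on chip."""
        values, col_idx, row_ptr = [], [], [0]
        for row in dense:
            nz = np.nonzero(row)[0]
            values.extend(row[nz])
            col_idx.extend(nz)
            row_ptr.append(len(values))
        return np.array(values), np.array(col_idx), np.array(row_ptr)

    def event_driven_step(spike_events, values, col_idx, row_ptr, n_out):
        """Accumulate membrane-potential updates for a list of input spike indices.
        Cost is proportional to (#spikes) x (non-zeros per row), not to the
        dense layer dimensions."""
        v = np.zeros(n_out)
        for pre in spike_events:                        # only spiking neurons
            start, end = row_ptr[pre], row_ptr[pre + 1]
            v[col_idx[start:end]] += values[start:end]  # only non-zero synapses
        return v

    # Toy usage: 4 input neurons, 3 outputs, sparse weights.
    W = np.array([[0.5, 0.0, 0.2],
                  [0.0, 0.0, 0.0],
                  [0.1, 0.3, 0.0],
                  [0.0, 0.4, 0.0]])
    vals, cols, ptr = to_csr(W)
    v = event_driven_step([0, 2], vals, cols, ptr, n_out=3)
    # v == W[0] + W[2] = [0.6, 0.3, 0.2]
    ```

    In this model, a 90% weight sparsity means each row stores roughly one tenth of its dense entries, which is the intuition behind the paper's 10x equivalent-throughput claim (35.84 GSOPs/s sparse vs. 358.4 GSOPs/s dense-equivalent).
    
    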

    Slides
    • An Event-Driven Spiking Neural Network Accelerator with On-Chip Sparse Weight (PDF)