SNNIM: A 10T-SRAM Based Spiking-Neural-Network-in-Memory Architecture with Capacitance Computation
    Details
    Presenter(s)
    Bo Wang (Southeast University, China)
    Author(s)
    Bo Wang (Southeast University)
    Chen Xue (Southeast University)
    Han Liu (Southeast University)
    Xiang Li (Southeast University)
    Anran Yin (Southeast University)
    Zhongyuan Feng (Southeast University)
    Yuyao Kong (Southeast University)
    Tianzhu Xiong (Southeast University)
    Haiming Hsu (Southeast University)
    Yongliang Zhou (Southeast University)
    An Guo (Southeast University)
    Yufei Wang (Southeast University)
    Jun Yang (National University of Defense Technology / 63rd Research Institute)
    Xin Si (Southeast University)
    Abstract

    Spiking Neural Networks (SNNs) have natural advantages in high-speed signal processing and large-scale data operations. However, due to the complex implementation of synaptic arrays, SNN-based accelerators may suffer from low area utilization and high energy consumption. Computing-In-Memory (CIM) shows great potential for performing intensive computations with high energy efficiency. In this work, we propose a 10T-SRAM-based Spiking-Neural-Network-In-Memory architecture (SNNIM) in a 28nm CMOS technology node. A compact 10T-SRAM bit-cell is developed to realize signed 5-bit synapse arrays and configurable bias arrays (SYBIA). The soma array, based on standard 8T-SRAM (SMTA), stores the soma membrane voltage and the threshold value. A capacitance computation scheme (CCA) between the two arrays is proposed to support various SNN operations. The proposed SNNIM achieves an energy efficiency of 25.18 TSyOPS/W and more than 1.79× better array efficiency compared with previous works.
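    To make the abstract's terms concrete, the sketch below models the neuron dynamics such an accelerator evaluates: signed 5-bit synaptic weights (as in the SYBIA arrays), a stored membrane voltage compared against a stored threshold (as in the SMTA array), and a fire-and-reset update. This is a minimal software illustration of a generic leaky-integrate-and-fire (LIF) neuron; the function names, parameters, and reset behavior are assumptions for illustration, not details of the SNNIM circuit.

    ```python
    # Minimal sketch of the SNN operations an in-memory accelerator performs.
    # All names and parameter values here are illustrative assumptions,
    # not taken from the SNNIM design.
    import numpy as np

    def quantize_weights(w, bits=5):
        """Quantize real-valued weights to signed integers; for 5 bits the
        representable range is [-16, 15], matching signed 5-bit synapses."""
        lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        return np.clip(np.round(w), lo, hi).astype(np.int32)

    def lif_step(v_mem, spikes_in, weights, bias, v_th=64, leak=1):
        """One timestep of a leaky-integrate-and-fire update:
        accumulate weighted input spikes plus a bias into the membrane
        voltage, emit a spike where it reaches the threshold, then reset."""
        v_mem = v_mem + weights @ spikes_in + bias - leak
        spikes_out = (v_mem >= v_th).astype(np.int32)
        v_mem = np.where(spikes_out == 1, 0, v_mem)  # reset-to-zero on fire
        return v_mem, spikes_out
    ```

    In hardware, the weighted accumulation and the threshold comparison are what the capacitance computation scheme carries out between the synapse and soma arrays, rather than as explicit arithmetic.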

    Slides
    • SNNIM: A 10T-SRAM Based Spiking-Neural-Network-in-Memory Architecture with Capacitance Computation (PDF)