ShareFloat CIM: A Compute-in-Memory Architecture with Floating-Point Multiply-and-Accumulate Operations
    Details
    Presenter(s)
    An Guo (Southeast University, China)
    Author(s)
    An Guo, Southeast University
    Yongliang Zhou, Southeast University
    Bo Wang, Southeast University
    Tianzhu Xiong, Southeast University
    Chen Xue, Southeast University
    Yufei Wang, Southeast University
    Xin Si, Southeast University
    Jun Yang, National University of Defense Technology / 63rd Research Institute
    Abstract

    Compute-in-memory (CIM) has been widely explored to overcome the “von Neumann bottleneck,” owing to its high throughput and energy efficiency. However, recent CIM designs support only integer (INT) multiply-and-accumulate (MAC) operations, while floating-point MACs are required for both high-performance training and high-accuracy inference. In this paper, we propose a ShareFloat CIM architecture that supports floating-point MAC (FP-MAC) operations. Neural networks using ShareFloat MACs achieve almost the same accuracy as those using FP64 MACs. A 28nm 64Kb ShareFloat CIM macro was further implemented, achieving an energy efficiency of 18.8 TFLOPS/W and 73.11% accuracy for a VGG-16 network with ShareFloat MACs on the CIFAR-100 dataset.
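    The abstract does not describe the ShareFloat format itself, but the name suggests a block-shared-exponent scheme: the values in a group share one exponent, so their aligned mantissas can be multiplied and accumulated as plain integers inside the array, with a single floating-point rescale at the end. The NumPy sketch below is a hypothetical illustration of that idea only; the group size, 8-bit mantissa width, and function names are assumptions, not details taken from the paper.

    import numpy as np

    def quantize_group(x, mant_bits=8):
        # One exponent is shared by the whole group: pick it so every
        # value fits in (-1, 1] after scaling (all-zero groups get exp 0).
        max_abs = float(np.max(np.abs(x)))
        shared_exp = int(np.ceil(np.log2(max_abs))) if max_abs > 0.0 else 0
        # Weight of one integer mantissa step under the shared exponent.
        step = 2.0 ** shared_exp / 2 ** (mant_bits - 1)
        # Rounding/clipping to signed mant_bits-wide integers is the
        # only place precision is lost.
        lo, hi = -(2 ** (mant_bits - 1)), 2 ** (mant_bits - 1) - 1
        mant = np.clip(np.round(x / step), lo, hi).astype(np.int64)
        return mant, step

    def sharefloat_mac(weights, inputs, mant_bits=8):
        w_mant, w_step = quantize_group(weights, mant_bits)
        x_mant, x_step = quantize_group(inputs, mant_bits)
        # Pure integer MAC: the operation a conventional INT CIM array
        # already performs on the stored mantissas.
        acc = int(np.dot(w_mant, x_mant))
        # A single FP rescale per group turns the INT accumulation back
        # into a floating-point dot product.
        return acc * w_step * x_step

    rng = np.random.default_rng(0)
    w, x = rng.normal(size=64), rng.normal(size=64)
    print(sharefloat_mac(w, x))  # close to np.dot(w, x), up to rounding
    print(np.dot(w, x))

    Under this reading, the accuracy cost relative to FP64 comes only from the per-group mantissa rounding, while the in-array arithmetic stays integer-only.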

    Slides
    • ShareFloat CIM: A Compute-in-Memory Architecture with Floating-Point Multiply-and-Accumulate Operations (PDF)