    Details
    Presenter(s)
    Yuxin Zhang
    Affiliation
    University of Electronic Science and Technology of China
    Abstract

    Computation-in-memory (CIM) is a feasible approach to overcoming the von Neumann bottleneck while delivering high throughput and energy efficiency. In this paper, we propose a 1Mb multi-level cell (MLC) NOR-flash-based CIM (MLFlash-CIM) structure in a 40nm technology node. A multi-bit readout circuit is proposed to realize adaptive quantization; it comprises a current interface circuit, a multi-level analog shift amplifier (AS-Amp), and an 8-bit SAR ADC. When applied to a modified VGG-16 network with 16 layers, the proposed MLFlash-CIM achieves 92.73% inference accuracy on the CIFAR-10 dataset. The structure also achieves a peak throughput of 3.277 TOPS and an energy efficiency of 35.6 TOPS/W for 4-bit multiply-and-accumulate (MAC) operations.
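    The sketch below is a minimal behavioral model, not the paper's circuit: it assumes 4-bit activations and 4-bit MLC weights are multiplied cell-by-cell, summed as an ideal analog current on a column, and then digitized by an 8-bit ADC. All function and parameter names (mlc_cim_mac, act_bits, w_bits, adc_bits, the 64-cell column size) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def mlc_cim_mac(activations, weights, act_bits=4, w_bits=4, adc_bits=8):
    """Behavioral model of one CIM column doing a 4-bit x 4-bit MAC.

    Each cell contributes a current proportional to activation * weight;
    the column sum is then quantized by an ADC with adc_bits resolution.
    (Illustrative sketch only; the actual readout path in the paper uses
    a current interface, an analog shift amplifier, and a SAR ADC.)
    """
    acts = np.clip(np.asarray(activations, dtype=np.int64), 0, 2**act_bits - 1)
    wts = np.clip(np.asarray(weights, dtype=np.int64), 0, 2**w_bits - 1)

    # Ideal analog accumulation of per-cell products along the column.
    analog_sum = int(np.sum(acts * wts))

    # Full-scale column output: every cell at maximum activation and weight.
    full_scale = len(acts) * (2**act_bits - 1) * (2**w_bits - 1)

    # ADC readout: map the accumulated value onto 2**adc_bits - 1 levels.
    code = int(round(analog_sum / full_scale * (2**adc_bits - 1)))
    return code, analog_sum

# Example: a 64-cell column with random 4-bit operands.
rng = np.random.default_rng(0)
a = rng.integers(0, 16, size=64)
w = rng.integers(0, 16, size=64)
print(mlc_cim_mac(a, w))
```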

    Slides
    • A 40nm 1Mb 35.6 TOPS/W MLC NOR-Flash Based Computation-in-Memory Structure for Machine Learning (PDF)