    Details
    Presenter(s)
    Wooyoung Jo
    Korea Advanced Institute of Science and Technology
    South Korea
    Author(s)
    Wooyoung Jo, Korea Advanced Institute of Science and Technology
    Sangjin Kim, Korea Advanced Institute of Science and Technology
    Juhyoung Lee, Korea Advanced Institute of Science and Technology
    Soyeon Um, Korea Advanced Institute of Science and Technology
    Zhiyong Li, KAIST
    Hoi-Jun Yoo, Korea Advanced Institute of Science and Technology
    Abstract

    A mixed-mode Computing-in-Memory (CIM) processor supporting mixed-precision Deep Neural Network (DNN) processing is proposed. Previous CIM processors cannot exploit the energy-efficient computation of mixed-precision DNNs. This paper proposes an energy-efficient mixed-mode CIM processor with two key features: 1) Mixed-Mode Mixed-precision CIM (M3-CIM), which achieves a 55.46% energy-efficiency improvement; 2) a Digital-CIM for in-memory MAC, which increases the throughput of M3-CIM. The proposed CIM processor was simulated in 28 nm CMOS technology and occupies 1.96 mm². It achieves a state-of-the-art energy efficiency of 161.6 TOPS/W with 72.8% accuracy on ImageNet (ResNet50).
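    The mixed-precision idea in the abstract — running different layers at different bit-widths so low-precision layers cost less energy while accuracy is preserved — can be illustrated in software with a toy quantized multiply-accumulate (MAC). This is a minimal sketch under illustrative assumptions (symmetric uniform quantization and hypothetical 8-bit/4-bit settings); the paper realizes the MAC in mixed-mode CIM hardware, not in code.

    ```python
    # Toy software model of the mixed-precision MAC that a CIM macro
    # accelerates in hardware. Quantization scheme and bit-widths are
    # illustrative assumptions, not taken from the paper.

    def quantize(values, bits):
        """Symmetric uniform quantization of floats to signed integers."""
        qmax = 2 ** (bits - 1) - 1
        scale = max(abs(v) for v in values) / qmax or 1.0
        return [round(v / scale) for v in values], scale

    def mixed_precision_mac(acts, wgts, a_bits, w_bits):
        """Quantize activations and weights to the given bit-widths,
        accumulate in integer arithmetic, then dequantize the result."""
        qa, sa = quantize(acts, a_bits)
        qw, sw = quantize(wgts, w_bits)
        acc = sum(a * w for a, w in zip(qa, qw))  # integer accumulate
        return acc * sa * sw

    acts = [0.5, -1.0, 0.25, 0.75]
    wgts = [1.0, 0.5, -0.5, 0.25]
    full = sum(a * w for a, w in zip(acts, wgts))       # float reference
    approx8 = mixed_precision_mac(acts, wgts, 8, 8)     # "high-precision" layer
    approx4 = mixed_precision_mac(acts, wgts, 4, 4)     # "low-precision" layer
    ```

    The 4-bit MAC trades accuracy for cost: its integer multiplies are far cheaper in hardware, which is the per-layer trade-off a mixed-precision CIM processor exploits.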

    Slides
    • A 161.6 TOPS/W Mixed-Mode Computing-in-Memory Processor for Energy-Efficient Mixed-Precision Deep Neural Networks (PDF)