    Author(s)
    Hoichang Jeong, Ulsan National Institute of Science and Technology
    Keonhee Park, Ulsan National Institute of Science and Technology
    Seungbin Kim, Ulsan National Institute of Science and Technology
    Jueun Jung, Ulsan National Institute of Science and Technology
    Kyuho Lee, UNIST
    Abstract

    A highly energy-efficient Compute-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed. Most previous works suffered from the poor linearity of analog computing and from energy-consuming ADCs. To resolve these issues, we propose a Ternary-CIM (T-CIM) processor with a 16T1C ternary bitcell that achieves good linearity in a compact area, and a Charge-based Psum Adder (CPA) circuit that removes the energy-consuming ADCs from the system. Furthermore, configurable data mapping enables execution of entire convolution layers with 16x smaller bitcell memory capacity. Designed in 65nm CMOS technology, the proposed T-CIM achieves 1,316 GOPS of peak performance and 823 TOPS/W of energy efficiency.
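    To illustrate the kind of operation such a TNN accelerator performs, the sketch below shows ternary weight quantization and a ternary multiply-accumulate in software. This is a minimal illustration under our own assumptions (the `ternarize` threshold and the NumPy formulation are not taken from the paper), not the T-CIM hardware itself: with weights restricted to {-1, 0, +1}, each "multiplication" reduces to an add, a subtract, or a skip, and the resulting partial sums (Psums) correspond to what the CPA circuit accumulates in the charge domain.

    import numpy as np

    def ternarize(w, threshold=0.05):
        """Quantize full-precision weights to {-1, 0, +1} with a dead-zone threshold (illustrative choice)."""
        t = np.zeros_like(w, dtype=np.int8)
        t[w > threshold] = 1
        t[w < -threshold] = -1
        return t

    def ternary_mac(activations, ternary_weights):
        """Accumulate ternary partial sums: add where w = +1, subtract where w = -1, skip where w = 0."""
        pos = activations[ternary_weights == 1].sum()
        neg = activations[ternary_weights == -1].sum()
        return pos - neg  # no true multipliers are needed

    rng = np.random.default_rng(0)
    x = rng.integers(0, 256, size=1024)           # e.g. 8-bit input activations
    w = ternarize(rng.normal(0, 0.1, size=1024))  # ternary weight vector
    print(ternary_mac(x, w))

    Because the per-weight work collapses to signed accumulation, the arithmetic maps naturally onto bitcell columns that add or subtract charge, which is the motivation for performing it in memory rather than in a digital MAC array.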