Single RRAM Cell-Based In-Memory Accelerator Architecture for Binary Neural Networks
    Details
    Presenter(s)
    Hyunmyung Oh
    Affiliation
    Pohang University of Science and Technology
    Abstract

    As Binary Neural Networks (BNNs) have started to show promising performance with limited memory and computational cost, various RRAM-based in-memory BNN accelerator designs have been proposed. While a single RRAM cell can represent a binary weight, previous designs had to use two RRAM cells per weight to enable the XNOR operation between a binary weight and a binary activation. In this work, we propose converting the XNOR-based computation to RRAM-friendly multiplication without any accuracy loss, so that the required number of RRAM cells is reduced by half. Because fewer cells are needed to compute a BNN model, the energy and area overheads are also reduced. Experimental results show that the proposed in-memory accelerator architecture achieves ∼1.9× area efficiency improvement and ∼1.8× energy efficiency improvement over previous architectures on various image classification benchmarks.
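The abstract does not spell out the conversion, but a standard algebraic identity illustrates how an XNOR popcount over {0,1} bits can be rewritten using only per-cell multiplications (ANDs) plus terms that need no per-cell storage: XNOR(w, a) = 2·w·a − w − a + 1, so the sum over a dot product needs only Σ w·a per cell, with Σ w fixed per weight column and Σ a shared across columns. The sketch below checks this identity numerically; it is an illustration of the general technique, not necessarily the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
w = rng.integers(0, 2, N)  # binary weights, one bit per (single) RRAM cell
a = rng.integers(0, 2, N)  # binary activations

# XNOR-popcount form (what two-cells-per-weight designs compute directly)
xnor_sum = int(np.sum(w == a))

# Multiplication-only form: per-cell AND (w * a), plus correction terms
# sum(w), sum(a), and N that do not require a second cell per weight
mul_sum = 2 * int(np.sum(w * a)) - int(np.sum(w)) - int(np.sum(a)) + N

assert xnor_sum == mul_sum  # identical result, no accuracy loss
```

Because Σ w is a constant known at weight-mapping time and Σ a is computed once per input vector, only the AND term has to be evaluated inside the RRAM array, which is what allows one cell per weight.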

    Slides
    • Single RRAM Cell-Based In-Memory Accelerator Architecture for Binary Neural Networks (application/pdf)