    Details
    Author(s)
    Weidong Cao, Washington University in St. Louis
    Xuan Zhang, Washington University in St. Louis
    Abstract

    Compute-in-memory (CIM) has demonstrated great promise in accelerating numerous deep-learning tasks. However, existing analog CIM (ACIM) accelerators unavoidably suffer from frequent and energy-intensive analog-to-digital (A/D) conversions, severely limiting their performance and energy efficiency. In this paper, we propose A/D Alleviator, an energy-efficient mechanism that reduces A/D conversions in ACIM accelerators through an augmented analog accumulation dataflow. To this end, switched-capacitor-based multiplication-and-accumulation circuits are leveraged to connect the bitlines (BLs) of memory crossbar arrays to the final A/D conversion stage. In this way, analog partial sums can be accumulated both spatially, across all adjacent BLs that store high-precision weights, and temporally, across all input cycles, before the final quantization, thereby minimizing the need for explicit A/D conversions. Evaluations demonstrate that A/D Alleviator improves energy efficiency by $4.9\times$ and $1.9\times$ compared to state-of-the-art ACIM accelerators, while maintaining a high signal-to-noise ratio.
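
    To illustrate the accumulate-then-convert dataflow described in the abstract, the sketch below is a minimal behavioral model, not the paper's circuit implementation: it assumes bit-serial inputs, weight bit-slices stored on adjacent bitlines, and a uniform quantizer standing in for the A/D conversion stage; the function names (quantize, conventional_acim, ad_alleviator_like) are hypothetical.

        import numpy as np

        def quantize(x, bits, full_scale):
            """Uniform quantizer standing in for one A/D conversion."""
            step = full_scale / (2 ** bits - 1)
            return np.round(x / step) * step

        def conventional_acim(inputs, weight_slices, adc_bits=8):
            """Baseline: every bitline partial sum is converted in every input cycle."""
            T, N = inputs.shape          # T bit-serial input cycles, N crossbar rows
            S, _ = weight_slices.shape   # S weight bit-slices on adjacent bitlines
            total, conversions = 0.0, 0
            for t in range(T):
                for s in range(S):
                    ps = float(inputs[t] @ weight_slices[s])   # analog partial sum on one BL
                    ps = quantize(ps, adc_bits, full_scale=N)  # explicit A/D conversion
                    conversions += 1
                    total += (2 ** s) * (2 ** t) * ps          # digital shift-and-add
            return total, conversions

        def ad_alleviator_like(inputs, weight_slices, adc_bits=8):
            """Accumulate partial sums spatially (across BLs) and temporally
            (across input cycles) in the analog domain; quantize only once."""
            T, N = inputs.shape
            S, _ = weight_slices.shape
            acc = 0.0
            for t in range(T):
                for s in range(S):
                    acc += (2 ** s) * (2 ** t) * float(inputs[t] @ weight_slices[s])
            full_scale = N * (2 ** S - 1) * (2 ** T - 1)       # worst-case accumulated value
            return quantize(acc, adc_bits, full_scale), 1      # a single final conversion

        rng = np.random.default_rng(0)
        x = rng.integers(0, 2, size=(4, 64))   # 4-bit inputs streamed bit-serially, 64 rows
        w = rng.integers(0, 2, size=(4, 64))   # 4-bit weights split across 4 bitlines
        print(conventional_acim(x, w))         # 16 conversions per output
        print(ad_alleviator_like(x, w))        # 1 conversion per output

    Under these assumptions, the baseline performs one conversion per bitline per cycle (S x T in total), whereas the accumulate-then-convert flow performs a single conversion per output, which is the source of the energy savings the abstract reports.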