Details
![Yi Sheng Chong Headshot](https://confcats-catavault.s3.amazonaws.com/CATAVault/ieeecass/master/files/styles/cc_user_photo/s3/user-pictures/10301_0.jpg?h=7bc4e76b&itok=N6WYGOBj)
- Affiliation: Nanyang Technological University
- Country: Singapore
Resistive random access memory (RRAM) based computing-in-memory (CIM) is attractive for edge artificial intelligence (AI) applications thanks to its excellent energy efficiency, compactness, and high parallelism in matrix-vector multiplication (MatVec) operations. However, existing RRAM-based CIM designs often require complex programming schemes to finely control the RRAM cells toward the desired resistance states so that neural network classification accuracy is maintained. This leads to large area and energy overheads as well as low RRAM area utilization. A compact RRAM-based CIM design with a simple pulse-based programming scheme is therefore more desirable. To enable this, we propose a chip-in-the-loop training approach that compensates for the network performance drop caused by the stochastic behavior of the RRAM cells. Note that although the RRAM cells are targeted to only HRS and LRS (i.e., binary states), their inherent analog resistance values are used in the CIM operation. Our experiments with a 4-layer fully-connected binary neural network (BNN) show that, after retraining, the accuracy of the RRAM-based network is recovered regardless of the RRAM resistance distribution and the R_HRS / R_LRS ratio.
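The chip-in-the-loop idea above can be sketched in a few lines: binary weights are programmed to HRS/LRS targets, the "chip" performs the MatVec with the actual (variation-affected) analog conductances, and the gradient computed from those chip outputs updates a software master copy of the weights. The sketch below is a minimal NumPy illustration under assumptions of our own, not the paper's implementation: the conductance values, the log-normal device-variation model, and all function names (`program_rram`, `chip_forward`, `train_in_loop`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_rram(w_binary, g_lrs=100e-6, g_hrs=1e-6, sigma=0.15):
    """Map binary weights (+1/-1) to target LRS/HRS conductances, then
    apply log-normal variation to mimic stochastic pulse-based programming.
    (Conductance values and noise model are illustrative assumptions.)"""
    target = np.where(w_binary > 0, g_lrs, g_hrs)
    return target * rng.lognormal(mean=0.0, sigma=sigma, size=w_binary.shape)

def chip_forward(x, g_measured, g_lrs=100e-6, g_hrs=1e-6):
    """MatVec on the 'chip': the analog conductances (not the ideal binary
    targets) do the computation; rescale them back to weight units."""
    w_analog = 2.0 * (g_measured - g_hrs) / (g_lrs - g_hrs) - 1.0
    return x @ w_analog

def train_in_loop(X, y, n_out, epochs=50, lr=0.1):
    """Chip-in-the-loop retraining: each epoch reprograms the binarized
    master weights onto the (noisy) array, reads the chip's outputs, and
    applies the resulting gradient to the software master copy."""
    w_master = rng.normal(size=(X.shape[1], n_out))
    for _ in range(epochs):
        g = program_rram(np.sign(w_master))   # reprogram binary weights
        out = chip_forward(X, g)              # chip performs the MatVec
        err = out - y
        w_master -= lr * X.T @ err / len(X)   # update master copy in software
    return w_master
```

Because the gradient is computed from the chip's own variation-affected outputs, the master weights settle into sign patterns that work despite the spread of analog resistances, which is the mechanism that makes recovery insensitive to the exact resistance distribution.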