Details
![An Guo Headshot](https://confcats-catavault.s3.amazonaws.com/CATAVault/ieeecass/master/files/styles/cc_user_photo/s3/user-pictures/188911.jpg?h=f5f3a149&itok=wGArX1Zf)
- Affiliation: Southeast University
- Country: China
Compute-in-memory (CIM) has been widely explored to overcome the "von Neumann bottleneck" owing to its high throughput and energy efficiency. However, recent compute-in-memory works support only integer (INT) multiply-and-accumulate (MAC) operations, whereas floating-point MACs are required to achieve both high-performance training and high-accuracy inference. In this paper, we propose a ShareFloat CIM architecture that supports floating-point MAC (FP-MAC) operations. Neural networks using ShareFloat MAC achieve almost the same accuracy as those using FP64 MAC. A 28nm 64Kb ShareFloat CIM macro was further implemented, achieving an energy efficiency of 18.8 TFLOPS/W and 73.11% accuracy on a VGG-16 network with ShareFloat MAC on the CIFAR-100 dataset.
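The abstract does not spell out how ShareFloat maps floating-point MACs onto an integer CIM array. The sketch below is only an illustration of the general idea one might assume from the name: a shared-exponent (block floating point) scheme in which all values in a MAC group share one exponent, their mantissas are quantized to integers and accumulated by the INT MAC array, and the result is rescaled once by the shared exponents. The function name `sharefloat_mac`, the parameter `mant_bits`, and the quantization details are hypothetical, not taken from the paper.

```python
import numpy as np

def sharefloat_mac(weights, activations, mant_bits=8):
    """Illustrative shared-exponent MAC (assumed ShareFloat-style scheme)."""
    def to_shared_exponent(x, bits):
        # Pick one exponent for the whole group so the largest magnitude
        # fits in a signed `bits`-bit integer mantissa.
        max_abs = np.max(np.abs(x))
        if max_abs == 0:
            return np.zeros_like(x, dtype=np.int64), 0
        exp = int(np.floor(np.log2(max_abs))) - (bits - 2)
        mant = np.clip(np.round(x / 2.0**exp),
                       -(2**(bits - 1)), 2**(bits - 1) - 1)
        return mant.astype(np.int64), exp

    w_mant, w_exp = to_shared_exponent(np.asarray(weights, dtype=np.float64), mant_bits)
    a_mant, a_exp = to_shared_exponent(np.asarray(activations, dtype=np.float64), mant_bits)

    # Integer multiply-and-accumulate: the part an INT-type CIM array computes.
    int_acc = int(np.dot(w_mant, a_mant))

    # One rescale by the two shared exponents recovers a floating-point result.
    return int_acc * 2.0**(w_exp + a_exp)

# Usage: compare the shared-exponent result against a double-precision dot product.
w = np.random.randn(64)
a = np.random.randn(64)
print(sharefloat_mac(w, a), np.dot(w, a))
```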