Presenter(s)
![Vinay Joshi Headshot](https://confcats-catavault.s3.amazonaws.com/CATAVault/ieeecass/master/files/styles/cc_user_photo/s3/user-pictures/23901.jpg?h=33a8ebb4&itok=QS5Xfvnt)
Display Name
Vinay Joshi
- Affiliation: IBM Research - Zurich
Abstract
We propose a hybrid in-memory computing (HIC) architecture for training deep neural networks (DNNs) on hardware accelerators that yields memory-efficient inference and surpasses baseline software accuracy on benchmark tasks. We trained a ResNet-32 network to classify CIFAR-10 images using HIC. For a comparable model size, HIC-based training outperforms the baseline network trained in 32-bit floating-point (FP32) precision by leveraging an appropriate network width multiplier. Furthermore, we observe that HIC-based training achieves accuracy comparable to the baseline with an inference model roughly 50% smaller. We also show that temporal conductance drift in phase-change memory (PCM) devices has a negligible effect on post-training inference accuracy over extended periods.
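To illustrate the drift effect mentioned above: PCM conductances are widely modeled as decaying with the power law G(t) = G(t0) · (t/t0)^(−ν), where ν is the drift exponent. The sketch below (not the authors' code; the function name, the drift exponent ν = 0.05, and the uniform per-weight application are illustrative assumptions) applies this law to a toy weight matrix to show that drift acts as a slowly varying global scale, which is one reason inference accuracy can remain stable over long periods.

```python
import numpy as np

def apply_pcm_drift(weights, t, t0=1.0, nu=0.05):
    """Apply the empirical PCM conductance drift law
    G(t) = G(t0) * (t / t0) ** (-nu) to conductance-encoded weights.

    nu (drift exponent) is an assumed illustrative value; real devices
    show device- and state-dependent exponents.
    """
    return weights * (t / t0) ** (-nu)

# Toy "layer" of conductance-encoded weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))

# Drift over four decades of time: the weight matrix is rescaled
# uniformly, so relative weight ratios (and hence decision boundaries
# up to a global scale) are preserved.
for t in [1.0, 10.0, 100.0, 1000.0]:
    w_t = apply_pcm_drift(w, t)
    print(f"t={t:7.1f}  scale={np.linalg.norm(w_t) / np.linalg.norm(w):.4f}")
```

Because the decay here is a single multiplicative factor per readout time, it can in principle be compensated by one global calibration scalar, consistent with the abstract's observation that drift has little effect on post-training accuracy.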