    Details
    Presenter(s)
    Vinay Joshi
    Affiliation
    IBM Research - Zurich
    Abstract

    We propose a hybrid in-memory computing (HIC) architecture for the training of DNNs on hardware accelerators that delivers memory-efficient inference and outperforms the baseline software accuracy on benchmark tasks. We trained the ResNet-32 network to classify CIFAR-10 images using HIC. For a comparable model size, HIC-based training outperforms the baseline network, trained in floating-point 32-bit (FP32) precision, by leveraging an appropriate network width multiplier. Furthermore, we observe that HIC-based training yields an inference model roughly 50% smaller than the baseline at comparable accuracy. We show that the temporal drift in PCM devices has a negligible effect on post-training inference accuracy over extended periods.
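
    As a minimal sketch of the drift claim above: PCM conductances are commonly modeled with the power-law drift G(t) = G(t0) * (t/t0)^(-nu). The snippet below applies this model to the weights of one layer and reports how far they deviate over time; the function name, the drift-coefficient values, and the per-device variability are illustrative assumptions, not details from the talk.

    import numpy as np

    def apply_pcm_drift(weights, t, t0=1.0, nu=0.06):
        """Apply the power-law PCM conductance drift model elementwise.

        G(t) = G(t0) * (t / t0) ** (-nu); nu may be a scalar or a
        per-device array. Drift coefficients around 0.05-0.1 are
        typical for PCM (the values here are illustrative).
        """
        return weights * (t / t0) ** (-nu)

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, size=(64, 64))      # illustrative layer weights
    nu = rng.normal(0.06, 0.01, size=w.shape)    # assumed per-device drift variability

    # Relative weight deviation at 1 s, 1 h, 1 day, and 30 days after programming.
    for t in (1.0, 3600.0, 86400.0, 86400.0 * 30):
        w_drift = apply_pcm_drift(w, t, nu=nu)
        rel_err = np.linalg.norm(w_drift - w) / np.linalg.norm(w)
        print(f"t = {t:>10.0f} s  relative deviation = {rel_err:.3f}")

    Because the mean drift acts largely as a global scaling of a layer's output, a single calibration factor can compensate for much of it, which is one intuition for why drift can leave inference accuracy nearly unchanged over extended periods.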

    Slides
    • Hybrid In-Memory Computing Architecture for the Training of Deep Neural Networks (PDF)