    Details
    Author(s): Luigi Carro
    Affiliation: Universidade Federal do Rio Grande do Sul
    Abstract

    In this paper, we propose a strategy that simultaneously reduces the number of analog-to-digital (AD) and digital-to-analog (DA) conversions while also reducing the total number of ReRAMs required to compute a neural network (NN). We achieve this by deploying a crossbar-friendly pruning technique, and we show how the ReRAMs can be reprogrammed to compute the activation functions and pooling layers. Experiments on real-world Human Activity Recognition and speech recognition datasets demonstrate that our device outperforms analog and off-the-shelf digital approaches by up to 17.8x, providing flexibility with reduced power and high performance.
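
    The abstract does not spell out what "crossbar-friendly" pruning means. A common interpretation, sketched below as an assumption rather than the authors' exact method, is structured pruning at crossbar-column granularity: removing entire weight-matrix columns (bitlines) lets whole ADC channels be eliminated, whereas unstructured pruning of scattered weights leaves every converter in place. The helper `crossbar_column_prune` and its parameters are hypothetical, for illustration only.

    ```python
    import numpy as np

    def crossbar_column_prune(W, keep_ratio=0.5):
        """Prune whole columns of W, keeping the keep_ratio fraction
        with the largest L2 norm. Removing full columns corresponds to
        removing entire crossbar bitlines and their AD converters."""
        norms = np.linalg.norm(W, axis=0)          # per-column importance
        k = max(1, int(round(keep_ratio * W.shape[1])))
        keep = np.sort(np.argsort(norms)[-k:])     # indices of surviving columns
        return W[:, keep], keep

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 6))                    # toy 8x6 weight matrix
    Wp, kept = crossbar_column_prune(W, keep_ratio=0.5)
    print(Wp.shape)                                # half the columns remain: (8, 3)
    ```

    Column (rather than element) granularity is the design choice that makes the pruning hardware-friendly: the saved area and energy map directly onto fewer crossbar columns and fewer conversions, which is the reduction the abstract claims.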