Video s3
    Presenter(s)
    Cecilia Eugenia De la Parra
    Affiliation
    Robert Bosch GmbH
    Abstract

    Approximate computing is a promising paradigm for reducing the computational requirements of DNNs by exploiting their error resilience. In particular, using approximate multipliers in DNN inference can significantly lower the power consumption of embedded DNN applications. This paper presents a methodology for efficient approximate multiplier selection and for full, uniform approximation of large DNNs through retraining and minimization of the approximation error. We evaluate our methodology on 422 approximate multipliers from the EvoApprox library, using three residual architectures trained on CIFAR-10, and achieve energy savings of up to 58% with an accuracy loss of only 0.73%.
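    The multiplier-selection step described above compares candidate circuits by their error relative to exact multiplication. As a minimal sketch (not an actual EvoApprox circuit, and not the paper's exact metric), a hypothetical truncating 8-bit multiplier and a mean-absolute-error score over all input pairs could look like:

    ```python
    # Hypothetical approximate multiplier: drops the low bits of the exact
    # product. Real EvoApprox circuits are gate-level designs; this is only
    # an illustrative stand-in.
    def approx_mul(a, b, drop_bits=4):
        exact = a * b
        return (exact >> drop_bits) << drop_bits

    # Mean absolute error over all 8-bit unsigned input pairs -- one common
    # metric for ranking candidate multipliers before retraining.
    def mean_abs_error(mul, bits=8):
        n = 1 << bits
        total = sum(abs(a * b - mul(a, b)) for a in range(n) for b in range(n))
        return total / (n * n)

    print(mean_abs_error(approx_mul))
    ```

    In a selection flow, a score like this (or a DNN-aware variant of it) would be computed for each candidate in the library, and the lowest-error multipliers at a given energy budget would be carried into retraining.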
