    Details
    Presenter(s)
    Zhen Li (Beijing Institute of Technology)
    Author(s)
    Su Zheng (State Key Laboratory of ASIC and System, Fudan University)
    Zhen Li (Beijing Institute of Technology)
    Yao Lu (State Key Laboratory of ASIC and System, Fudan University)
    Jingbo Gao (State Key Laboratory of ASIC and System, Fudan University)
    Jide Zhang (State Key Laboratory of ASIC and System, Fudan University)
    Lingli Wang (State Key Laboratory of ASIC and System, Fudan University)
    Abstract

    We propose an optimization method for the automatic design of approximate multipliers, which minimizes the average error according to the operand distributions. Our multiplier achieves up to 50.24% higher accuracy than the best reproduced approximate multiplier in DNNs, with 15.76% smaller area, 25.05% less power consumption, and 3.50% shorter minimum delay. Compared with an exact multiplier, our multiplier reduces the area, power consumption, and minimum delay by 44.94%, 47.63%, and 16.78%, respectively, with negligible accuracy losses. The tested DNN accelerator modules with our multiplier obtain up to 18.70% smaller area and 9.99% less power consumption than the original modules.
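    The core idea of minimizing average error according to operand distributions can be sketched as follows. This is a minimal illustrative example, not the HEAM design itself: `approx_mul` is a stand-in truncation-based approximate multiplier, and the uniform distribution is a placeholder for operand statistics profiled from a DNN.

    ```python
    import itertools

    def approx_mul(a, b, drop_bits=2):
        # Toy approximate multiplier: zero out the low bits of each operand
        # before multiplying (an illustrative stand-in, not the paper's design).
        mask = ~((1 << drop_bits) - 1)
        return (a & mask) * (b & mask)

    def expected_error(dist_a, dist_b, mul):
        # Distribution-weighted mean absolute error: sum over all operand
        # pairs of P(a) * P(b) * |approx(a, b) - a * b|. An optimizer would
        # search the multiplier design space to minimize this quantity.
        return sum(pa * pb * abs(mul(a, b) - a * b)
                   for (a, pa), (b, pb)
                   in itertools.product(dist_a.items(), dist_b.items()))

    # Uniform 4-bit operand distribution as a placeholder for profiled data;
    # in a DNN, weight and activation distributions are typically non-uniform,
    # which is what makes distribution-aware optimization pay off.
    uniform = {v: 1 / 16 for v in range(16)}
    err = expected_error(uniform, uniform, approx_mul)
    ```

    An exact multiplier scores zero under this metric; a distribution-aware design trades a small nonzero average error for the area, power, and delay savings reported in the abstract.
    
    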

    Slides
    • HEAM: High-Efficiency Approximate Multiplier Optimization for Deep Neural Networks (application/pdf)