Video s3
    Details
    Presenter(s)
    Min Liu
    Affiliation
    Peking University Shenzhen Graduate School
    Abstract

    This paper proposes a new convolution paradigm for convolutional neural networks (CNNs) that efficiently skips the storage and computation of zeros in the input feature maps by exploiting the position information of the activations. Furthermore, by reusing each activation as much as possible, load balance is achieved among the processing elements at no extra hardware cost. With the proposed sparse convolution technique, the calculation speed is improved by 7.29x for convolutional layers with 90% sparsity, and by 2.59x when running VGG16 on the ImageNet2012 dataset, compared to the traditional convolution method. Implemented in a UMC 55-nm low-power CMOS technology, a CNN accelerator using the proposed technique achieves an effective energy efficiency of 1.94 TOPS/W at 100 MHz and a 1.08 V supply voltage.
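
    The abstract describes the zero-activation-skipping scheme only at the algorithmic level. As a rough illustration of the idea (not the paper's actual dataflow or hardware), the NumPy sketch below walks a compressed list of nonzero activation positions and scatters each activation's contribution to every output it touches, so zero activations cost neither memory traffic nor multiply-accumulates. The function name, data layout, and stride/padding assumptions are mine.

```python
import numpy as np

def sparse_conv2d(x, w, pad=1):
    """Activation-driven 2-D convolution (stride 1) that skips zero inputs.

    x: input feature map, shape (Cin, H, W)
    w: weights, shape (Cout, Cin, K, K)
    Returns y with shape (Cout, H + 2*pad - K + 1, W + 2*pad - K + 1).
    """
    Cout, Cin, K, _ = w.shape
    _, H, W = x.shape
    Ho, Wo = H + 2 * pad - K + 1, W + 2 * pad - K + 1
    y = np.zeros((Cout, Ho, Wo), dtype=np.result_type(x, w))

    # Compressed representation: only the positions of nonzero activations.
    nz_c, nz_h, nz_w = np.nonzero(x)

    for c, ih, iw in zip(nz_c, nz_h, nz_w):
        a = x[c, ih, iw]                  # fetch the nonzero activation once
        for kh in range(K):
            oh = ih + pad - kh            # output row this weight row maps to
            if oh < 0 or oh >= Ho:
                continue
            for kw in range(K):
                ow = iw + pad - kw
                if ow < 0 or ow >= Wo:
                    continue
                # Scatter: reuse the same activation across all output channels.
                y[:, oh, ow] += a * w[:, c, kh, kw]
    return y
```

    Each nonzero activation is read once and reused across all kernel weights and output channels it influences, which loosely mirrors the activation-reuse idea the abstract credits for achieving load balance across processing elements without extra hardware.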

    Slides
    • Efficient Zero-Activation-Skipping for On-Chip Low-Energy CNN Acceleration (PDF)