Video s3
    Presenter(s)
    Kuo-Wei Chang
    National Chiao Tung University
    Abstract

    Hardware acceleration of dilated and transposed convolution enables real-time execution of related tasks such as segmentation, but existing designs either target these convolution types exclusively or suffer from complex control in reconfigurable architectures. This paper presents a design that decomposes the input for dilated convolution and the weights for transposed convolution to skip redundant computations, allowing both to execute efficiently on existing dense CNN hardware as well. For the ENet case, the proposed architecture reduces cycle counts by 87.8%, an 8.2X speedup over naive execution.
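    The abstract does not give implementation details, but the input-decomposition idea for dilated convolution can be illustrated with a minimal NumPy sketch below. This is an assumed standard reformulation, not the paper's actual dataflow or architecture, and all function names are illustrative: the input is split into d x d subsampled phases, each phase is processed as an ordinary dense convolution (which dense CNN hardware already supports), and the partial outputs are interleaved back, so no zero-valued taps are ever multiplied. An analogous weight decomposition into sub-kernels applies to transposed convolution.

```python
import numpy as np

def dilated_conv2d_naive(x, w, d):
    """Reference dilated convolution (stride 1, 'valid' output)."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - d * (kh - 1), x.shape[1] - d * (kw - 1)
    out = np.zeros((oh, ow))
    for i in range(kh):
        for j in range(kw):
            out += w[i, j] * x[i * d : i * d + oh, j * d : j * d + ow]
    return out

def dense_conv2d(x, w):
    """Dense (dilation 1) convolution, stride 1, 'valid' output."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - (kh - 1), x.shape[1] - (kw - 1)
    out = np.zeros((oh, ow))
    for i in range(kh):
        for j in range(kw):
            out += w[i, j] * x[i : i + oh, j : j + ow]
    return out

def dilated_via_input_decomposition(x, w, d):
    """Split the input into d*d subsampled phases, run a dense convolution
    on each phase, and interleave the partial outputs into the result."""
    oh = x.shape[0] - d * (w.shape[0] - 1)
    ow = x.shape[1] - d * (w.shape[1] - 1)
    out = np.zeros((oh, ow))
    for p in range(d):
        for q in range(d):
            # Each subsampled phase sees the kernel as a dense filter.
            out[p::d, q::d] = dense_conv2d(x[p::d, q::d], w)
    return out

# Sanity check: both paths produce identical results.
x = np.random.rand(16, 16)
w = np.random.rand(3, 3)
assert np.allclose(dilated_conv2d_naive(x, w, 2),
                   dilated_via_input_decomposition(x, w, 2))
```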

    Slides