    Details
    Presenter(s)
    Display Name
    Junhan Zhu
    Affiliation
    School of Electronic Science and Engineering, Nanjing University
    Abstract

    In this paper, a weight compression technique named Flexible-width Bit-level (FWBL) coding is proposed to compress convolutional neural network (CNN) models without re-training. FWBL splits the weight parameters into independent, size-optimized blocks and uses just-enough bits for each block. Bit-level run-length coding is applied to the high bits (HBs) to further compress the redundancy arising from the non-uniform weight distribution. We implemented a configurable hardware decoder and synthesized it in TSMC 28nm technology. Results show that FWBL achieves an average compression ratio of 1.6, which is close to that of Huffman coding and better than other general weight compression techniques. The decoder achieves a throughput of 3.7 GB/s at 1.1 GHz with a power dissipation of 3.55 mW, which is 17.9x better in throughput and 21x better in energy efficiency than prior work. Implemented on an FPGA, our decoder is 3.36x better in throughput and 4.96x better in area efficiency than various Huffman decoders, making it a promising weight compression technique for mobile CNN applications.
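    The core idea of block-wise just-enough-bits coding can be sketched as follows. This is a hedged illustration, not the paper's implementation: the block size, the 4-bit width header, and the assumption of unsigned quantized weights are all illustrative choices, and the paper's additional steps (block size optimization and bit-level run-length coding of the high bits) are omitted.

    ```python
    import numpy as np

    def fwbl_encode(weights, block_size=64):
        """Illustrative sketch of the just-enough-bits idea behind FWBL:
        split quantized weights into fixed-size blocks and record each
        block with the smallest bit width that covers its largest value.
        Assumes unsigned quantized weights; block_size is an assumption."""
        blocks = []
        for i in range(0, len(weights), block_size):
            block = weights[i:i + block_size]
            # just-enough bits: smallest width representing the block's max value
            width = max(int(block.max()).bit_length(), 1)
            blocks.append((width, [int(w) for w in block]))
        return blocks

    def fwbl_size_bits(blocks, header_bits=4):
        # total payload: a small per-block width header plus the packed values
        return sum(header_bits + width * len(vals) for width, vals in blocks)

    # Small-magnitude weights need few bits per block, so the total payload
    # shrinks versus a uniform 8-bit representation.
    rng = np.random.default_rng(0)
    w = rng.integers(0, 16, size=256)          # 8-bit weights, small magnitudes
    compressed_bits = fwbl_size_bits(fwbl_encode(w))
    ratio = (8 * len(w)) / compressed_bits
    ```

    Because real CNN weight distributions are concentrated around small magnitudes, most blocks need far fewer bits than the nominal width, which is where the compression gain comes from.
    
    
    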

    Slides
    • Flexible-Width Bit-Level Compressor for Convolutional Neural Network (application/pdf)