    Details
    Presenter(s)
    Display Name
    Jialiang Tang
    Affiliation
    Southwest University of Science and Technology
    Country
    China
    Abstract

    Convolutional neural networks (CNNs) are often over-parameterized and cannot be deployed on resource-limited artificial intelligence (AI) devices. Several methods have been proposed to compress CNNs, but they are data-driven and often fail when training data are unavailable. To solve this problem, we propose a data-free model compression and acceleration method based on generative adversarial networks and network pruning (named DFNP), which can train a compact neural network from nothing but a pre-trained network. DFNP consists of a source network, a generator, and a target network. First, the generator produces pseudo data under the supervision of the source network. The target network is then obtained by pruning the source network and is trained on the generated data, while the source network transfers knowledge to the target network so that it reaches performance close to that of the source network. When VGGNet-19 is selected as the source network, the target network trained by DFNP contains only 25% of the parameters and 65% of the computations of the source network, yet retains 99.4% of its accuracy on the CIFAR-10 dataset without using any real data.

    Slides
    • Data-Free Network Pruning for Model Compression (application/pdf)