    Details

    Presenter(s)
        Yufei Gao (Southwest University of Science and Technology)

    Author(s)
        Yufei Gao (Southwest University of Science and Technology)
        Wenxin Yu (Southwest University of Science and Technology)
        Xuewen Zhang (Southwest University of Science and Technology)
        Zhiqiang Zhang (Hosei University)
        Xin Deng (Southwest University of Science and Technology)
    Abstract

    In this paper, we propose a learning framework based on a generative adversarial network (GAN) and a multi-feature fusion strategy to learn the association between music and dance, realizing a mapping from music to dance motion. The network fuses the style, beat, and structural features of the music and uses two discriminators to impose constraints on both style and coherence. Quantitative and qualitative experimental results show that our method can generate realistic, style-consistent, and beat-matched dance movements from music. In addition, we collected and produced a dataset containing music features and corresponding pose sequences, which facilitates music-driven pose generation.
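    The abstract describes fusing style, beat, and structural music features and constraining the generated motion with two discriminators, one for style and one for coherence. The paper's actual architecture is not given on this page; the following is only a hypothetical shape-level sketch of that setup, with NumPy stand-ins for the real networks and all feature and layer sizes invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fuse_features(style, beat, structure):
        """Multi-feature fusion: concatenate per-frame music features."""
        return np.concatenate([style, beat, structure], axis=-1)

    def generator(music_feats, pose_dim=51):
        """Toy generator: a single linear map from fused music features to a
        pose sequence of shape (T, pose_dim). A real model would be a deep
        (e.g. recurrent) network trained adversarially."""
        W = rng.standard_normal((music_feats.shape[-1], pose_dim)) * 0.01
        return np.tanh(music_feats @ W)

    def style_discriminator(poses, style):
        """Scores whether the whole dance matches the music style
        (sigmoid of a pooled linear score)."""
        w = rng.standard_normal(poses.shape[-1] + style.shape[-1]) * 0.01
        x = np.concatenate([poses.mean(axis=0), style.mean(axis=0)])
        return 1.0 / (1.0 + np.exp(-x @ w))

    def coherence_discriminator(poses):
        """Scores temporal coherence from frame-to-frame pose differences."""
        diffs = np.diff(poses, axis=0)              # (T-1, pose_dim)
        w = rng.standard_normal(poses.shape[-1]) * 0.01
        return 1.0 / (1.0 + np.exp(-diffs.mean(axis=0) @ w))

    # One music clip of T frames: per-frame style/beat/structure features
    # (dimensions 16/4/8 are arbitrary placeholders).
    T = 120
    style = rng.standard_normal((T, 16))
    beat = rng.standard_normal((T, 4))
    structure = rng.standard_normal((T, 8))

    fused = fuse_features(style, beat, structure)   # (120, 28)
    poses = generator(fused)                        # (120, 51)
    s_score = style_discriminator(poses, style)     # scalar in (0, 1)
    c_score = coherence_discriminator(poses)        # scalar in (0, 1)
    ```

    During adversarial training, the generator would be updated to raise both discriminator scores, so the output is pushed toward style-consistent and temporally coherent motion at the same time.
    
    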

    Slides
    • Music to Dance: Motion Generation Based on Multi-Feature Fusion Strategy (PDF)