Improved generative adversarial networks model for movie dance generation.

Journal: PloS one
Published Date:

Abstract

To address the challenges of innovation and efficiency in film choreography, this study proposes a dance generation model based on generative adversarial networks (GANs). The model is trained on the AIST++ dance motion dataset, which spans multiple dance styles, so that the generated sequences accurately reproduce varied stylistic and technical characteristics. The model integrates a music synchronization mechanism and dance structure constraints, ensuring that the generated dance aligns with the background music in rhythm and emotional expression while maintaining a coherent dance structure. Experimental results show that the proposed model achieves a peak signal-to-noise ratio (PSNR) of 28.5 dB in dance video generation, an improvement of 4.2 dB over traditional methods. The structural similarity index (SSIM) reaches 0.83, surpassing the 0.79 achieved by conventional approaches. In a blind evaluation, 85% of professional dancers judged the generated dance sequences highly consistent with the original dance styles, a 13% improvement over traditional methods. These findings indicate that the model effectively captures the detail and fluidity of complex dance movements, offering an innovative and efficient solution for film dance generation.
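The two video-quality figures cited above can be computed directly from frame pairs. Below is a minimal sketch of both measures, assuming 8-bit frames as NumPy arrays; the function names are illustrative and not from the paper, and the SSIM variant shown is a simplified global one (the standard metric averages over local windows):

```python
import numpy as np

def psnr(ref, gen, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between reference and generated frames."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, gen, max_val=255.0):
    """Simplified SSIM over the whole frame (illustrative; the standard
    metric is computed per local window and then averaged)."""
    x = ref.astype(np.float64)
    y = gen.astype(np.float64)
    # Stabilizing constants from the original SSIM formulation
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Under this convention, higher PSNR means lower mean-squared error between generated and reference frames, and SSIM of 1.0 indicates structurally identical frames, which is why the reported gains of 4.2 dB and 0.04 SSIM indicate closer reproduction of the reference dance video.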

Authors

  • Zhiqun Lin
    Music College of Capital Normal University, Beijing, China.
  • Kexin Feng
    Department of Breast Surgical Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China.