Besides a model that turns 2D pictures into 3D objects, NVIDIA researchers have also developed a deep learning-based model that can automatically generate dance moves for music videos. NVIDIA says the model was trained on about 71 hours of dancing footage.
NVIDIA researchers, working in collaboration with the University of California, Merced, built the model to automatically compose new dance moves that are diverse, style-consistent, and matched to the beat of the music.
“This is a challenging but interesting generative task with the potential to assist and expand content creations in arts and sports, such as a theatrical performance, rhythmic gymnastics, and figure skating,” the NVIDIA researchers stated in a paper presented this week at the 2019 Conference on Neural Information Processing Systems (NeurIPS 2019) in Vancouver, Canada.
NVIDIA plans to post the source code and trained models on GitHub after the NeurIPS 2019 conference.