NVIDIA AI turns regular video into super slow motion (video)

Posted on Monday, June 18 2018 @ 17:32 CEST by Thomas De Maesschalck
Another interesting deep learning research project from NVIDIA: the firm has developed a system that turns regular 30fps video into slow-motion clips of 240fps or 480fps. To achieve this feat, the NVIDIA researchers used NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework to analyze over 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network was able to predict the extra frames required to turn regular video into super slow motion.
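To get a feel for the basic idea, here is a minimal sketch of frame interpolation in plain Python/NumPy. It uses naive linear cross-fading between two frames, which is only a crude stand-in for NVIDIA's learned, optical-flow-based approach (the function name and all parameters are illustrative, not from the paper):

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Generate n_intermediate frames between frame_a and frame_b
    by simple linear cross-fading -- a naive stand-in for the
    learned flow-based interpolation in NVIDIA's system."""
    a = frame_a.astype(np.float64)
    b = frame_b.astype(np.float64)
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)  # interpolation weight in (0, 1)
        frames.append(((1.0 - t) * a + t * b).astype(frame_a.dtype))
    return frames

# Going from 30fps to 240fps requires 7 new frames per original pair.
frame_a = np.zeros((4, 4, 3), dtype=np.uint8)        # dark frame
frame_b = np.full((4, 4, 3), 240, dtype=np.uint8)    # bright frame
mids = interpolate_frames(frame_a, frame_b, 7)
print(len(mids))             # 7
print(mids[3][0, 0, 0])      # 120 (halfway between 0 and 240)
```

Cross-fading like this produces ghosting on real footage with motion; the point of NVIDIA's research is to instead estimate motion between frames so the synthesized frames are, as the researchers put it, spatially and temporally coherent.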
“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball,” the researchers wrote in the research paper. “While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,” the team explained.

“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers said. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”
You can check out a video of the work below. NVIDIA has not released the actual code.



About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. He enjoys playing with new tech, is fascinated by science, and is passionate about financial markets. When not behind a computer, he can be found with running shoes on or lifting heavy weights in the weight room.


