Nvidia has developed a method to convert standard video recordings into detailed slow-motion video clips. The company trained a neural network to fill in the missing frames and simulate ultra-high framerate video.
Nvidia is always looking for new ways to demonstrate the power of its GPUs in deep-learning scenarios, and the company’s latest demo has our attention. Nvidia today revealed that it has developed a method to create high-quality slow-motion video from low-framerate source material using a trained neural network. The team of researchers that developed the technique plans to present its findings at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah on June 21.
Nvidia’s new AI-powered Super SloMo video system can analyze the existing frames of a 30-fps (or better) source video and automatically generate frames to fill in the gaps to produce smooth slow-motion video.
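The core idea is synthesizing new in-between frames from neighboring real ones. Nvidia’s system does this with a trained network that estimates motion between frames, but the basic concept of frame insertion can be sketched with naive linear blending — a deliberately simplified stand-in, not the company’s actual method, and the function name here is purely illustrative:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_new):
    """Naively synthesize n_new in-between frames by linear blending.

    Illustration only: Super SloMo uses a trained neural network that
    models motion between frames, not simple pixel crossfading.
    """
    frames = []
    for i in range(1, n_new + 1):
        t = i / (n_new + 1)  # interpolation weight, 0 < t < 1
        blended = (1.0 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Example: two tiny grayscale "frames"
a = np.zeros((2, 2), dtype=np.float32)
b = np.full((2, 2), 100.0, dtype=np.float32)
mids = interpolate_frames(a, b, 3)  # intermediate frames at 25%, 50%, 75%
```

Pure blending like this produces ghosting on anything that moves, which is exactly why a learned, motion-aware model is needed for convincing results.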
To train the system, a team of AI researchers at Nvidia ran “over 11,000 videos of everyday and sports activities shot at 240 frames-per-second” through the cuDNN-accelerated PyTorch deep learning framework, powered by an array of Tesla V100 GPUs. The result is a prediction model that can accurately interpret motion in video sequences and generate intermediate frames, reducing the playback speed without causing jittery motion.
“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers said. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”
To reduce the playback speed of a 30-fps video by a factor of four, the AI system would need to generate 90 additional frames for each second of video. It’s hard to believe that a neural network could accurately predict that many missing frames, but that’s just the tip of the iceberg.
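The frame arithmetic behind that figure is straightforward: a 4x slowdown of 30-fps footage, played back at the same 30 fps, needs 120 frames for every second the camera captured, of which only 30 exist. A quick sketch (function name ours, for illustration):

```python
def extra_frames_needed(source_fps, slowdown):
    """Frames the network must synthesize per second of source video
    to slow playback by `slowdown` while keeping the source framerate."""
    total = source_fps * slowdown  # frames the slowed clip needs per source-second
    existing = source_fps          # frames the camera actually captured
    return total - existing

extra_frames_needed(30, 4)  # 90 new frames per second of 30-fps video
extra_frames_needed(30, 8)  # an 8x slowdown would need 210 new frames
```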
Nvidia’s researchers also demonstrated that the AI system can increase the framerate of existing slow-motion video to reduce its playback speed further. The team pushed a handful of slow-motion clips from The Slow Mo Guys YouTube channel through the system to create ultra-slow-motion video clips that retain the detail and smoothness of the originals.
Genuine visual impressiveness aside, Nvidia’s system can convert existing video into slow-motion video, but it’s not a substitute for a real slow-motion camera. The clips the system produced from The Slow Mo Guys’ videos look crisp and clear, but that’s because of the source material. Examples of standard-framerate video reduced to slow motion highlight the downside of the technique. True slow-motion video lets you see the details of a scene with high precision and make precise scientific measurements, but standard video slowed down using AI doesn’t reveal the finer details of each frame. Also, AI-generated frames obviously don’t represent actual recorded reality, so they can’t take the place of true high-framerate footage for many research tasks.
Nvidia’s AI-powered Super SloMo video system isn’t yet publicly available, and the company gave no indication as to when it would be made public.