Kandao's new software uses AI techniques to predict movement, creating slow motion of up to 1200fps in post.
Designed primarily for the company's Obsidian and QooCam 360/VR cameras, the technique takes footage shot at lower frame rates, such as 30 or 60fps, and uses AI prediction to interpolate frames, creating ultra slow motion sequences.
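To get a feel for the scale of that task, here is a quick back-of-the-envelope sketch in Python (our own illustration, nothing to do with Kandao's actual code) of how many frames the software has to invent:

```python
# Back-of-the-envelope arithmetic (our illustration, not Kandao's code):
# how many frames must be synthesised between each pair of real frames
# to reach a target playback rate.

def frames_to_invent(source_fps: int, target_fps: int) -> int:
    """Synthetic frames needed between each pair of captured frames."""
    if target_fps % source_fps != 0:
        raise ValueError("target_fps must be a multiple of source_fps")
    return target_fps // source_fps - 1

# 30fps to an effective 1200fps is a 40x stretch: 39 of every 40
# output frames are invented by the software.
print(frames_to_invent(30, 1200))   # 39
print(frames_to_invent(60, 1200))   # 19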
Not all cameras are capable of recording high frame rates, yet it is a feature that is in ever greater demand. As the saying goes, "everything looks better in slow motion". Okay, I grant you, slow motion is overused, but for high-action sequences where you want to show the sheer beauty or complexity of movement in detail, there is simply no substitute for slowing things down.
Kandao's demo is a good start. As anyone who has used previous slow motion interpolation techniques such as Optical Flow knows, nothing is infallible. These techniques often produce warping artefacts around objects because the computer doesn't know one object from the next. They are most often confused by soft edges and motion blur, so if you know a sequence will be slowed down in post, it is usually best to tailor the original footage to the task, for instance by making sure you have nice high-contrast edges and detail, and by using a fast shutter speed, so that the computer can more accurately predict motion.
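For the curious, the sketch below shows the bare bones of that kind of optical flow interpolation in Python, using OpenCV's Farneback estimator. It is purely illustrative, not Kandao's method, and real interpolators warp bidirectionally and reason about occlusions:

```python
# A minimal sketch of classic optical flow interpolation (illustrative
# only; not Kandao's method). Requires opencv-python and numpy.
import cv2
import numpy as np

def midpoint_frame(frame_a, frame_b):
    """Synthesise an approximate frame halfway between two real frames."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel motion estimate. Soft edges and motion blur degrade
    # this estimate, which is where the warping artefacts come from.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample frame_b half a flow vector back towards frame_a. Pixels that
    # are occluded when one object passes behind another have no valid
    # motion vector, so they smear: exactly the failure mode noted above.
    map_x = (grid_x + flow[..., 0] * 0.5).astype(np.float32)
    map_y = (grid_y + flow[..., 1] * 0.5).astype(np.float32)
    return cv2.remap(frame_b, map_x, map_y, cv2.INTER_LINEAR)
```

Everything here hinges on the flow estimate being right, which is exactly what soft edges, motion blur and occlusions break.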
It would appear that Kandao's method is better than many of the existing ones, although there are still issues of warping in some areas, and the system also becomes confused when one object moves behind another, a common problem with Optical Flow style methods as well. Still, it does show that such techniques are improving, and with the help of AI 'learning', these systems will only get better at recognising and predicting object movement.
If ever there was a good case for equipping every camera with depth mapping, this would be a prime example: it would allow the post motion calculation to truly separate objects and make a much better prediction of movement.
Last year Nvidia showcased its own AI-based computational slow motion system. A cursory glance suggests that Nvidia's method is slightly better, with a bit less artefacting, although we would have to compare the two systems on the same footage to make a proper judgement. One thing to take into consideration, however, is that the Nvidia system requires Tesla V100 GPUs and a cuDNN-accelerated PyTorch deep learning framework, and even Nvidia admitted that if its system gets commercialised, the processing will most likely have to happen in the cloud. Kandao's system, on the other hand, is software only.
But what Nvidia's showcase highlights is that such computational methods aren't just good for conjuring slow motion out of standard frame rate footage. Remember, the more information you feed a computer, the more it can do with it, and motion is no different.
Therefore, if you feed such software, say, 120fps footage, you can create much, much slower motion, and any artefacts will be minimised because the system has a lot more temporal information with which to calculate the movement of objects. This is highlighted well in Nvidia's demonstration when liquid smashes through a tennis racket. Anything liquid based is a huge challenge for any computational slow motion system, but Nvidia's handles it incredibly well.
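To put rough numbers on that temporal advantage (ours alone, not Nvidia's or Kandao's), the gap each synthesised frame has to bridge shrinks rapidly as the capture rate rises:

```python
# A back-of-the-envelope illustration (our numbers, not Nvidia's or
# Kandao's) of the temporal gap the software must bridge between real
# frames at different capture rates.

TARGET_FPS = 1200

for source_fps in (30, 60, 120):
    gap_ms = 1000 / source_fps               # real-world gap to fill
    invented = TARGET_FPS // source_fps - 1  # synthetic frames per gap
    print(f"{source_fps:>3}fps source: bridge {gap_ms:5.1f}ms of motion "
          f"with {invented} invented frames")
```

At 120fps each invented frame only has to account for a few milliseconds of movement, which is why fast, chaotic subjects such as liquids hold together so much better.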
Kandao's system is, as we've said, designed for its own range of VR and 360 cameras, but we can expect to see many more of these types of technique emerging. Will the calculations ever become so good that we no longer need slow motion capable cameras? The processing overhead for these methods is very high, so for now the old adage of getting it right on set is still the best advice. As for the future, we can't wait to see. Find out more on the Kandao website.
Hat tip to Petapixel.