Researchers working for the Mouse have developed a groundbreaking program that delivers automated edits from multi-camera footage based on cinematic criteria.
Imagine opening an editing session and being greeted by an unwieldy number of multi-cam streams. Now imagine that your rough edit could be a push-button, automated process that spits out a fairly credible result with logical cuts that follow the action, and even varies image size.
Depending on your outlook (and present duties), such an innovation would either be a wonderful time-saver or a frightening job-killer. Either way, a group at Disney Research, in conjunction with researchers at Pittsburgh's Carnegie Mellon University and the Interdisciplinary Center Herzliya in Israel, has demoed a system that's the stuff of your dreams/nightmares.
The system works by approximating the 3D space of the cameras in relation to each other. The algorithm determines the "3D joint attention," or the likely center of activity, through an on-the-fly analysis of the multiple camera views. On top of this, the algorithm weighs a set of cinematic preferences, such as adherence to the 180-degree rule, avoidance of jump cuts, varying shot size and zoom, maintaining minimum and maximum shot lengths, and cutting on action. The result is a very passable, almost human-looking edit.
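To make the idea of "cinematic preferences" a little more concrete, here's a rough toy sketch of how an automated editor might score candidate cameras at each moment and cut to whichever one incurs the lowest penalty. This is not Disney's actual algorithm (which optimizes over whole sequences and is laid out in the paper linked below); every name, weight, and threshold here is made up purely for illustration.

```python
# Toy sketch only: a greedy shot selector that scores candidate cameras against
# simple cinematic penalties (jump cuts, shot-length limits, 180-degree rule,
# shot-size variety). All weights and data structures are hypothetical.
from dataclasses import dataclass


@dataclass
class Camera:
    cam_id: int
    side: int        # +1 or -1: which side of the action axis the camera sits on
    shot_size: str   # "wide", "medium", or "close"


def cut_penalty(prev_cam, next_cam, shot_len, min_len=2.0, max_len=8.0):
    """Penalty (lower is better) for switching from prev_cam to next_cam
    after holding the previous shot for shot_len seconds."""
    if next_cam.cam_id == prev_cam.cam_id:
        # Holding the current shot is free until it overstays its welcome.
        return 5.0 if shot_len > max_len else 0.0
    penalty = 1.0                            # every cut carries a small base cost
    if shot_len < min_len:
        penalty += 10.0                      # avoid rapid, jumpy cutting
    if next_cam.side != prev_cam.side:
        penalty += 8.0                       # crossing the line breaks the 180-degree rule
    if next_cam.shot_size == prev_cam.shot_size:
        penalty += 3.0                       # prefer varying shot size across a cut
    return penalty


def choose_next(prev_cam, candidates, shot_len, **limits):
    """Greedily pick the candidate camera with the lowest penalty."""
    return min(candidates, key=lambda c: cut_penalty(prev_cam, c, shot_len, **limits))


if __name__ == "__main__":
    cams = [Camera(0, +1, "wide"), Camera(1, +1, "close"), Camera(2, -1, "medium")]
    current, held = cams[0], 0.0
    for step in range(8):                    # simulate 8 one-second decision points
        held += 1.0
        best = choose_next(current, cams, held, max_len=3.0)
        if best.cam_id != current.cam_id:
            print(f"t={step + 1}s: cut to camera {best.cam_id} ({best.shot_size})")
            current, held = best, 0.0
```

The real system presumably folds its 3D joint-attention estimate into these costs and solves for the best sequence of shots as a whole, rather than greedily choosing one cut at a time as this toy does.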
Check out the video from Disney Research showcasing the technology below. If you're technically minded, here's the group's research paper (in PDF) that delves into the mathematics behind this innovative new utility.
Tags: Post & VFX