Imagine a world where the camera is part of post-production. Where footage lands in an NLE as it's being shot. Where distance doesn't matter, and where editors share the same virtual time and space as the director and the cinematographer.
There's something in the air in this industry. As cameras and other products across the range become far more than "good enough," there's a feeling that something is about to happen: something big and yet intangible. As big, perhaps, as the transition from analog to digital. As intangible as "the cloud."
It's into this space that film industry guru Michael Cioni has catapulted himself, with a surprise move to Frame.io, to become Global SVP of Innovation.
Michael Cioni needs no introduction to RedShark readers. As SVP of Innovation and product director of Panavision's Millennium DXL 8K large-format camera ecosystem, and previously as the founder of Light Iron, which Panavision acquired in 2015, he's well known for his ability to explain and clarify the increasingly abstract concepts encountered in the contemporary filmmaking industry.
I spoke to Michael just before his move - and the reasons behind it - went public.
DS: Why did you make this move now?
MC: If we waited a couple of years for cloud technologies to fully emerge, I'm afraid it would be too late. Adoption of and dependence on the cloud are moving so fast. That's why it's time for filmmakers to build a foundation for a camera-to-cutting-room solution. It may be only a whisper right now, but the appetite for it is there and I've been hearing it from several directions.
DS: What made you choose Frame.io?
MC: I've known Emery Wells [Frame.io CEO] for a long time. We were both among the earliest adopters of file-based DI and 4K technology. We've always been kindred spirits, and he, too, had his own post-production facility - so we've both always had a vested interest in improving post workflows, which has led to our mutual respect and understanding.
DS: So what exactly are you aiming to do?
MC: We are going to create a new workflow solution that combines hardware and software so that as you shoot takes, your editor's bins are automatically populated...anywhere in the world. Of course, there's an awful lot to solve to make this happen, and we know this road is going to force us to address numerous challenges, but I find that's often where the best innovation comes from. Anyone who has seen it working in the preliminary tests we've done comments on how cohesive and natural it feels. When people experience it for the first time, their reaction is, "Why didn't we have this before?"
DS: What has to be in place before this can happen?
MC: Well, we're focusing first on core infrastructure - similar to how you need a network of charging stations before electric cars can become mainstream. In the case of camera-to-cut, it's going to rely a lot on the build-out of 5G, but not necessarily 5G as people imagine it. 5G is much more than faster cell towers. It's a smart network that can carry practically all forms of wireless data and operate across multiple frequency bands, which better handle different distances from a hub or (especially) from satellites. Over time, the technology roadmap we are deploying will be able to leverage a global network that blankets us in extremely high-speed data pathways.
But before we can fully do this, we need to get the foundation right. If we can do that, then the entire dynamic of production and post-production changes.
The director, cinematographer and editor already work in a kind of triangle. But it's not an equal one: the editor generally doesn't have access to footage until well after it's been shot. This is obviously not ideal; it means decisions have to wait. In fact, one of the things promised to creatives when switching from analog film to digital was faster review. In actuality, review timelines for film and digital capture are not that dissimilar. But with a camera-to-cloud-to-cutting-room workflow, the timeline for collaboration changes drastically. The three main creative stakeholders can work together as equals, regardless of location.
We're also beginning to outline machine learning tools that will enable filmmakers to use Frame.io to perform a number of tasks automatically, and in some cases instantly. Directors, cinematographers and editors will all be able to work with recorded images in parallel, without the need for transcoding and regardless of location.
DS: People will ask about security...
MC: Yes, and rightly so. That's a part of the stack that's already there with Frame.io. It's a cornerstone of the platform and runs right through it. Security has always been the highest priority at Frame.io and we have an excellent division looking after it. There are around a million people using Frame.io now, and over ten petabytes have been transferred. Thanks to the talent and special attention of our security team, there has never been a breach of any asset.
DS: How long is this going to take?
MC: Well, there's obviously work to be done, and it will need 5G to be more widespread. But we're not only motivated, we're already deploying the initial steps of this workflow. Another way to look at it is to think about students and new filmmakers: consider those who will go to film school in a few years and the workflows that are already familiar to them. I predict they'll reject the idea that sound and color are separate from image capture. I predict they will assume that editing doesn't take a whole day to begin. In addition, virtually every future director will have been an editor at some point, building empathy and experience for the craft. They are already used to editing the same footage that comes from the camera itself, without waiting. That's learned behavior that will be crucial to their creative process, and we need to evolve our existing workflows not only for the current industry, but for the industry of 2030.
But this plan isn't just for the next generation. Right now, Frame.io is already shrinking the time it takes to complete a project. It's just that now we are in a position to make the most of the new wireless bandwidth. That's going to blur the boundary between production and post-production, and it will transform both.
We'll keep you up to date with this as it progresses.