The latest Robert Zemeckis and Tom Hanks film 'Here' showcases just how far virtual production technology has evolved in a few short years.
Like many other people, I suspect, I had completely underestimated the sheer speed at which virtual production technology has evolved. When describing what an LED volume is, I still reference the first series of The Mandalorian, and it wasn't until my conversation with Callum Macmillan that I realized that was five long (for technology) years ago.
Callum Macmillan is the co-founder and CTO at Dimension, a world-leading volumetric video and 3D capture studio.
“The key thing for me is virtual production went from people thinking, 'It's just using an LED volume,' to 'hang on, virtual production to us is about real-time.' That's what underpins it all," he says.
"Real-time technologies can touch each point of the production process. So as the industry's evolved, our sort of addressable markets evolved, if you like, because we're able to work from script to screen, if you like. And there are lots and lots of different touch points along that production process where VP (Virtual Production), in its broadest sense, can engage audiences.”
We're talking about the release of Robert Zemeckis’ film Here, based on the graphic novel, which tells the story of one spot of land over millions of years.
While not the first film to use a single fixed camera angle, it is a world first in how its unique concoction of technologies fuses to tell its story of a single perspective across the ages.
Callum was Dimension and DNEG 360's virtual production supervisor for Here.
Pre-production
“We were involved at a reasonably early stage to help with two things," says Macmillan. "The world outside the window was our remit; that's what we had to deliver in terms of VP. Everything beyond the mullions and the physical set of the window was a virtual world that we were building, so there was a lot of early development work around that.”
A key aspect of virtual production is that most of the work is done upfront during pre-production, unlike traditional visual effects, where many decisions can be made later in post.
RS: How early did you get involved in the pre-production process?
CM: It was a good few months, but I'd say with an exponential curve, so by the time we were at Pinewood in January/February 2023, it was all guns blazing in terms of building the different variations of the world and the conception around things like the vehicle system that we used.
I took over a little later on when the world-building was well underway. Another key early-stage concept was taking the production design, the CAD files of the physical set, bringing that into Unreal, and placing it along with a pretty accurate approximation of (cinematographer) Don Burgess' camera: the correct lens height and field of view and everything that was going to be there on the day.
Primarily, that early exercise was done to line up the domains of the physical camera, the physical world, and the virtual world outside the window, but doing it all virtually ensured that we had alignment between the virtual and physical production. Obviously, we were then sense-checking what Don wanted to see through the camera.
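For a sense of the arithmetic behind that lineup step, here is a minimal Python sketch of deriving a virtual camera's field of view from a sensor and focal length, then noting where to place the camera at the measured lens height. All the numbers are illustrative assumptions, not the production's actual lens data or measurements.

```python
import math

def fov_degrees(sensor_mm: float, focal_length_mm: float) -> float:
    """Angular field of view for one sensor axis, given a focal length."""
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * focal_length_mm)))

# Illustrative numbers only: a full-frame-style sensor and a 29 mm prime.
sensor_width_mm, sensor_height_mm = 36.0, 24.0
focal_mm = 29.0

h_fov = fov_degrees(sensor_width_mm, focal_mm)
v_fov = fov_degrees(sensor_height_mm, focal_mm)

# Assumed measurements from the physical set: lens height above the stage
# floor and distance from the window plane, used to place the virtual
# camera so the view through the virtual window lines up with the set.
lens_height_m = 1.45
distance_to_window_m = 4.2

print(f"Horizontal FOV: {h_fov:.1f} deg, vertical FOV: {v_fov:.1f} deg")
print(f"Place virtual camera {lens_height_m} m above the floor, "
      f"{distance_to_window_m} m from the virtual window plane")
```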
As Dimension and DNEG 360, we'd previously worked with (VFX Supervisor) Kevin Baillie, Bob Zemeckis, and Don Burgess on Pinocchio. So we'd run a lot of simulcam systems on that as well as using interactive lighting. And then Here is where we really stepped up to say, 'Okay, let's go for final pixel quality. Let's build that world outside the window to create a sense of immersion'.
It's easy to forget that it's just a screen outside that window
Building the background
RS: While a fixed angle is great for VFX (no tracking!), there is practically no editing within scenes, so the space becomes a more prominent character that you can't cut around. And with the ‘outside world’ mainly visible through a window, I can see that a green screen is not the best solution, especially given the many reflections of objects in the scene needed to make it feel like a real space.
CM: You think about all the things in the room, along with the mirrors, that may have light cast upon them and respond in some way to that exterior environment coming in. All of that would have been very challenging to do as a visual effects process, but we could get it in camera. Then there's the immersion for the actors, as well as the timing of things happening outside the window in relation to the action and narrative unfolding.
RS: You mentioned the simulated cars used in the background earlier. How did they work?
CM: I think it was about 55 vehicles that were set up in a physics system in Unreal so they could interact with the curb. Around the curb and driveways, we could set lights and use different kinds of shaders depending on the rain and other atmospherics that were happening. And that was all on an iPad, so the first AD or Kevin could trigger the cars depending on what was happening in the scene.
These were models that were built up, and there were very specific cars requested. I think Bob himself had a lot of interest in the cars; I heard he might've owned some of them himself in the past. So there was a curated approach to how we designed the cars. Johnny Gibson, the visual effects supervisor, was working very closely with the VAD (virtual art department) team to make sure that the shaders and the cars were the way we wanted them to be.
It was quite challenging, but fortunately, as the camera is static, we could put some of our compute budget into upping the fidelity or trying something like the physics system for the cars, which can very quickly eat into your capability, both in terms of your performance budget and your art resource to keep iterating on all this stuff.
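The interview doesn't say how the iPad talked to Unreal; OSC is one common route for this kind of remote cue, so here is a hypothetical Python sketch (using the third-party python-osc package) of what a tablet-style trigger could look like. The host, port, addresses and values are all invented for illustration.

```python
# Hypothetical trigger sender: a tablet-style controller emits OSC messages
# that a listener on the render head maps to events such as "send car 12
# down its path" or "fire the lightning cue".
# Requires: pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

UNREAL_HOST = "192.168.1.50"   # assumed address of the render head
UNREAL_PORT = 8000             # assumed OSC listen port

client = SimpleUDPClient(UNREAL_HOST, UNREAL_PORT)

# Addresses and arguments below are made up for illustration.
client.send_message("/vehicles/trigger", 12)          # start car #12 on its path
client.send_message("/weather/rain_intensity", 0.6)   # 0.0 .. 1.0
client.send_message("/lighting/lightning_strike", 1)  # one-shot cue
```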
RS: Did you use the physics systems for any of the weather?
CM: It's a combo. If you use physics for rain, obviously, that will be quite compute-intensive. But where you want it, say rain bouncing off the surface of the vehicles, that's where we make it visible and actually do the calculations. Then we layer that up with cards of rain and stuff, which is more performance-friendly.
We had rain, snow, hailstorms, blizzards, wind blowing leaves and trees around, plus all kinds of seasonal stuff happening as well. So there was quite a lot of atmospherics and physics to consider, and then you've got all that evolution over time as well. We hadn't had to think about that before.
The rule of thumb was to try and lock it in by the day of principal photography. That said, Don might be looking at the environment the scene was shooting in and say, 'Okay, look, let's move the lighting position'. So, move the sun's position or change the time of day slightly, particularly when there were small gaps in time between one scene and another.
So that kind of finessing was constantly happening. The car system could be manipulated, but we tried to keep the cars' paths locked. Once we'd done some kind of pre-light or reviews with the team, that's when we'd try and lock those bits down.
RS: What else was triggerable?
CM: We had all kinds of interactive lighting as well. There was a lightning strike, thunderstorms, stuff like that, where we had physical lighting and were also using light probes in Unreal. We were sending RGB data over DMX, which fed RGB fixtures that then fired off color values as close as possible to what we had going on, and the lighting intensity was matched as well. So we could trigger it ourselves depending on what was needed. Physical and virtual lighting were twinned.
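As a rough illustration of that virtual-to-physical link, here is a small Python sketch that pushes an RGB triple out over Art-Net, one common way to carry DMX over Ethernet. The production's actual signal path (light probes feeding the fixtures) isn't specified beyond "RGB data over DMX", so the transport, node address, universe and channel layout below are assumptions.

```python
# Minimal Art-Net (ArtDMX) sender using only the standard library.
import socket
import struct

ARTNET_PORT = 6454  # standard Art-Net UDP port

def artdmx_packet(universe: int, channels: bytes, sequence: int = 0) -> bytes:
    """Build an ArtDMX packet carrying up to 512 DMX channels."""
    header = b"Art-Net\x00"
    opcode = struct.pack("<H", 0x5000)         # OpDmx, little-endian
    protocol = struct.pack(">H", 14)           # protocol version 14
    seq_phys = struct.pack("BB", sequence, 0)  # sequence, physical port
    uni = struct.pack("<H", universe)          # universe, little-endian
    length = struct.pack(">H", len(channels))  # data length, big-endian
    return header + opcode + protocol + seq_phys + uni + length + channels

def send_rgb(sock: socket.socket, node_ip: str, universe: int, rgb: tuple):
    """Write an RGB triple into the first three channels of a universe."""
    data = bytes(rgb).ljust(512, b"\x00")      # pad to a full universe
    sock.sendto(artdmx_packet(universe, data), (node_ip, ARTNET_PORT))

if __name__ == "__main__":
    # Example colour sampled from a virtual light probe (values invented).
    probe_rgb = (255, 180, 90)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        send_rgb(sock, "192.168.1.60", universe=0, rgb=probe_rgb)
```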
RS: Was that kind of synchronicity between the volume lighting and external lighting common practice in virtual productions?
CM: It comes up quite a lot now. Because of the way ‘Here’ was set up at Pinewood, we had two identical sound stages next door to each other, with two LED (volume) screens, one on each stage.
The stages being identical meant you could set up a different scene on each one, and production could leapfrog between them. So I remember when we were testing it, we were triggering a lightning strike and the DMX functionality was working great. And then we got a call on the radio from the adjacent stage, where principal photography was happening. I think lighting had forgotten to disconnect the DMX control, so we actually fired off all the lights on the other set! That was a bit of an awkward one. But it is a capability that you like to be able to offer.
Day-to-day schedule
RS: What was the day-to-day schedule like for your team during principal photography on Here?
CM: There was an intensity to getting each scene's elements ready. There were so many different variations of each environment that, between managing the source control and prepping everything onto the stage, you would have a pre-call of at least two hours for our team to get the content ready. We'd have done reviews the night before, and we'd be loading that in and refining it to be ready for final looks before we shot. And then on the other stage we would be doing the same, ready for whenever they flipped over to that set for a different point in the story.
And then whilst all that was happening, we had a ton of on-set work to do. So alongside Don's camera, co-located right next to it, we had another bit of new technology, a real-time depth camera based on a laser system.
It was developed by a company called nLight. They had a system called HD3D, which gave real-time depth information from about 2 to 12 meters from the camera. So it was perfect for the distance we covered in the physical set and the LED wall for ‘Here.’
It provided a real-time stream, and they had a plug-in for Unreal Engine, so we could pick up that depth information. At the beginning of each day, as long as we'd done a calibration and our lineup, we'd get our camera extrinsics, the physical position of the camera, and the intrinsics, so we could do a lensing and sensor transform to overlay the depth information onto Don's live-action camera.
We could then begin to segment the scene in depth, which helped get a 3D surface on things in the physical set. It was helpful for checking that everything lined up. And then in visual effects, in Nuke, there was a straight import into the depth node, where they could play with the depth information to help with transitions and other stylistic things that might be useful.
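To make the extrinsics/intrinsics step concrete, here is a toy Python sketch (using numpy) of re-projecting a single depth sample from a secondary depth unit into a hero camera's frame. Every matrix and measurement below is invented for illustration; the real values came from the daily calibration and lineup described above.

```python
import numpy as np

def backproject(depth_px_uv, depth_m, K_depth):
    """Lift a pixel (u, v) with metric depth into 3D depth-camera space."""
    u, v = depth_px_uv
    fx, fy = K_depth[0, 0], K_depth[1, 1]
    cx, cy = K_depth[0, 2], K_depth[1, 2]
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def project(point_cam, K_hero):
    """Project a 3D point in hero-camera space back to pixel coordinates."""
    x, y, z = point_cam
    u = K_hero[0, 0] * x / z + K_hero[0, 2]
    v = K_hero[1, 1] * y / z + K_hero[1, 2]
    return u, v

# Invented intrinsics for the depth unit and the hero camera.
K_depth = np.array([[600.0, 0, 320.0], [0, 600.0, 240.0], [0, 0, 1.0]])
K_hero  = np.array([[2200.0, 0, 960.0], [0, 2200.0, 540.0], [0, 0, 1.0]])

# Invented extrinsics: depth unit sits a few centimetres from the hero lens.
R = np.eye(3)                    # no relative rotation in this toy case
t = np.array([0.05, 0.0, 0.0])   # 5 cm offset along x, in metres

# One depth sample at pixel (400, 250), 3.2 m from the depth unit.
p_depth = backproject((400, 250), 3.2, K_depth)
p_hero = R @ p_depth + t         # move the point into hero-camera space
print(project(p_hero, K_hero))   # where that sample lands in the hero frame
```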
I think the other thing worth saying about the on-set component is that when we started on Here, Nvidia was in the process of transitioning to the latest generation of Ada GPUs. But you couldn't get the Ada 6000 at the time, and based on the visual fidelity we were after, we realized the previous-generation A6000s wouldn't give us the frame rate we wanted.
So we actually started on Here using RTX 4090 GPUs rather than pro GPUs. We had all kinds of challenges with how that works in terms of off-axis projection and nDisplay configurations. We had to think about how you genlock it and synchronize it. It's not straightforward.
We had one render head per stage, each with a single GPU, because the LED wall was ROE BP2V2 and only about 3.5 by 8 meters, something like that. And the frame rate we could get out of the 4090 meant we could get the fidelity we wanted.
And because we were using the latest generation, we could lean into the deep learning tech Nvidia was bringing out, like DLSS super sampling. Ray reconstruction wasn't out yet, but the DLSS family was really important: DLAA, the anti-aliasing component, gave us a fidelity bump, and we were still able to get the frame rate we needed.
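To put the frame-rate pressure in perspective, here is a quick back-of-the-envelope Python sketch of the per-frame time budget at typical genlocked shooting rates and the pixel counts a wall of roughly that size implies. The pixel pitch and render scale are assumptions for illustration, not figures from the production.

```python
# Back-of-the-envelope numbers only: pitch, wall size and render scale
# below are assumptions, not the production's actual figures.
FRAME_RATES = (24.0, 25.0, 30.0)   # typical genlocked shooting rates
WALL_W_M, WALL_H_M = 8.0, 3.5      # approximate wall size from the interview
PIXEL_PITCH_MM = 2.8               # assumed LED pixel pitch

wall_px_w = int(WALL_W_M * 1000 / PIXEL_PITCH_MM)
wall_px_h = int(WALL_H_M * 1000 / PIXEL_PITCH_MM)
native_px = wall_px_w * wall_px_h

for fps in FRAME_RATES:
    print(f"{fps:>4} fps -> {1000.0 / fps:.1f} ms per frame to render, sync and display")

# DLAA anti-aliases at native resolution; an upscaler such as DLSS would
# instead shade a lower internal resolution, e.g. at a 67% render scale:
render_scale = 0.67
internal_px = int(native_px * render_scale ** 2)
print(f"Wall ~{wall_px_w}x{wall_px_h} = {native_px / 1e6:.1f} MP native, "
      f"~{internal_px / 1e6:.1f} MP shaded at {render_scale:.0%} render scale")
```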
Post-production
RS: What was the process of moving from virtual production to post-production visual effects?
CM: We would always major on the LED wall, so we'd get that content to where we wanted it to be. And then, for each scene, we'd get notes from Don: it might be tweaking the lighting, or it might be moving something in the scene, such as physical geometry.
Those notes would come in during some of the early takes, so you'd do a quick bit of real-time iteration. And then once they were happy, because our camera was static, we had the luxury of being able to record that down to a hard disk recorder, freeing up the Unreal TD (Technical Director) to prep that stage for the next thing that was coming up.
So if the car was triggered, once they were happy with its timing in relation to the lines being delivered, that could be laid off to the HyperDeck, and then the HyperDeck system could play it back within each stage.
We had both an Unreal render head and a HyperDeck hard disk recorder, and we could flip-flop between the two. Once all that was done, we'd clip it up at the end of the day and make final high-quality renders for VFX later, depending on what was needed.
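As a sketch of that flip-flop idea, here is a minimal Python example driving a HyperDeck over its text-based Ethernet control protocol (TCP port 9993): once a take's background is approved, the deck records it, and it can later be played back so the render head is free to prep the next scene. The deck's IP address and the command sequence are assumptions for illustration, not the production's actual setup.

```python
import socket

HYPERDECK_IP = "192.168.1.70"   # assumed deck address
HYPERDECK_PORT = 9993           # standard HyperDeck Ethernet control port

def hyperdeck_command(command: str) -> str:
    """Open a connection, send one text command, and return the deck's reply."""
    with socket.create_connection((HYPERDECK_IP, HYPERDECK_PORT), timeout=2) as s:
        s.recv(1024)                              # read the connection banner
        s.sendall((command + "\n").encode("ascii"))
        return s.recv(1024).decode("ascii", errors="replace")

print(hyperdeck_command("record"))   # lay the approved background off to disk
print(hyperdeck_command("stop"))
print(hyperdeck_command("play"))     # play it back out to the LED processor
```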
There'd also be the dailies, which would be rendered out for editorial with the wider framing.
Here and now
It’s incredible how, over a handful of years, virtual production has blossomed into a powerful new way to tell stories that would otherwise be cost-prohibitive or even impossible.
From intricate reflections to dynamic weather systems and curated vehicles, these remarkable details can now unfold live in front of the actors and camera, representing a welcome evolution in filmmaking. It's easy to forget that this is all just the curated background to the main show, the characters and their stories, yet it's also apt technology for telling the story of the world they inhabit outside the window.
“What you hope is that the technology we've got is just falling in the background and helping that happen and doing its job, right? That's all you want it to be."
Here is available exclusively in US cinemas now with a wider release next year.
tl;dr
- The film Here highlights the rapid advancements in virtual production technology, with a shift from simple LED volume usage to a focus on real-time technologies that integrate throughout the production process.
- Callum Macmillan and his team engaged in extensive pre-production to create a virtual world outside the physical set, with far more of the world-building done early in the process than in traditional visual effects.
- In the early stages, CAD files of the physical set were integrated into Unreal Engine to ensure proper alignment between the physical camera and the virtual environment, enhancing the realism of the scenes.
- The use of a fixed camera angle increases the importance of the surrounding space, pushing the boundaries of in-camera effects to create immersive backgrounds that respond to lighting and reflections naturally.
- The production incorporated simulated vehicles using Unreal's physics system, allowing real-time interaction with the environment, enhancing the narrative's dynamic feel, and supporting the actors' immersion in the scene.