From future handheld units to plug-ins for The Foundry’s Nuke, from handling 400GB of data per second to ambitions to get its lightfield technology into the hands of as many people as possible, including in TV, Lytro’s Jon Karafin provides an update on all things lightfield.
When Lytro debuted its Cinema camera in April, it wasn’t just the potential paradigm shift in how we conceive of working with video that astounded; it was also that the camera is the size of a small car. You’d be forgiven for thinking this isn’t exactly a rig for run-and-gun shooting.
But the Silicon Valley optical specialist hasn’t built the world’s first commercial, professional-grade lightfield camera on a budget of $50 million without thinking of the mass market.
“We are already working towards the next generation of lightfield camera which will be handheld and portable,” revealed Jon Karafin, Head of Lightfield Video at Lytro, during a presentation at the IBC Conference. “The [current Cinema version] is a first generation development platform to ensure we get the technology right.”
The company has form here, having started out with a consumer stills lightfield camera (the current model is called Illum) and more recently launching a VR capture system called Immerge.
The company's strategy is to get lightfield tech into the hands of as many content creators as possible — and that apparently even includes TV producers.
Lytro is in the advanced test stages of its Cinema camera, which lets you adjust practically everything after the fact, including some things that are simply impossible with any other camera, Karafin said.
“Imagine you could capture a holographic image of the world,” he posed. “If you think in those terms, then anything that was previously a decision made on set and baked into the final image can now be made computationally.”
As we’ve previously covered, you can tweak focus position, depth of field, frame rates and shutter angles in post, even changing those values within the same continuous shot for dramatic effect. And since the camera captures the three-dimensional depth of every object in a scene, the traditional (and limiting) green screen of conventional VFX shoots becomes redundant.
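To make the refocus-in-post idea concrete, here is a minimal sketch of the classic “shift-and-add” synthetic refocusing technique from the lightfield research literature; this is the textbook approach, not necessarily Lytro’s own implementation. Each sub-aperture view is shifted in proportion to its angular offset and the results averaged, with a single parameter selecting the virtual focal plane:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add synthetic refocus of a 4D lightfield.

    lightfield: array of shape (U, V, H, W), a grid of sub-aperture
                views indexed by angular position (u, v).
    alpha:      focal-plane parameter; 0 keeps the captured focus,
                larger magnitudes move the virtual focal plane.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # angular centre, then accumulate. np.roll wraps around,
            # which keeps the sketch short; a production version
            # would interpolate sub-pixel shifts and pad the edges.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```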
“You now actually have the directionality of the pixel itself,” said Karafin. “You have angular information, and you effectively have a completely virtualized camera. You have the subject’s colour, the directional properties, and the exact placement in space. It becomes a truly holographic image.”
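In data terms, that quote describes each sample carrying more than colour. A hypothetical per-sample record, purely for illustration (Lytro has not published its internal format), might look like this:

```python
from dataclasses import dataclass

@dataclass
class LightfieldSample:
    # Conventional image data
    rgb: tuple            # (r, g, b) colour
    # What a lightfield adds per sample -- invented field names
    direction: tuple      # (u, v) angular origin of the ray
    depth: float          # distance to the surface hit, i.e.
                          # the subject's placement in space
```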
The dataset captured by Lytro’s system produces a lightfield Master that can be rendered in post into options including IMAX, RealD, Dolby Vision, ACES and HFR. Every frame of a live action scene becomes a 3D model. Every pixel has colour plus directional and depth properties, opening up new creative avenues for the integration of live action footage and VFX. At least, this is what Lytro would have you believe: the session wasn’t subtitled ‘Technology that’s indistinguishable from magic’ (paraphrasing the Arthur C Clarke quote about advanced tech) for nothing.
One benefit is the ability to accurately pull a key for every object and space in the scene without the need for a green screen. Lytro is looking to create something it calls a ‘depth screen’, which allows filmmakers to capture green screen-style shots without actually having to place actors in green screen environments.
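To illustrate the idea: with per-pixel depth in hand, a matte can be pulled by thresholding distance rather than chroma. This is a toy sketch assuming a clean depth map; a production “depth screen” would need soft edges, sub-pixel filtering and temporal stability:

```python
import numpy as np

def depth_matte(depth, near, far):
    """Select everything between near and far (distances in metres).

    depth: (H, W) array of per-pixel distance from the camera.
    Returns a 0..1 matte, the depth analogue of a green screen key.
    """
    return ((depth >= near) & (depth <= far)).astype(np.float32)

# Composite the keyed foreground over a virtual backlot plate:
# matte = depth_matte(depth, near=1.0, far=4.0)[..., None]
# comp  = matte * foreground_rgb + (1.0 - matte) * background_rgb
```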
Lytro has also developed a plug-in for The Foundry’s compositing software Nuke, giving the raw data a route into post suites where image manipulations such as a shift in zoom or matting for virtual backlots are possible.
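Lytro hasn’t published the plug-in’s interface, but for a flavour of what script-level access in Nuke typically looks like, here is a purely hypothetical snippet. nuke.createNode and knob setValue are standard Nuke Python calls; the ‘LytroLightfield’ node class and its knob names are invented for illustration:

```python
import nuke

# Hypothetical node class and knob names -- illustration only.
lf = nuke.createNode('LytroLightfield')
lf['file'].setValue('/shots/sc04/plate_take12.lfm')
lf['focus_distance'].setValue(3.2)   # metres, chosen in post
lf['f_stop'].setValue(2.8)           # virtual aperture
lf['shutter_angle'].setValue(180.0)  # re-rendered motion blur
```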
Another advantage is that filmmakers will be able to create versions of the same film at different speeds, all from the same source material: 24fps, 48fps or 120fps as desired for theatrical release, different speeds within the film for creative purposes, or 60p for broadcast transmission.
“"There is a question around some films that have come out with high frame rates that have a soap opera or TV aesthetic," commented Karafin. "[With our system] you are able to achieve any look or any aesthetic by having the ability to render a 180 degree shutter, a 360 degree shutter - we can even go to a 1440 degree shutter angle which is just crazy because it is well beyond what any one frame can capture. It looks like a very wide streak of motion blur when you bake it out at that rate, but that way you'd be able to capture at exceedingly high frame rates and project it at those frame rates when the projection technology catches up or use it to -re-render it out for any other output format.”
The unit will output 4K, but in order to do that, while also computing information such as the depth of objects in a scene, Lytro has developed the highest resolution video sensor ever made: 755 Megapixels, shooting 300 frames a second.
The resulting dataset is immense: as much as 400 gigabytes of data per second is recorded.
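Those two numbers cross-check with simple arithmetic (my own back-of-envelope using the quoted figures; Lytro hasn’t stated the sensor’s bit depth):

```python
photosites = 755e6        # sensor resolution
capture_fps = 300         # frames per second
data_rate = 400e9         # bytes per second, as quoted

bits_per_photosite = data_rate * 8 / (photosites * capture_fps)
print(f"~{bits_per_photosite:.1f} bits per photosite")  # ~14.1
```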
“We are massively oversampling the 2D image to be able to change the optical parameters in post,” said Karafin. “Everything that makes the image unique is retained, but you are able to rephotograph every single point in the field of view.”
It is, somewhat inevitably, a work in progress with development still required on the software algorithms, compression system and hardware.
For now, the camera needs to be physically mounted to a dolly or crane and cabled to a server up to 100 metres away. The server itself is being designed for data throughput at speeds up to 300GB/s, and will sit in a “video village” off-set, supervised by specialist technicians.
It is being offered for rent, inclusive of those technicians, server and software, from £90,000, and is initially being targeted at sequences that will later involve heavy VFX work.
It is believed at least one Hollywood studio is trialling it and Lytro is targeting commercial production use in early 2017.
Lytro has incorporated a physical focus control and aperture check on the camera, plus keyframe metadata, to reassure the DoP that the decisions they make on set are preserved into post.
The very concept of lightfield is alarming to some cinematographers. “This is a disruptive technology and we need the feedback of the studio, cinematographer and VFX community to make sure that what we are creating meets their needs,” Karafin admitted.
“When the software is opened in post, whatever decision the DP or assistant or focus puller has made on set will be the same,” explained Karafin. “We are trying to be sensitive to that workflow. We provide all the tools for decisions to be locked down on set. As to who has control over the final imagery, that is a studio call.”
The wider industry is taking note. Lightfield will be discussed by SMPTE at the Future of Cinema Conference, part of NAB next April. In addition, a new working group exploring lightfields and soundfields (the audio equivalent) has been established by the MPEG/JPEG committee, chaired by Dr Siegfried Foessel, Head of Moving Picture Technologies at the Fraunhofer Institute (which has developed its own, similar 16-camera technology).