Our New Year's roundup continues with the sixth most-read story of the year. We thought the new 755-megapixel, 300fps Lytro Cinema system could well be the biggest leap in video technology we have ever seen. It wasn't just about the spec; it was about what you can do with it.
The Complete Top 10 (based on page views)
1. Nvidia's GTX 1080: Is this the end of Mac-based video production?
2. Five things a Colorist would like to say to a DP
3. Is something dramatic about to happen with Final Cut Pro X?
4. This is how good Canon’s low light camera is
5. Apple could dominate the entire NLE space with the next Final Cut Pro
6. Is Lytro's new 755MP, 300fps cinema camera the biggest leap in video tech ever?
7. Sony's new PXW-Z150: the camera you should always take with you?
8. RED launches not one, but two 8K cameras
9. Sony FS5 Review: How good is Sony's latest large sensor camera?
10. Surprise new Super 8 camera from Kodak
You know that something is a really big idea when the fact that someone's about to launch a 755 megapixel camera that shoots at 300 fps is not the crucial part of the story.
And that story is that conventional two dimensional pixels may have reached the end of their natural life.
Okay. So there's a lot to take in here. You can read the press release at the end for the bald details but, for the meantime, take a deep breath, sit down, and get ready for what I think is the biggest shock to the film and TV industry quite possibly ever.
Lytro is famous for their "Lightfield" cameras. These unconventional devices distinguish themselves by creating images that can be focused after they've been taken.
This is proven technology. It's impressive to see, and only slightly less impressive is that Lytro brought it to market in the form of a camera that you can buy in the shops or on Amazon.
You don't have to understand Lightfield technology to see the benefits of it. Being able to refocus a picture after it's been taken seems quite magical, but there's sound science behind it.
The first Lytro camera had a pretty low resolution, and while it showed that the technology was credible, it wasn't even close to being useful to professional photographers.
And it very specifically didn't do video.
But what it did do was prompt a lot of people to ask whether there would ever be a Lytro video camera.
Well now there is.
The camera - it's called the Lytro Cinema Camera - is unconventional to say the least. Like Lytro's previous products, it has the unusual ability to refocus images after they've been captured, and it makes depth of field a decision for post production.
But there is one very specific reason why Lytro has built this camera. It's to make it easier to integrate live action with computer generated graphics.
Why is this so important? It's because it's very expensive to shoot on location. Not only do you have to pay the costs of being outside a studio environment, you also have to get permission. Imagine a scene that takes place outside the White House: that's not going to be easy, or cheap.
And for historical dramas, it's much easier to merge live action with computer generated scenery. It's most likely cheaper too. But it's still far from straightforward. Anything that makes this easier - even if it’s quite expensive at the moment - is probably going to save money in the long run.
I’ve known about the Lytro Cinema Camera for nearly a week, since I had a one-to-one press briefing with the company.
It was very cordial; fun, even. They were great people to talk to. But I was at a disadvantage, because they knew all about the product, and this was the first time I had heard about it.
So for the entire conversation, I was wondering whether this was merely a very, very big announcement, or possibly the biggest thing in the history of cinema.
Having thought about it for some time, I’m still undecided, but I am inclined towards the latter.
The problem is that there is really nothing to compare this to, because what Lytro is doing here is, quite literally, capturing more reality than a 2D camera ever will.
Now, it’s important to understand that when we talk about 2D pixels here, we’re not doing so in comparison with stereoscopy, which is easily achieved by using two cameras. With Lytro’s Light Field technique, you are capturing much more of reality. Instead of scenes - including focus and depth of field - being “baked in” to the image, Light Field cinematography passes much more data through to post production, to the extent, even, that focusing and depth of field become post production variables. But there’s much more to it than that, and this is about much more than focus.
To understand Light Field it’s best to think about it in comparison with conventional photography.
Think of a typical lens and sensor set-up. Without the lens, all you’d get would be an average light reading for the whole scene. Light rays would arrive from all angles, and there would be no control over where they landed on the sensor. It wouldn’t be possible to untangle this mess into a sharp image, because there is nothing in this arrangement that captures the direction of the light beams.
Things improve a lot when you start using lenses. These ensure that, at the very least, the pattern on the sensor resembles the pattern of light out there in reality. You can now capture extremely good images digitally. It’s getting hard to see how you can improve the images coming from the best cameras.
But even this technique falls short of capturing the myriad of clues that light gives us before it is baked into a two-dimensional array of pixels.
With Light Field, you move out from the sensor into the actual environment where light is bouncing from light source to object to another object. With a Light Field system, you can capture not just the intensity of light, but its direction too. If you know about Ray Tracing, you’ll probably understand that Light Field photography is, in a sense, like capturing a ray trace. It means that we know not only the colour and luminosity of a pixel, but also the vector traced by the light beam that landed on the sensor.
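To make the idea concrete, here is a toy sketch of how a light field can be represented and why direction matters: a 4D array indexed by both an angular sample (which direction the ray came from) and a spatial sample. This is a standard textbook formulation, not Lytro's actual data format, and all numbers are illustrative. Once rays are stored this way, refocusing after capture becomes a simple shift-and-add over the angular views:

```python
import numpy as np

# Toy illustration (not Lytro's actual format): a light field stored as a
# 4D array L[u, v, s, t] -- (u, v) indexes the angular sample (which
# sub-aperture/direction the ray passed through) and (s, t) the spatial
# sample. Each entry is one ray's intensity, so direction is preserved
# rather than being "baked in" to a 2D image.

U, V, S, T = 5, 5, 32, 32            # 5x5 angular views, 32x32 pixels each
rng = np.random.default_rng(0)
light_field = rng.random((U, V, S, T))

def refocus(lf, alpha):
    """Synthetic refocus by shift-and-add: each angular view is shifted in
    proportion to its offset from the central view, then all views are
    averaged. alpha selects the virtual focal plane."""
    U, V, S, T = lf.shape
    cu, cv = U // 2, V // 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

image = refocus(light_field, alpha=1.0)   # a 2D image focused at one plane
```

With `alpha = 0` no views are shifted and the result is simply the average of all angular views; varying `alpha` moves the plane of focus, which is exactly the "focus as a post production variable" idea described above.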
Lytro is understandably tight-lipped about the secret sauce that makes this possible, but it is very likely to be an arrangement of “micro lenses” close to the sensor that capture light from multiple positions. You might think that this set-up would lead to less flexibility, not more, but that’s before the missing ingredient: computing.
The images coming off the sensor/micro lens combination would make no sense whatsoever to the naked eye. But feed this data to a computer program that “understands” the arrangement, and things start to happen.
What the computer is able to do is look at the output of the pixels on the sensor and compare them with the position of the microlens through which they’ve just passed. Then it can look at other, nearby, microlenses and compare their output. We’re guessing here to some extent, but it seems likely that the computer is able to accurately calculate the direction of light from the minute differences between the spatially diverse microlenses. By comparing subtleties like phase differences and amplitude, it becomes possible to recreate the three dimensional scene out there in reality.
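Since the paragraph above is admittedly speculative, here is an equally speculative one-dimensional sketch of the underlying principle: two spatially separated views see the same scene point at slightly different positions (disparity), and recovering that shift lets software triangulate distance. The baseline and focal length values are hypothetical, purely for illustration:

```python
import numpy as np

# Toy model of the guess in the text: neighbouring microlens views see the
# scene shifted by a disparity that depends on depth. Recovering the shift
# per region lets the software triangulate distance. Illustrative only.

def disparity_1d(view_a, view_b, max_shift=8):
    """Brute-force search for the integer shift that best aligns view_b
    with view_a (minimum sum-of-squared differences)."""
    best, best_err = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        err = np.sum((view_a - np.roll(view_b, d)) ** 2)
        if err < best_err:
            best, best_err = d, err
    return best

signal = np.sin(np.linspace(0, 6 * np.pi, 200))   # a 1D "scene"
true_disparity = 5
view_a = signal
view_b = np.roll(signal, -true_disparity)          # neighbouring view, shifted

d = disparity_1d(view_a, view_b)                   # recovers the shift of 5
baseline, focal = 1.0, 50.0                        # hypothetical units
depth = baseline * focal / d                       # classic triangulation
```

The real system almost certainly does something far more sophisticated (sub-pixel matching across many views at once), but the principle - shift between spatially diverse samples encodes distance - is the same.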
This leads to some very important conclusions.
First, as we’ve already seen, it’s possible to adjust focus and depth of field. This alone would be a remarkable addition to the domain of post production.
Beyond that, because it’s now possible to determine the distance of every pixel from the sensor, it’s equally possible to separate objects in the foreground from those in the background. This is the exact, magical, characteristic that is needed to merge live action with CGI, which will no longer be at the mercy of the quality of keying against a green screen. Since this is pixel-accurate, even difficult subjects like hair or lace should key perfectly.
I was shown a perfect-looking key using the Lytro Cinema Camera where some live action had been shot in an unattractive car park, with absolutely no green screen. Without extensive manual rotoscoping, it would have been impossible to separate the foreground from the background. But the Lytro system was able to do this, convincingly.
Lytro’s way of putting this is to say that they are able to “virtualise reality”. Essentially, they’re bringing live action into the virtual domain by allowing their software to separate live action elements from their surroundings. What happens next is downright elegant.
Rather than render out a CGI background to convert it into a 2D backdrop, Lytro has written drivers and plugins for many of the well-known CGI applications that allow them to output into the Light Field domain. All the information is there in the CGI programs to do this. Once everything - live action and CGI backgrounds - is in the same conceptual space, the magic can start. CGI artists will have as much control over live action in the Light Field domain as they do over their CGI models, which means they can manipulate and blend them perfectly. As a bonus, because of the 300fps maximum frame rate of the Lytro Cinema Camera, it’s possible to derive any lower frame rate from the master recording - even ramping up and down on demand.
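Deriving a lower frame rate from a 300fps master is, at its simplest, a matter of choosing which source frames to keep. This sketch (my own illustration, not Lytro's pipeline, which presumably also resamples shutter angle) shows why 300 is a convenient master rate - it divides cleanly by 60, 50, 30 and 25, and ramping is just a matter of varying the step:

```python
# Illustrative sketch: deriving constant lower frame rates from a 300 fps
# master by selecting source frames. (A real pipeline would also handle
# motion blur / shutter angle, which this ignores.)

MASTER_FPS = 300

def frames_for(target_fps, duration_s):
    """Indices of master frames used for a constant target rate. Exact for
    rates that divide 300 (25, 30, 50, 60...); 24 fps needs a fractional
    step of 12.5, hence the rounding."""
    step = MASTER_FPS / target_fps
    n_out = int(target_fps * duration_s)
    return [round(i * step) for i in range(n_out)]

print(frames_for(60, 0.05))   # 60 fps takes every 5th master frame
```

A speed ramp falls out of the same idea: instead of a constant step, the step between selected frames varies over time.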
It’s hard to even begin to imagine what this will lead to in the future.
Now, you may well be wondering whether the resolution of the Light Field process is high enough to be used for cinema production. The answer is a resounding “yes”.
Lytro has thought carefully about this and they realise that if their product is to be accepted it has to not only match, but exceed the resolution of today’s digital cinema photography. That’s why they’ve produced what is probably the biggest cinema sensor in history - a whopping 755 megapixels. And as if that wasn’t mind-bending enough, they are able to capture at that resolution at 300fps. You may need to buy some new memory cards at this stage.
Except that you won’t, for two reasons.
First, while this system does exist and work very well, it is quite obvious that the back end required to capture data at this rate is gigantic. DITs for this technique are going to need more than a MacBook Pro; in fact, they would probably need between fifty and a hundred of them. That’s clearly not practical.
So Lytro has built their own back-end, which is, unsurprisingly, a dense mass of storage and compute in a rack (or several, for all we know).
Initially, they are going to make the system available for hire from Q3 this year, for companies that can see the economies this technique will bring. They’re also planning to move the processing to the cloud, so that users aren’t weighed down with the mass of compute that is required.
So does this mean that Light Field Cinema is just for the ultra high-end and not for the likes of you and me for the foreseeable future?
Yes and no, depending on how many years in the future you’re talking about.
Yes, right now, it’s a pretty monstrous rig which, while the results can be amazing, is well out of reach of all but the busiest and wealthiest production companies.
But rest assured, I was told, that it is absolutely Lytro’s intention to bring Light Field Cinema to the mainstream. This will take time, but probably less than you’d think. I’m pretty confident about this, and so are Lytro. They used a good example of how this happens.
Do you remember when Smartphones started capturing high definition video? It wasn’t very long ago: just a few years. Back then it seemed amazing that a phone could capture video so cleanly.
Now - and for more than two years, actually - we can capture 4K on our phones. That’s thirty eight-megapixel frames every second. That would still seem like magic if it weren’t for the fact that we know exactly how it’s done.
Two things need to happen before we see Light Field in, say, a handheld cinema camera. First, storage needs to get smaller and cheaper by a few orders of magnitude. Light Field Raw video files are almost unimaginably big - but then so are the possibilities they create.
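To get a feel for "almost unimaginably big", here is a back-of-envelope calculation. Lytro hasn't published the bit depth per sample, so the 12 bits assumed below is just a typical figure for cinema raw sensors; the real figure and any compression could move the answer considerably:

```python
# Back-of-envelope for Light Field Raw data rates. The bit depth is an
# assumption (not confirmed by Lytro); 12 bits/sample is typical for
# cinema raw sensors, and no compression is assumed.

pixels = 755e6           # 755 megapixel sensor
fps = 300                # maximum capture rate
bits_per_sample = 12     # assumed

bytes_per_second = pixels * fps * bits_per_sample / 8
terabytes_per_minute = bytes_per_second * 60 / 1e12
print(f"{terabytes_per_minute:.0f} TB per minute of capture")
```

On those assumptions, a single minute of capture runs to roughly 20 TB, which makes it obvious why the back end is a rack of storage and compute rather than a memory card.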
Second, computing power (fashionably and prosaically called “compute” these days) needs to get faster and more abundant.
You might think that this, too, would take a long time, but it probably won’t. Many of us tend to gauge increases in computer speed by the clock speed of CPUs. This hasn’t increased much in the last ten years, and when it does, it needs exotic cooling.
Meanwhile, low power smartphone chips have been growing in smartness and capability. Sipping power compared to desktop processors, these things are so small that you can cheaply use hundreds of them to add up to a huge resource.
But probably more relevant to Light Field decoding are the twin technologies of GPUs and FPGAs. These are both extremely fast devices. GPUs are massively parallel and are probably ideal for Light Field calculations. FPGAs can run very sophisticated programs very fast. With an FPGA it’s like running software at hardware speeds.
In addition to this we have, on the horizon, all manner of alternative compute technologies waiting to become mainstream: even quantum computing.
So, we might have to wait a while before we can buy our own Light Field Cinema Cameras, but I don’t think it will be too long. Meanwhile, anyone going to Las Vegas at the end of next week can see this camera in action.
I think it’s fair to say that this is the biggest advance in cinema technology that we have ever seen. Eventually, it is going to affect everyone and everything. It’s possible that the overall reaction to this will be that it’s a clever party trick that will never go mainstream. But that’s only going to be the case if technology improves linearly.
We all know, though, that technology improves either exponentially or hyper-exponentially. I think this will become mainstream sooner than we think, and the world of filmmaking should start making plans for it.
For those of you who remain skeptical at this stage, think about this.
Lytro could quite easily have made a video camera that shot low-to-medium-resolution video. It would have had a heavy storage requirement, and it might have had to render the video for a long time before you could see it, but they could have done it - just like they did with their original still camera. And what happened then was that most of us looked at it, said it was really clever, and wished it had a very much higher resolution. Most of us said we’d like to see a video camera too.
So Lytro has been extremely realistic here.
Rather than create a small video camera at a relatively low price that would have been frustrating to operate, they’ve isolated a use-case where there is a strong business-case for their technology.
And then they’ve built a camera with such an extraordinarily high resolution - higher than the world has ever seen - that there is no need to even have the discussion about whether the pictures are good enough.
Having obtained incredibly high quality pictures, these images are stored in a Light Field Master, which is essentially a Light Field raw file. The raw files contain so much information that it is going to take time to even begin to comprehend the new creative and technical opportunities.
Once we get our collective heads around this - and despite all the so-far unanswered questions - I can see nothing but a very high demand for Light Field Cinematography.
Full press release after the break.
LYTRO BRINGS REVOLUTIONARY LIGHT FIELD TECHNOLOGY TO FILM AND TV PRODUCTION WITH LYTRO CINEMA
● World’s First Light Field Solution for Cinema Allows Breakthrough Creative Capabilities and
Unparalleled Flexibility on Set and in Post-Production
● First Short Produced with Lytro Cinema and Academy Award Winners Robert Stromberg, DGA
and David Stump, ASC, in Association with The Virtual Reality Company (VRC) Will Premiere at
NAB on April 19
MOUNTAIN VIEW, Calif., (April 11, 2016) – Lytro unlocks a new level of creative freedom and flexibility for filmmakers with the introduction of Lytro Cinema, the world’s first Light Field solution for film and television.
The breakthrough capture system enables the complete virtualization of the live action camera, transforming creative camera controls from fixed on-set decisions to computational postproduction processes, and allows for historically impossible shots.
“We are in the early innings of a generational shift from a legacy 2D video world to a 3D volumetric Light Field world,” said Jason Rosenthal, CEO of Lytro. “Lytro Cinema represents an important step in that evolution. We are excited to help usher in a new era of cinema technology that allows for a broader creative palette than has ever existed before.”
Designed for cutting edge visual effects (VFX), Lytro Cinema represents a complete paradigm shift in the integration of live action footage and computer generated (CG) visual effects. The rich dataset captured by the system produces a Light Field master that can be rendered in any format in postproduction and enables a whole range of creative possibilities that have never before existed.
“Lytro Cinema defies the traditional physics of on-set capture, allowing filmmakers to capture shots that have been impossible up until now,” said Jon Karafin, Head of Light Field Video at Lytro. “Because of the rich data set and depth information, we’re able to virtualize creative camera controls, meaning that decisions that have traditionally been made on set, like focus position and depth of field, can now be made computationally. We’re on the cutting edge of what’s possible in film production.”
With Lytro Cinema, every frame of a live action scene becomes a 3D model: every pixel has color, directional and depth properties, bringing the control and creative flexibility of computer generated VFX to real world capture. The system opens up new creative avenues for the integration of live action footage and visual effects with capabilities like Light Field Camera Tracking and Lytro Depth Screen: the ability to accurately key every object and space in the scene without the need for a green screen.
“Lytro has always been a company thinking about what the future of imaging will be,” said Ted Schilowitz, Futurist at FOX Studios. “There are a lot of companies that have been applying new technologies and finding better ways to create cinematic content, but ultimately, it’s too disjointed. Lytro is focusing on getting a much bigger, better and more sophisticated cinematography level dataset that can then flow through the VFX pipeline and modernize that world.”
Lytro Cinema represents a step function increase in terms of raw data capture and optical performance:
● The highest resolution video sensor ever designed, 755 RAW megapixels at up to 300 FPS
● Up to 16 stops of dynamic range and wide color gamut
● Integrated high resolution active scanning
By capturing the entire high resolution Light Field, Lytro Cinema is the first system able to produce a Light Field Master. The richest dataset in the history of the medium, the Light Field Master enables creators to render content in multiple formats including IMAX®, RealD® and traditional cinema and broadcast at variable frame rates and shutter angles.
Lytro Cinema comprises a camera; a server array for storage and processing, which can also be done in the cloud; and software to edit Light Field data. The entire system integrates into existing production and postproduction workflows, working in tandem with popular industry standard tools.
Watch a video about Lytro Cinema at www.lytro.com/cinema#video.
“Life”, the first short produced with Lytro Cinema in association with The Virtual Reality Company (VRC), will premiere at the National Association of Broadcasters (NAB) conference on Tuesday, April 19 at 4 p.m. PT.
“Life” was directed by Academy Award winner Robert Stromberg, Chief Creative Officer at VRC, and shot by David Stump, Chief Imaging Scientist at VRC.
Get a behind the scenes look at the set of “Life” at www.lytro.com/nab2016#video.
To apply for exclusive access to Lytro Cinema for professional film production, visit www.lytro.com/cinema. Learn more about Lytro Cinema activities during the 2016 NAB Show at www.lytro.com/nab2016.
Lytro Cinema will be available for production in Q3 2016 to partners on a subscription basis. For more information on Lytro Cinema visit www.lytro.com/cinema#video.
Read Why Lytro exited the consumer market