<img src="https://certify.alexametrics.com/atrk.gif?account=43vOv1Y1Mn20Io" style="display:none" height="1" width="1" alt="">

The K|Lens One is a revolutionary lens that records depth information

The K|Lens One. Image: Zeiss.

The K|Lens One purports to be a revolutionary new type of camera lens that captures depth information along with the standard image. What's it all about?

It takes quite a lot for us to break our self-imposed rule against covering crowdfunded projects. While many of them are put together by well-meaning people, they have a rocky reputation for delivery, or the lack thereof. Today we're making an extremely rare exception, because this project represents a move toward lightfield cameras. Lightfields are not a new idea, but they represent the biggest change in fundamental camera layout more or less ever, and this project might even make them an optional add-on for existing gear.

K-Lens isn't actually even on Kickstarter yet; anyone who's interested is asked to sign up for a notification service. Still, it's among the more convincing crowdfunds, with Zeiss involved and twenty pre-production prototype lenses already under test. We have to be careful about over-interpreting the information we're given, but if the demonstration material was shot on one of those prototypes then the project seems reasonably well-founded, simply because building even one of these lenses is a mountain to climb in the first place.

The K-Lens One is a kaleidoscopic lens, hence the K, designed to divide the sensor into nine rectangular regions, each one-third of its width and height. Each region receives an image of the scene as viewed from a slightly different position, creating a three-by-three lightfield array. The supplied software can then create a depth map by comparing the small differences between those images. Ideally, then, this lens can turn more or less any camera into one capable of shooting a depth image.
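
The supplied software isn't public, but the principle is simple enough to sketch. Here's a minimal, illustrative Python version of the idea, assuming a clean three-by-three tiling and crude block matching; the layout, function names and parameters are our assumptions, not anything K-Lens has published:

```python
# Minimal sketch of the kaleidoscopic depth idea, not K-Lens's actual
# software: split a frame into the 3x3 sub-images the lens produces, then
# block-match an outer view against the centre view to get a coarse
# disparity (and hence depth) map.
import numpy as np

def split_views(frame: np.ndarray) -> list[list[np.ndarray]]:
    """Divide the sensor image into nine tiles, each 1/3 the width/height."""
    h, w = frame.shape[:2]
    th, tw = h // 3, w // 3
    return [[frame[r*th:(r+1)*th, c*tw:(c+1)*tw] for c in range(3)]
            for r in range(3)]

def block_disparity(ref: np.ndarray, other: np.ndarray,
                    block: int = 16, search: int = 8) -> np.ndarray:
    """Crude horizontal block matching: for each block in the reference
    view, find the horizontal shift in the other view with the lowest
    sum-of-absolute-differences."""
    h, w = ref.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = ref[y:y+block, x:x+block]
            best, best_d = np.inf, 0
            for d in range(-search, search + 1):
                if 0 <= x + d and x + d + block <= w:
                    cand = other[y:y+block, x+d:x+d+block]
                    sad = np.abs(patch - cand).sum()
                    if sad < best:
                        best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

views = split_views(np.random.rand(1800, 2700))      # stand-in for a real frame
disparity = block_disparity(views[1][1], views[1][2])  # centre vs right view
```

Real pipelines add calibration, sub-pixel matching and heavy regularisation, but the shape of the problem is the same: nine slightly different views in, one depth estimate out.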

Lightfields in general are not new. The Fraunhofer research institute was pushing some extremely accomplished lightfield tech at least as early as 2015, although the industry seemed largely indifferent. Lytro tried it with mixed success. Arguably, cellphones that use more than one camera simultaneously to achieve depth detection are using a very elementary version of lightfield technology. The Apple trick of moving the camera around to get an idea of depth information in the scene creates almost a temporally-separated lightfield.

The idea is the same as stereo 3D: to estimate how far away something is by looking at it from different angles. Humans (and owls) do this by having two eyes, but also by moving their heads around. Add more eyes, and move those eyes further apart, and things become more accurate. Comprehensive, accurate depth information is such a boon for visual effects and grading work that it's almost surprising it hasn't become more mainstream.
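
For anyone who wants the geometry, the whole thing rests on one relationship: a point at depth Z, viewed through a lens of focal length f from two positions a baseline B apart, shifts by a disparity d = fB/Z on the sensor. A quick worked example, with numbers that are ours rather than anyone's spec:

```python
# The stereo relationship d = f * B / Z, with illustrative numbers only.
f = 0.080           # focal length in metres (an 80mm lens)
B = 0.065           # roughly human eye separation, in metres
Z = 3.0             # subject distance in metres
d = f * B / Z       # disparity on the sensor, in metres
print(f"disparity: {d * 1000:.2f} mm")  # 1.73 mm -- easily measurable
```

Bigger B or closer Z means bigger d, which is exactly why separation and subject distance matter so much below.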

Why hasn't it happened before?

Part of the reason it hasn't happened is traditionalism in the camera department. Lightfields are a massive departure from the technology we've been talking about since the Islamic Golden Age, when the Basra-born Ḥasan Ibn al-Haytham (let's get it right: أبو علي، الحسن بن الحسن بن الهيثم), the "father of modern optics", described, with staggering accuracy for the time, how eyes work. Yes, that Basra. With a lightfield, though, we no longer have one lens that costs as much as a nice car landing an image on a sensor the size of a modest agricultural plot. We have a large number of small cameras, which don't really behave much like an Alexa.

Or we have one camera and a lens like the K-Lens One.

It should operate much like a normal camera; the package includes a 5" display designed to isolate a single perspective for framing. The downsides are that there are only nine images and that they're not widely separated. Fraunhofer did well with a four-by-four matrix eighteen inches square, creating sixteen pictures. The K-Lens One creates nine images, and the baseline of its virtual rangefinder is fairly small. The published data suggests, rather fuzzily, that it might be 6.25mm, and a quarter of an inch is not a lot of separation on an 80mm lens likely to be used to shoot things from quite a long distance away.
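
Taking that 6.25mm figure at face value (again, our fuzzy reading of the data, not a confirmed spec), the same d = fB/Z arithmetic shows how quickly the parallax shrinks with distance. The pixel pitch here is an assumption, typical of a 60-megapixel full-frame sensor:

```python
# Back-of-envelope check of the small baseline: disparity between
# neighbouring views falls off with subject distance, and depth precision
# falls with it. The 6.25mm baseline and 3.7-micron pitch are assumptions.
f, B = 0.080, 0.00625          # 80mm focal length, ~6.25mm baseline (metres)
pixel_pitch = 3.7e-6           # ~3.7 micron pixels (60MP full-frame class)
for Z in (2.0, 5.0, 10.0, 25.0):
    d = f * B / Z              # disparity on the sensor, in metres
    print(f"{Z:5.1f} m -> {d / pixel_pitch:6.1f} px of disparity")
# prints 67.6 px at 2 m, shrinking to 5.4 px at 25 m: at a distance,
# depth resolution collapses into a handful of pixels.
```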

Fewer views and less separation between them both reduce the signal-to-noise ratio of the technique, and while simply increasing the separation between a small number of cameras creates problems of its own, the small number of images combined with the small separation won't help. Used for grading, depth maps don't necessarily have to be perfect, but for maximum usefulness in VFX, well, more is more. This lens will rely on a high-resolution, low-noise camera.
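
To put rough numbers on the view-count half of that argument, here's a toy model; treating every view pair as an independent, equally noisy disparity estimate is generous, but it makes the trend visible:

```python
# Why view count matters (a toy model, not K-Lens's pipeline): each pair
# of views gives a noisy disparity estimate, and averaging N independent
# estimates shrinks the error roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
true_d, sigma = 10.0, 1.0          # true disparity and per-pair noise (px)
for n_views in (2, 9, 16):         # stereo pair, K-Lens's 3x3, Fraunhofer's 4x4
    est = rng.normal(true_d, sigma, (100_000, n_views - 1)).mean(axis=1)
    print(f"{n_views:2d} views -> disparity noise ~ {est.std():.2f} px")
# 2 views -> ~1.00 px, 9 views -> ~0.35 px, 16 views -> ~0.26 px
```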

Speculation aside, this is at least a good reason for 12K sensors. The organisation behind the lens claims that careful mathematics can reconstruct an image approximating half the resolution of the sensor, as opposed to the one-third we'd instinctively expect. There are sundry other concerns, particularly the fact that this is an 80mm f/6.3 lens, which won't work for everyone in every situation; it's also ten inches long and weighs a kilo and three quarters. It's a bulky, expensive, (very) slow 80mm.
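
That resolution claim is easy to put in round numbers; the sensor width below is our illustrative figure, not a quoted spec:

```python
# The resolution claim in round numbers. Each tile is a third of the sensor
# across, so a naive reading says a third of the resolution; the claim is
# that combining the nine overlapping views recovers roughly half.
sensor_width = 12288               # pixels across a "12K" sensor (our example)
per_tile = sensor_width // 3       # 4096 px: one view, used alone
claimed  = sensor_width // 2       # 6144 px: the claimed reconstruction
print(per_tile, claimed)           # 4096 6144
```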

It's also a determined attempt to commercialise something that's been hovering on the edge of acceptance for a while, and represents the first real change in fundamental camera technology since... cameras.
