The recent announcement of raw over HDMI from the Nikon Z6 to Atomos recording devices is much more far-reaching than it might first seem.
Atomos's announcement that the Melbourne-based company supports ProRes Raw capture from Nikon's mirrorless Z7 and Z6 cameras, which we covered here, is interesting enough. The idea that you can capture raw 4K video from such a small, inexpensive rig - and through HDMI - is something of an eye-opener.
But there's more to it than that. I spoke to Jeromy Young, CEO of Atomos, on Tuesday during a remote press conference.
The current partnership with Nikon came about a year ago. Jeromy, a fluent Japanese speaker, visits the camera manufacturers regularly, and it is this pattern of behaviour that led to the opening up of HDMI to make it suitable for recording in the first place. In the early days of Atomos, many DSLR cameras effectively crippled their HDMI outputs by making it impossible to switch off the menu overlays you normally see in the viewfinder. We don't know whether this behaviour was intentional or not, but the result was the same: Atomos users were largely hobbled by the camera's inability to supply a clean picture.
In the last few years that's changed, and, meanwhile, dedicated 4K video cameras have appeared that attempt to be - and largely succeed in being - what DSLR video users always craved: ergonomic form factors with good, arguably cinematic, video characteristics.
Now that Atomos is a much larger, established company, the discussions with the Japanese manufacturers are at a much deeper level.
It's important to understand that raw recording can only take place over HDMI if each party - Nikon in this case, and Atomos - has complete control over its hardware and software. It's no good trying this with third-party HDMI receivers or transmitters. This level of hacking (I use the word as a compliment, not pejoratively) means that the two parties can extract performance from HDMI that it was never provisioned for, with no collateral damage. (Jeromy assured me that a TV set can't be damaged if it's accidentally sent raw video over HDMI, and that they're aiming to deliver a data-packing system that might allow a monochrome picture while carrying raw, just so that it's possible to make sense of what's happening even if there isn't an Atomos recorder on the end of the cable. This sounds impressively clever to me.)
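Atomos hasn't said how that packing would actually work, so here's a purely hypothetical Python sketch of the general idea: the top bits of each 12-bit photosite go into the luma channel, where any display will render a recognisable monochrome image, while the low bits hide in near-neutral chroma values that a matching recorder can recombine losslessly. The function names and bit layout are my own invention, not Atomos's format.

```python
import numpy as np

def pack_raw_frame(bayer12):
    """Hypothetical: pack 12-bit raw photosites into an 8-bit YCbCr 4:4:4 frame."""
    y = (bayer12 >> 4).astype(np.uint8)             # top 8 bits: viewable mono image
    residual = bayer12 & 0x0F                       # low 4 bits we must not lose
    cb = (126 + (residual >> 2)).astype(np.uint8)   # 2 bits each, parked within
    cr = (126 + (residual & 0x03)).astype(np.uint8) # +/-2 of neutral grey (128)
    return y, cb, cr

def unpack_raw_frame(y, cb, cr):
    """Recorder side: reassemble the original 12-bit values losslessly."""
    residual = ((cb.astype(np.uint16) - 126) << 2) | (cr.astype(np.uint16) - 126)
    return (y.astype(np.uint16) << 4) | residual

# Round trip on a small synthetic frame:
frame = np.random.randint(0, 4096, (4, 6), dtype=np.uint16)
assert np.array_equal(frame, unpack_raw_frame(*pack_raw_frame(frame)))
```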
Atomos and Nikon have been testing this arrangement for two and a half months now, and the results are apparently very good. But it's not quite road-worthy, so delivery to customers is expected to take a few months yet.
It's not just Nikon that Atomos has been talking to. There are now 11 cameras whose raw output can be captured into ProRes Raw.
Jeromy said: "The request has gone out to all camera manufacturers: can you give us raw?"
The raw video itself is delivered purely as data, as opposed to video. It's important to understand this distinction because, of course, conventional digital video is data too. But with Atomos's raw-over-HDMI system, the video is in the form of a moving blob of data that can't be interpreted as video by anything other than an Atomos device. This may sound restrictive, but it isn't, and it's necessary: it's the only viable way to get all the data - with much more information than debayered, sub-sampled conventional digital video - off the camera and into the recorder. And there's metadata too, which can include camera position, shutter angle, lens data and probably the local weather forecast. That deserves a separate article to itself, and we will write one.
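As a rough illustration of the kind of metadata Jeromy was describing - the field names below are my own guesses, not a documented Atomos or ProRes Raw schema - a per-clip record might look something like this:

```python
from dataclasses import dataclass

@dataclass
class RawClipMetadata:
    """Hypothetical metadata riding alongside the raw data stream."""
    timecode: str               # e.g. "01:23:45:12"
    iso: int                    # sensor sensitivity
    shutter_angle: float        # e.g. 180.0
    white_balance_kelvin: int   # e.g. 5600
    lens_model: str             # reported over the lens mount's contacts
    focal_length_mm: float
    aperture: float             # f-number
    camera_position: tuple = (0.0, 0.0, 0.0)  # if the body has GPS/IMU data
```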
Now here's where it gets really interesting. This is where Jeromy started to give more detail on the actual processes involved in getting video all the way from the sensor to the recorder. I'm paraphrasing him here, based on notes typed as he was speaking.
"If you look at sensor processing, silicon capturing light from a lens - a bunch of errors happen. There is processing in the camera to correct this, and make sure that standard issues are taken care of - there's an amazing system to clean up the pixels. they used to have to spend a lot of time then cleaning up the image to make a proper picture before compression
Now, sensor clean-up still matters, and there's stabilisation on top of that. Then there's the image processing pipeline, but ProRes Raw is moving this to the computer. "
What he means is that the debayering moves out of the camera and into the NLE or separate grading software. But where a camera previously (before it could output raw) gave a look to an image - and sometimes this might have been a very desirable look - Atomos is now working with the manufacturers to recreate that look inside the NLE itself.
"But Atomos is working with the camera manufacturers. We will probably be able to match the manufacturer's processing within the NLE. So you can stay in the camera manufacturer's look and feel, or you can step outside of it."
There's a whole new distributed architecture being suggested here. To explain it, here's an analogy.
There's a fashion going on right now in electronic music where you can mix and match components from different synthesiser manufacturers to build your own modular synth from a virtually unlimited diversity of parts. So, for example (forgive the geeky terms, but if you're into synths you'll understand this very well), you could use an oscillator from Moog, a filter from Korg and an envelope generator from Roland. You can take the essence of a process from several manufacturers and mash them up in ways that weren't possible before - because, in the case of video, those processes were embedded (trapped, essentially) inside the camera.
So my reading of this - highly speculative, of course - is that we're moving towards an architecture where in-camera characteristics, expressed through previously embedded processing, can be used and combined in ways that weren't possible before. It's another example of virtualisation, which is going to be a big enabler for distributed processing. One side effect is that it's sometimes going to be hard to tell exactly where the processing is taking place (in other words: which device is doing what?).
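In code terms, the modular idea might look something like the sketch below: processing stages chained like synth modules. None of the imagined stage names correspond to real vendor plug-ins; the point is that functions which used to be trapped inside one camera body could be mixed and matched on the computer.

```python
def compose(*stages):
    """Chain image-processing stages into a single frame-to-frame function."""
    def pipeline(frame):
        for stage in stages:
            frame = stage(frame)
        return frame
    return pipeline

# Imagine each stage shipped by a different manufacturer:
# grade = compose(
#     nikon_sensor_cleanup,       # the camera maker's own pixel correction
#     third_party_stabiliser,     # someone else's stabilisation
#     atomos_debayer,             # debayering in the recorder or NLE
#     fuji_film_simulation_look,  # another maker's colour science
# )
# result = grade(raw_frame)
```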
And it's worth remembering what an unusual situation Atomos is in. It's rare for Japanese manufacturers to open up to this extent to third parties. It's rare for other companies to be so deeply in control of their hardware and software stacks. And it's rare for third parties to have such a deep and persuasive relationship with these camera manufacturers.
Since it started, Atomos's core technology has been built around recording. It may have begun with off-the-shelf HDMI chips, but it now bypasses those and has complete control over its HDMI receivers. It can do anything that doesn't cause a fire.
I wonder if, at some point in the future, this type of modularity and distributed architecture will become commonplace. In a way, it already is. There are all kinds of plug-ins for NLEs, but this is a new field: the transfer of the camera's basic processing to a computer. Arguably it's already been done by Blackmagic, with the close relationship between its cameras and its own software powerhouse, Resolve.
We could all go on speculating forever about this, but what's clear is that, now that cameras have mostly reached a point where they're all very good, the focus of development (across all manufacturers) is likely to shift behind the scenes. And at unpredictable times in the near or mid-term future we're going to see surprising and inspiring changes, as an architecture that's been fundamentally fixed since the birth of digital cameras gets blown apart and reassembled in a much more flexible, pliable and manipulable way.
So, to keep pace, don't just look at specifications. Ask about architecture.