Replay: There was a time when “raw” meant a recording of the unaltered brightness values from a sensor. It tended to mean “uncompressed” too, although even by 2002 there was already compressed raw in the guise of CineForm. Now, there are lots of choices for both compressed and uncompressed raw recording, and a lot of people trying to push the idea that any kind of raw represents a huge technical challenge.
That's not always true. Practically all modern cameras use Bryce Bayer's approach to colour sensors, so a lot of mathematics has to be done on the sensor data before it's in any sense viewable. Record raw, and we're not doing that mathematics, so raw is, on paper, less work. None of that colour processing is required at the point of recording – it'll be done later. That's something some early raw-only cameras leveraged to simplify their engineering.
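To make that concrete, here's a minimal sketch (in Python with NumPy; the function names are mine, not any camera's actual pipeline) of the kind of interpolation a camera skips by recording raw: a naive bilinear demosaic of an RGGB Bayer mosaic. Real cameras use far more sophisticated algorithms.

```python
import numpy as np

def conv3(img, k):
    """3x3 neighbourhood sum, weighted by k, with zero padding."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def demosaic_bilinear(cfa):
    """Naive bilinear demosaic of an RGGB mosaic (2D float array, values 0..1).
    Each missing colour sample becomes the average of its nearest neighbours."""
    h, w = cfa.shape
    r = np.zeros((h, w)); r[0::2, 0::2] = 1   # red photosite mask
    b = np.zeros((h, w)); b[1::2, 1::2] = 1   # blue photosite mask
    g = 1 - r - b                             # green is everything else
    k = np.ones((3, 3))
    def interp(mask):
        # Normalised convolution: average only the samples we actually have.
        return conv3(cfa * mask, k) / np.maximum(conv3(mask, k), 1e-9)
    return np.stack([interp(r), interp(g), interp(b)], axis=-1)
```

Feed it a flat grey mosaic and a flat grey RGB image comes back; feed it real sensor detail and you get exactly the fringing and zipper artefacts that cleverer algorithms exist to suppress.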
Still, in most cases, the need for a usable viewfinder image means that at least some of that processing has to be done anyway. Raw-only cameras occasionally struggled with this through the mid-2000s, producing a terrible viewfinder picture even when the final image was absolutely fine. It wasn't the end of the world: 35mm film cameras didn't have viewfinders that looked much like the final image either, and that didn't stop anyone. It's not what most people expect in the modern world, though, and happily the processing needed to make a raw image viewable is well within the capabilities of a modern monitor-recorder.
While every manufacturer likes to promote the idea that its own cameras demand a unique and special approach, most of the techniques are fairly well known. Some per-camera specialisation is necessary given that each sensor uses different red, green and blue colour dyes, and that each manufacturer might take a different approach to representing brightness. Still, it's a matter of opinion how essential it really is. Open-source programs such as DCRaw have, for years, done a perfectly serviceable job of processing raw stills camera images without the involvement of the camera manufacturers.
Third-party firmware such as Magic Lantern has done similar things. Some of the Bayer-decoding tweaks that exist trade off one thing for another – sharpness versus noise, for instance – so there's a lot of compromise involved.
Either way, for a raw recorder to receive a camera manufacturer's blessing, it will need to decode images to the satisfaction of that manufacturer. The lion's share of the human effort involved in supporting any particular camera for raw recording is, then, not the actual compression. Once that's done, it's done. The effort per camera is in supporting that particular manufacturer's approach to sending raw sensor data down an SDI or HDMI link and decoding it appropriately for display.
When it comes down to actually recording the pictures, raw recording makes a lot of sense even just from a disk space standpoint.
A 4K camera might have a sensor around 4,000 photosites across. If we're not recording raw, we're doing a lot of work to turn that single sensor image into an RGB image with three channels, each of which is around 4,000 pixels across. Then we're compressing that. We're doing work to create more data, and then more work to create less data. Other than viewfinding, as we've seen, there's actually no reason to do that; we might as well compress the raw sensor data straight away.
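The arithmetic is easy to sketch. The 4096×2160 frame size and 12-bit depth below are illustrative assumptions, not any particular camera's specification:

```python
# Back-of-envelope uncompressed frame sizes; the exact figures are assumptions.
width, height, bits = 4096, 2160, 12

raw_bytes = width * height * bits // 8   # one sample per photosite
rgb_bytes = 3 * raw_bytes                # three full-resolution channels after demosaic

print(f"raw: {raw_bytes / 1e6:.1f} MB per frame")   # ~13.3 MB
print(f"rgb: {rgb_bytes / 1e6:.1f} MB per frame")   # ~39.8 MB
```

Before any compression has happened, demosaicking has tripled the amount of data to be dealt with.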
The compression techniques involved might be tweaked slightly compared to the techniques used on finished images, but the basic mathematics are likely to be similar. Some early compressed-raw cameras simply used general purpose image compression chips. There will still, usually, be some processing of the sensor brightness levels to make best use of all the numbers the file can encode.
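One common form that processing takes is a log-style transfer curve, squeezing the sensor's wide linear range into the limited range of code values a file can hold. The curve below is a generic sketch of the idea, not any manufacturer's actual formula:

```python
import numpy as np

def encode_log(linear, stops=16):
    """Map linear sensor values (0..1) onto 0..1 code values, giving each
    stop of exposure an equal share of the range. Purely illustrative;
    real log curves add toe, offset and gain terms."""
    floor = 2.0 ** -stops                      # clamp so log2 is defined
    return (np.log2(np.maximum(linear, floor)) + stops) / stops
```

In a linear encoding, the brightest stop (values 0.5 to 1.0) swallows half the available code values; after a curve like this, each stop gets an equal share, so the shadows and midtones are no longer starved.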
Other than that, once we've split up the red, green and blue records from the Bayer sensor, even without converting it to a full RGB image, it's still picture data, and it compresses fine.
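That splitting step is trivial, which is part of why compressed raw isn't the engineering mountain it's sometimes made out to be. A sketch, assuming an RGGB pattern (other sensors use different layouts):

```python
import numpy as np

def split_rggb(cfa):
    """Split an RGGB Bayer mosaic into four quarter-resolution planes.
    No interpolation, no new data: every sample keeps its original value."""
    return {"r":  cfa[0::2, 0::2],
            "g1": cfa[0::2, 1::2],
            "g2": cfa[1::2, 0::2],
            "b":  cfa[1::2, 1::2]}
```

Each plane is ordinary greyscale picture data, and ordinary picture compression works on it.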
So things like ProRes Raw and Blackmagic Raw are far more of an exercise in standardisation and agreement between recorder, camera and post software manufacturers than they are novel compression technology. Both use the same sort of underlying mathematics, the discrete cosine transform, that goes all the way back to things like JPEG, DV, HDCAM and ProRes itself.
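The transform itself is compact enough to sketch. This is a standard orthonormal DCT-II, the mathematical core those formats share; real codecs wrap it in quantisation and entropy coding, and what follows is the textbook version, not any codec's actual implementation:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (the 8x8 case is the JPEG/DV staple)."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def dct2(block):
    """2D DCT of a square block: transform rows, then columns."""
    m = dct_matrix(block.shape[0])
    return m @ block @ m.T

# A flat block compacts to a single DC coefficient - that energy
# compaction is what makes the transform worth doing.
flat = np.ones((8, 8))
coeffs = dct2(flat)
```

The flat block's entire content lands in `coeffs[0, 0]`; everything else is zero, and runs of zeros compress almost for free.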
Other compressed raw formats, including CineForm and Redcode, use discrete wavelet transforms, demanding more computing power (and so more size, weight, battery power and cost) but generally creating better pictures for a given bandwidth. All of them have specific extensions to support high bit depth and efficient encoding and decoding, but at some level, compression is compression.
Compressed raw workflows offer us a lot, and there are technical ways in which they make a lot of sense. Converting a Bayer sensor's data into a viewable image and then compressing the result is, on the face of it, somewhat nonsensical, and there's a strong argument that compressed raw makes better use of storage space.
The practicalities of colour science and compatibility are more complex, especially given the simultaneous existence of Blackmagic Raw and ProRes Raw, which do broadly the same job in the same use case. About the only other downside is that flexibility in post is great until it's used, or misused, to deliberately or even mistakenly alter a DP's work, but that's something people have been dealing with since the widespread introduction of log workflows.
It's an organisational and even an interpersonal issue, not a technical one.
The big question is how much compressed raw really adds over log, especially given that log recordings tend to have wider compatibility with post software (though that's changing, incrementally).
In an ideal world, there's perhaps not that much difference once the result hits the viewer's retina, but the decision in the end, as so often, depends very heavily on what we're trying to achieve.