Thinking of monitoring video from your laptop using HDMI? You might want to rethink that for the following reasons.
AJA’s announcement of the T-Tap Pro Thunderbolt output device gives us another way to get high-quality pictures out of laptops and other small, portable computing setups, this time adding 4K support. Similar things have existed for a while; Blackmagic has the DeckLink Mini Monitor 4K for people who want a small, simple PCIe card, and the UltraStudio Monitor 3G for anyone who needs Thunderbolt.
People might ask what the point of any of this is. Many laptops already have HDMI outputs capable of driving most common displays, and it’s possible to use them to drive a reference monitor and do some grading, for certain values of “reference” and “grading.” There are even USB3 devices that’ll give us a spare HDMI output if there isn’t one by default. Still, for a couple of reasons, it’s not necessarily a great idea to couple your MacBook up to a digital cinema projector and tell the producer “this is what it’s going to look like.”
The first issue is colour accuracy, and that’s a reasonable concern. Even if your target monitor supports the colour setup you’d like to use, graphics cards built to render video games are likely to make all sorts of attempts to be helpful when given a picture to display, and that might mean altering things. The result is that the numbers we put in our graphics files might not always be the numbers that end up coming out of the HDMI port.
As a practical matter, based on calibrating a monitor driven by an I/O card and then measuring it again driven by a GPU in the same system, it turns out it’s at least sometimes not a big deal. The colour inaccuracies introduced solely by the computer’s video output chain tend to be fairly small. That’s far from guaranteed; it’s just what we’ve seen in a few spot tests using rather basic probes, and there were errors which would be just about visible if viewed side by side. They were, however, no worse than the out-of-the-box performance of some midrange monitors promoted for colour accuracy. That does assume we set things like studio- or full-range video output as required, don’t move the colour controls in the control panel, haven’t told the operating system to use a low-blue-light colour scheme in the evening, and so on.
And of course, if you plug your monitor into a spare GPU output and calibrate it, you’ll tweak out many of those errors anyway. That’s not ideal, given that it creates a monitor really only suited for use with that specific setup, but then again for really critical work the surrounding environment should be taken into account anyway. It also won’t work if the GPU is deliberately limiting minimum or maximum signal levels in a way that simply precludes reasonable calibration; again, limited experience suggests that they generally don’t. That can change tomorrow, without notice, so if you do this, make sure the drivers can’t be updated automatically.
In short, accurate colour on GPU outputs can work, but it can also be a bit of a minefield, and it’s understandable that many people prefer to use more predictable hardware.
Perhaps the more visible problem, though, is that of frame rate. Computer desktop displays often run at 60Hz, although it’s generally possible to change that. Often they won’t go as low as 24, but they’ll usually do 72, which ought to make it possible to display each 24fps frame three times for smooth and consistent animation. And it is technically possible to do that, although most software actually doesn’t.
A simple media player like VLC typically times its efforts based on the sound playback, which will usually be sending something like 48,000 samples per second to a sound card. That means that every 2,000 sound samples, one 24fps frame should be displayed, which sounds enticingly simple. The problem is that the electronics generating that 48,000-sample-per-second rate are not the same electronics generating the 72Hz rate at which the monitor displays frames. Resampling sound without creating audible artefacts is difficult, so the usual solution is simply to update the displayed frame every 2,000 samples, without worrying about how long each frame stays on screen.
Practically speaking, this means that usually, generally, mostly, if we set the monitor to 72Hz, we’ll see each 24fps frame three times, which is coincidentally very similar to how cinema projectors used to work – just without the black gaps for the shutter to pass. If the sound card is running slightly fast, we’ll eventually see a 24fps frame only twice; if it’s running slightly slow, we’ll see a frame four times. If the monitor is running at 60Hz, we’ll see something like an alternating pattern of frames repeated twice and frames repeated three times, since 60 ÷ 24 = 2.5 – again, with occasional excursions.
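If you want to see that cadence arithmetic laid out, here’s a minimal Python sketch. It isn’t how VLC or any other real player is implemented; it simply assumes the display shows whichever frame is “current” at each refresh, and counts how many refreshes each 24fps frame survives at 72Hz and at 60Hz.

```python
# Rough illustration of the cadence described above, not a model of any real player.
# Assumption: the display simply shows whichever video frame is "current" at each refresh.

def repeat_pattern(video_fps, refresh_hz, refreshes=60):
    """Count how many consecutive display refreshes each video frame stays on screen."""
    counts = []
    last_frame = None
    for r in range(refreshes):
        frame = (r * video_fps) // refresh_hz   # integer maths: which frame is current at this refresh
        if frame == last_frame:
            counts[-1] += 1                     # same frame held for another refresh
        else:
            counts.append(1)                    # a new frame appears on this refresh
            last_frame = frame
    return counts

print("72Hz:", repeat_pattern(24, 72)[:8])  # [3, 3, 3, 3, 3, 3, 3, 3] - a clean three-refresh hold
print("60Hz:", repeat_pattern(24, 60)[:8])  # [3, 2, 3, 2, 3, 2, 3, 2] - the uneven 3:2-style cadence
```

In a real system the audio and display clocks also drift relative to one another, which is where those occasional two- or four-refresh excursions come from.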
That’s not generally very visible – basically all consumer-targeted video playback on computers, phones and tablets works like this – and, again, it doesn’t make it impossible to do good work that way, but it’s not something for final quality control assessments. Experienced eyes might easily mistake the cadence for dropped frames, frame rate conversion errors, or problems with motion graphics or visual effects.
These are all problems that could be worked around in software. Graphics card manufacturers could implement a creator mode which guaranteed numerically transparent output, and software engineers could write playback code that maintains an accurate frame rate, at least when the monitor refresh rate is an integer multiple of the video frame rate (and warns when it isn’t). Some monitors and GPUs also support things like Nvidia’s G-Sync, which allows the computer to control the monitor’s update rate to some degree. It’s intended for gaming, but could probably also be used for more accurate video playback.
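The warning part, at least, would be trivial. A hypothetical helper along these lines (the name and wording are mine, not taken from any shipping player) is all it would take to flag a refresh rate that can’t hold each frame for a whole number of refreshes.

```python
def check_refresh(video_fps, refresh_hz, tolerance=1e-3):
    """Report whether the display refresh is an integer multiple of the video frame rate."""
    ratio = refresh_hz / video_fps
    # A small tolerance lets 23.976fps material against a 71.928Hz display still pass cleanly.
    if abs(ratio - round(ratio)) > tolerance:
        return (f"Warning: {refresh_hz}Hz is not an integer multiple of {video_fps}fps; "
                f"frame repetition will be uneven.")
    return f"OK: each frame can be held for exactly {round(ratio)} refreshes."

print(check_refresh(24, 72))  # OK: each frame can be held for exactly 3 refreshes.
print(check_refresh(24, 60))  # Warning: 60Hz is not an integer multiple of 24fps...
```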
All that’s quite a big ask, though, so it’s probably better just to pick up an AJA T-Tap or Blackmagic UltraStudio Monitor, and enjoy increased confidence in what you’re seeing.