If you really want to measure the impact a component has on an audio setup, you need to do a double-blind test.
Sometimes you see claims about audio cables that make you wonder whether you’re reading science or interior decorating. Science is based on evidence; interior decorating is based on feelings about your surroundings. What’s more, the logical meaning of the information sent along a digital interconnect is precise: you should get out of the cable exactly the same numbers that you put in, so there should be no room for doubt. All of this assumes, of course, that you have the technical means to verify that the cable is passing the information through correctly.
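If you can capture the received stream to a file, "same numbers in, same numbers out" is directly checkable. Here's a minimal sketch in Python; the file paths and function names are my own illustrative choices, not part of any particular capture tool:

```python
# Sketch: verifying that a digital transfer is bit-perfect, assuming you can
# save both the source stream and the captured (received) stream to files.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_bit_perfect(sent_path: str, received_path: str) -> bool:
    """True if the received copy is byte-for-byte identical to the source."""
    return sha256_of(sent_path) == sha256_of(received_path)
```

If the digests match, the cable delivered exactly the numbers it was given; any audible difference would then have to come from somewhere other than the data.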
You can see that what I’m saying here is essentially that there really should be no need to test digital cables. It’s almost the case that if you ask the question, then you don’t know what you’re talking about.
There is more to it than this. If you don’t re-clock the samples coming out of a digital cable, they may have been subject to jitter, which can distort the sound when it’s converted back to analogue. But this doesn’t have to happen, because in practice signals are always re-clocked.
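To see why jitter matters only if it survives to the converter, here's a toy simulation. It samples an ideal sine wave at instants perturbed by random timing error and measures the resulting amplitude error; the tone frequency, sample rate, and jitter figures are illustrative assumptions, not measurements of any real interface:

```python
# Sketch: how clock jitter corrupts conversion back to analogue.
# Each nominal sampling instant is perturbed by Gaussian timing error;
# the amplitude error grows with the jitter. Re-clocking restores the
# uniform instants, which is why the distortion "doesn't have to happen".
import math
import random

def jitter_rms_error(jitter_rms_s: float, fs: float = 48_000.0,
                     f_tone: float = 1_000.0, n: int = 4_800,
                     seed: int = 0) -> float:
    """RMS amplitude error on a full-scale sine caused by timing jitter."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        t_ideal = i / fs
        t_jittered = t_ideal + rng.gauss(0.0, jitter_rms_s)
        err = (math.sin(2 * math.pi * f_tone * t_jittered)
               - math.sin(2 * math.pi * f_tone * t_ideal))
        total += err * err
    return math.sqrt(total / n)
```

With zero timing error the amplitude error is exactly zero, and it rises roughly in proportion to the jitter, which is the whole argument for re-clocking at the receiving end.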
If a cable is really bad, or simply too long, the digital signal can be damaged and errors can occur. That’s beyond the scope of this article, though, because we’re talking here about cables that aren’t too long and don’t damage the signal.
Given all the above, is there still scope for differences between cables? I don’t think so, but I’m not going to be dogmatic about this. The reason is that there are people I’ve talked to who are either very intelligent, or completely mad and very good at covering it up. I spoke at length to someone who makes USB cables and sells them for £400. You’d think that no USB cable, whatever it costs, could have any advantage over a cheap generic one. After all, it doesn’t matter whether I text my phone number to you, or deliver it via a chauffeur-driven S-Class on a silver plate; it’s still my phone number, and it’s either right or wrong.
But it’s always good to remind ourselves that there are things about the universe we don’t know because we can’t measure them. And the reason we can’t measure them is that we don’t know what they are. Until we understand the things we don’t understand, they seem silly, ludicrous, and just plain absurd.
I would say that we should adopt a dual strategy here: healthy (and quite strong) skepticism, coupled with an open mind.
To satisfy our skepticism, we need to have a rigorous test strategy. Here’s what I suggest.
First, the equipment. Everything, of course, has to be identical, apart from the cables. Everything. That’s it. It also goes without saying that the equipment should be good. There’s no point in using a setup that’s so bad it will mask any possible differences.
Next, the music: as wide a range as possible, on the assumption that the audience will have different tastes. I’ve found that someone who’s never listened critically to classical music will have less idea of what sounds good than someone who was brought up on it. Likewise, you can imagine that a devoted opera fan might have difficulty with the finer points of EDM (Electronic Dance Music).
It’s very important that neither the audience nor the people in charge of the testing know which cable is being tested. If this were not the case, it would be very easy for people to make their minds up in advance - or at least be influenced by their expectations - and that could be enough to sway their opinions. Similarly with the testers: body language or a facial expression could overtly or subliminally hint at which cable is in use.
I don’t have any suggestions as to the length of the test, or to how often the cables are swapped. The only thing I would warn against is audience fatigue.
That’s it then. I don’t think it needs to be any more complicated than that. Testing like this would show whether there is a statistically meaningful difference in the “sound” of audio cables. If, under these exact “double-blind” conditions, there was a consistent and statistically meaningful difference between the two cables, and if the preferred one turned out to be the more expensive one, then - and only then - would it be reasonable to say that the “better” cable justified a price difference.
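“Statistically meaningful” has a concrete test behind it. In an ABX-style trial the listener guesses which of two cables is playing; if there is no audible difference, each trial is a fair coin flip. The sketch below computes the exact one-sided binomial p-value from the standard library (the significance threshold of 0.05 is a conventional choice, not a requirement):

```python
# Sketch: is a blind-test score better than guessing?
# Under the null hypothesis (no audible difference) each trial succeeds
# with probability 0.5, so we compute P(X >= correct) for
# X ~ Binomial(trials, 0.5) - the chance of doing at least this well
# by pure luck.
from math import comb

def p_value(correct: int, trials: int) -> float:
    """Exact one-sided binomial p-value for `correct` hits in `trials`."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def is_significant(correct: int, trials: int, alpha: float = 0.05) -> bool:
    """True if the score is unlikely (below alpha) to be pure guessing."""
    return p_value(correct, trials) < alpha
```

For example, 12 correct answers out of 16 trials gives a p-value of about 0.038 - unlikely enough to count as a real, audible difference at the 5% level - whereas 9 out of 16 is entirely consistent with guessing.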