We seem to be experiencing a minor revolution in audio product testing. For the last two decades, such testing has been almost entirely subjective, rarely based on anything more than the opinion of a single listener, formed in uncontrolled, sighted tests. Until recently, SoundStage! was one of only a few audio publishing outlets presenting controlled, objective testing—specifically, audio measurements. But lately, measurements have become more common on websites, in online forums, and on YouTube. As someone who has been nagging for more audio measurements in reviews since the late 1990s, I should be happy about this—and I am, but it has me concerned, too.

This renewed interest in audio measurement has two main drivers: the spread of free or low-cost measurement tools, such as Room EQ Wizard and the miniDSP EARS, and the relatively recent development of science-based standards, such as the Harman curve for headphone measurements, CTA-2034-A for speaker measurements, and CTA-2010 for subwoofer output measurements.

Those are all welcome developments, but as I simultaneously celebrate and lament the 30th anniversary this month of my first published audio product review (of the AudioSource SS Three surround decoder/amplifier, for Video magazine), I can't help recalling the numerous times my colleagues and I put too much faith in measurements, trusting that we didn't need to devote time to hands-on evaluation because the numbers told us everything we needed to know. I'd like to share those tales with you, with the caveat that all data included below are to the best of my memory.

Episode 1: the laserdisc players

This first event occurred early in my career, which began just as the first video products with digital signal processing (DSP) started to hit the market. Before that, we could safely judge most video products almost entirely by their measured resolution and signal-to-noise (S/N) ratios. But while many of the new products with DSP had very high S/N ratios, some also produced digital artifacts that the eye could easily detect but the measurements could not.

For example, in one of the first blind product comparison tests I conducted, we rounded up four leading laserdisc players in the $1200 (all prices USD) price category, all of which used DSP. As high and low anchors, we chose the Pioneer LD-S2 (a $3500, DSP-equipped reference player that had the highest S/N ratio our technical editor, Lance Braithwaite, had ever measured) and a $400 garden-variety Hitachi player (actually a rebadged Pioneer design). We expected all four of the $1200 players to beat the Hitachi, and the LD-S2 to beat the $1200 players.

Sure enough, the LD-S2 beat all the $1200 players—but to our amazement, so did the $400 analog Hitachi. As I recall, it delivered better subjective resolution because it didn't have the image-softening digital artifacts the others did. And as we'd already started to realize, most of the laserdisc players on the market had signal-to-noise ratios high enough that the noise wasn't visible, so higher S/N ratios didn't matter—just as you can't hear the difference in noise between audio products with S/N ratios of, say, 90dB and 120dB. We later had similar experiences with camcorders, where products that measured a few dB better sometimes produced a visibly inferior picture.
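
To put those S/N numbers in perspective, here's a minimal Python sketch (numpy assumed) that builds a 1kHz tone, adds noise floors 90dB and 120dB down, and verifies the resulting ratios. The levels and helper names are my own illustration, not figures from the tests described above.

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def snr_db(sig, noise):
    """Signal-to-noise ratio in dB, from RMS levels."""
    return 20 * np.log10(rms(sig) / rms(noise))

fs = 48_000
t = np.arange(fs) / fs                  # one second of samples
tone = np.sin(2 * np.pi * 1000 * t)     # 1kHz test tone

for target in (90.0, 120.0):
    noise = np.random.randn(len(t))
    # Scale the noise so it sits exactly `target` dB below the tone
    noise *= rms(tone) / (rms(noise) * 10 ** (target / 20))
    print(f"target {target:.0f}dB -> measured {snr_db(tone, noise):.1f}dB")
```

At a playback level where peaks hit 105dB SPL, a 90dB S/N ratio puts the noise at 15dB SPL, below the ambient noise floor of nearly any listening room, so the extra 30dB of the better-measuring product buys nothing audible.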

Episode 2: the subwoofers

I encountered a similar situation a few years later, around 1998, when I tested a half-dozen high-end subwoofers for Home Theater magazine. At the time, the most—and really only—significant measurement of a subwoofer was considered to be its frequency response, usually measured by placing the measurement microphone about 1/4″ from the woofer (and, if applicable, from the port or passive radiator), then summing the results. This measurement had to be made at a very low signal level, or it would push the microphone into distortion.
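
For the curious, here's a minimal sketch of that summation in Python (numpy assumed), using the usual convention, which I'm assuming here, of scaling the port's near-field level by the square root of the port-to-woofer area ratio before summing. The driver sizes and response values are invented for illustration.

```python
import numpy as np

def sum_nearfield(woofer_db, port_db, woofer_area, port_area):
    """Combine near-field woofer and port responses (magnitudes in dB).

    The port level is trimmed by the square root of the radiating-area
    ratio, the usual convention for near-field summation. Phase is
    ignored, which is a real simplification: near the port tuning
    frequency the two outputs aren't in phase, so a magnitude-only
    sum is only an approximation.
    """
    port_trim_db = 20 * np.log10(np.sqrt(port_area / woofer_area))
    w = 10 ** (np.asarray(woofer_db) / 20)
    p = 10 ** ((np.asarray(port_db) + port_trim_db) / 20)
    return 20 * np.log10(w + p)

# Invented example: 15" woofer, 4" port (areas in square meters)
woofer_area = np.pi * (15 * 0.0254 / 2) ** 2
port_area = np.pi * (4 * 0.0254 / 2) ** 2
freqs = [16, 20, 25, 31.5, 40, 50, 63, 80]        # Hz
woofer_db = [70, 72, 75, 80, 86, 90, 92, 93]      # made-up dB SPL
port_db = [88, 90, 88, 84, 78, 70, 62, 55]        # made-up dB SPL
summed = sum_nearfield(woofer_db, port_db, woofer_area, port_area)
for f, s in zip(freqs, summed):
    print(f"{f:5.1f}Hz: {s:5.1f}dB")
```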

By this measurement, the best subwoofer of the bunch was a slim, dual-10″ sealed model from Von Schweikert Audio, whose response I measured at -3dB down to 19Hz. The worst was a 15″ ported model from B&W (in the decades before and after the company spelled out Bowers & Wilkins), whose -3dB point was just 30Hz. Yet in my blind tests, the panelists raved about the B&W's deep bass extension and complained that the Von Schweikert didn't have enough bottom end for action movies.

Baffled by this, I ran every standard test I could think of to explain the difference (including pulling the plate amps out and measuring those), and then I started concocting experimental measurements. I finally got my answer when I used an Audio Precision analyzer to measure the subs’ output in dB versus distortion, at 20, 40, 60, and 80Hz. This measurement lined up perfectly with the listener impressions—the chunky B&W delivered ample output at 10% total harmonic distortion at 20Hz, while the slim Von Schweikert was down about 15dB under the same conditions. It appeared the Von Schweikert sub had been EQ’d with a bass boost to make it flat down to 20Hz (still a common practice with small subs), but its amp and drivers didn’t have the physical capacity to deliver those low frequencies at a useful volume.
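
The core of that output-versus-distortion measurement is conceptually simple, and here's a rough Python sketch of it (numpy assumed), with a tanh soft-clipper standing in for a real amp-and-driver chain; the real test, of course, runs through a microphone and an analyzer. The THD-from-FFT-bins calculation is standard, but the drive levels and the fake_subwoofer stand-in are my own illustration, not how the Audio Precision does it.

```python
import numpy as np

FS = 48_000        # sample rate, Hz
F0 = 20.0          # test frequency, Hz
N = FS             # one-second capture, so FFT bins are 1Hz wide

def thd(capture, f0, n_harmonics=9):
    """THD as a ratio: harmonic energy over the fundamental."""
    spec = np.abs(np.fft.rfft(capture * np.hanning(len(capture))))
    bin_of = lambda f: int(round(f * len(capture) / FS))
    fund = spec[bin_of(f0)]
    harm = np.sqrt(sum(spec[bin_of(k * f0)] ** 2
                       for k in range(2, n_harmonics + 2)))
    return harm / fund

def fake_subwoofer(drive):
    """Stand-in for a real speaker: tanh soft clipping adds
    progressively more odd-order harmonics as drive rises."""
    t = np.arange(N) / FS
    return np.tanh(drive * np.sin(2 * np.pi * F0 * t))

drive = 0.1
while thd(fake_subwoofer(drive), F0) < 0.10:   # step up until 10% THD
    drive *= 10 ** (1 / 20)                    # 1dB steps
print(f"10% THD reached at drive = {drive:.2f} "
      f"({20 * np.log10(drive):+.1f}dB re unity)")
```

Run the same loop at 20, 40, 60, and 80Hz and you get exactly the kind of output-at-10%-THD comparison that separated the B&W from the Von Schweikert.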

The CTA-2010 standard eventually codified subwoofer output measurements, which have become a common component of subwoofer reviews for some publications. But that left a legacy of about two decades’ worth of subwoofer measurements that misrepresented the products’ capabilities—and we still see this misrepresentation on some subwoofer spec sheets.

Episode 3: the consulting gig

Still more years later, I was hired as a consultant by a mass-market electronics company trying to figure out why its audio products were getting such bad reviews. (I was reviewing only high-end speakers and projectors at the time, so it wasn’t a conflict of interest.) Another consultant had set them up with a 100-point performance assessment scale based entirely on measurements, but by reviewers’ judgments and the company’s own admission, competing products that scored only, say, a 65 on that scale sometimes clearly sounded better than one of the company’s products that scored 80.

The frequency-response measurements that the previous consultant had prescribed were fine in general, but they were developed for reasonably high-quality speakers, just as the CTA-2034-A standard was. But with the tiny, bass-challenged woofers used for cheap soundbars, home-theater-in-a-box systems, Bluetooth speakers, and TVs, distortion sometimes becomes the overwhelming consideration. In these products, it’s often a good idea to build in a boost at, say, 250Hz, which can at least create an impression of decent bass response without pushing the driver beyond its limits. It may also be a good idea to high-pass filter the woofers well above the frequency where distortion becomes a problem, and to add a corresponding high-frequency roll-off, which makes the system sound subjectively full and balanced even if there’s little response below about 200Hz.
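
As a purely hypothetical illustration of that voicing recipe, here's a Python sketch using scipy.signal: a high-pass filter to keep the little woofer out of trouble, a peaking boost near 250Hz, and a gentle treble shelf to restore the balance. Every corner frequency, gain, and Q below is a placeholder, not a real product tuning.

```python
import numpy as np
from scipy import signal

FS = 48_000

# 1. High-pass the woofer above its distortion trouble zone
#    (placeholder: 4th-order Butterworth at 180Hz).
hp_sos = signal.butter(4, 180, btype="highpass", fs=FS, output="sos")

# 2. Peaking boost near 250Hz to create an impression of bass
#    (RBJ-cookbook biquad; placeholder: +4dB, Q = 1).
def peaking(f0, gain_db, q, fs):
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

b_pk, a_pk = peaking(250, 4.0, 1.0, FS)

# 3. Matching treble roll-off: first-order analog high shelf
#    (placeholder: -3dB up top, corner near 6kHz), digitized with
#    the bilinear transform, so the top end doesn't sound thin next
#    to the missing bottom end.
g = 10 ** (-3 / 20)
w = 2 * np.pi * 6000
b_sh, a_sh = signal.bilinear([g, w], [1, w], fs=FS)

# Net response of the three stages
f = np.logspace(np.log10(20), np.log10(20_000), 10)
_, h_hp = signal.sosfreqz(hp_sos, worN=f, fs=FS)
_, h_pk = signal.freqz(b_pk, a_pk, worN=f, fs=FS)
_, h_sh = signal.freqz(b_sh, a_sh, worN=f, fs=FS)
for fi, db in zip(f, 20 * np.log10(np.abs(h_hp * h_pk * h_sh))):
    print(f"{fi:8.0f}Hz: {db:+6.1f}dB")
```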

Traditional speaker measurements would tell you these products are terrible, even if your ears might tell you they’re OK. The solution I recommended was to compare the company’s upcoming products with competing products in blind listening panels, using people outside the engineering and marketing teams. (I don’t know the end of the story because one of the company’s competitors hired me away soon afterward.)

The bottom line

In all of these cases, it was only through our subjective experiences that we realized judging the products entirely by the measurements was a big mistake. But subjective assessment is often missing from today's measurement-oriented reviews. Sometimes the product is simply given a pass-fail grade, condemned if its measurements fall short of standards like those noted above. Or, in the case of electronics, the product might be dubbed a failure purely because it has, say, a worse S/N ratio or more jitter than some competing product, even though the variations we usually see in these measurements have no proven correlation with listener perceptions.

I consider myself lucky to have worked over the decades for several publications that not only paid for my measurements but also were willing to fund the innumerable blind panel tests I’ve conducted. These tests let me put my measurements in perspective, and they’ve taught me how uncertain the results can be when you ask someone to listen to an audio product and tell you what they think of it.

I’m glad we have scientific standards that give us a better idea of how audio measurements correlate with subjective performance assessments. For instance, I’m thrilled that we have the Harman curve as a good general target for headphone response. Yet I rarely see reviewers mention the fact that there are really six variants on the Harman curve: three for headphones, three for earphones. (Read more about that here.) And I know from the years of reader response I’ve received, in comments sections and through e-mail, that many audio enthusiasts prefer a more trebly response, and some prize spaciousness over accurate frequency response.
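
For what it's worth, here's a hedged sketch of how one might score a headphone against a target such as one of those Harman variants: interpolate both curves onto a common log-frequency grid and report the RMS deviation. The band limits, the level-matching step, and the synthetic data are all my assumptions; real preference research weights the bands differently, and no single number captures the listener-to-listener spread described above.

```python
import numpy as np

def rms_deviation_db(meas_f, meas_db, targ_f, targ_db,
                     f_lo=100.0, f_hi=10_000.0, n=200):
    """RMS deviation (dB) of a measured response from a target over a
    log-spaced grid. Band limits here are my own choice, not Harman's."""
    grid = np.logspace(np.log10(f_lo), np.log10(f_hi), n)
    m = np.interp(np.log10(grid), np.log10(meas_f), meas_db)
    t = np.interp(np.log10(grid), np.log10(targ_f), targ_db)
    err = m - t
    err -= err.mean()            # ignore the overall level offset
    return float(np.sqrt(np.mean(err ** 2)))

# Synthetic stand-ins; in practice you'd load the measurement rig's
# export and a published target file.
f = np.logspace(np.log10(20), np.log10(20_000), 60)
target = -2.0 * np.log10(f / 1000.0) ** 2      # made-up smooth curve
measured = target + np.random.randn(f.size) * 1.5
print(f"RMS deviation: {rms_deviation_db(f, measured, f, target):.2f}dB")
```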

And what will happen as more headphones incorporate DSP, and engineers put less effort into acoustical design and instead just use the DSP to EQ the headphones to the Harman curve or some other target? Will we see something similar to what happened with video products, where a major change in the way the products were designed introduces new artifacts that our old measurements couldn’t detect?

The only way we can be sure is if manufacturers, reviewers, and audio enthusiasts continue to perform subjective assessments of audio products and don’t put too much trust in the machines.

. . . Brent Butterworth

Comments

  • Pete · 4 years ago
    Before all this hifi work I seem to recall you wrote for 1 or maybe more cycling mags. Very basic articles as I recollect.
    • Brent Butterworth · 4 years ago
      I had forgotten about that! Bicycling magazine flirted with the idea of hiring me in the early '90s, and in the process I did a couple of articles for them, I think one on diet and one on taking tech products on bike tours.
  • Mauro · 4 years ago
    Long life to Brent! I wish more experienced audio reviewers would share their experiences as you did here, and that editors would approve this kind of initiative.

    I feel the need for someone to explain to us readers how measurements correlate with perceived sound, especially for electronics. Does a -140dB background noise matter? Does intermodulation mean anything? And what about jitter? I don't really know.
