Why Bi(amp/wire) Speakers?

Pretty often you can say that if one is good, two is better. Works for wheels on a bike, earphones, cars if you have a spouse, beers, outboard motors…well, lots of things. So if one speaker wire is good, then two MUST be better, right?

Ahem…well…not necessarily. When it comes to wiring speakers, there are two popular, but very different, methods of adding wires to speakers. They are Bi-Wiring and Bi-Amping.

Bi-Amping is fairly easy to understand, but it too comes in two flavors. The common one is where a two-way speaker has its crossover designed in such a way that the tweeter circuit and woofer circuit can be separated at the speaker, then each driven with its own amplifier. The amplifiers receive identical inputs, and can simply be Y-corded together. Their outputs are wired to the speaker terminals with separate runs of speaker wire, each attached to its own speaker connection: one for the woofer and woofer crossover, one for the tweeter and tweeter crossover. And there you have it…Bi-Amped speakers.

The second style of Bi-Amping still uses two amplifiers, but they are wired directly to the woofer and tweeter without any passive crossover between them and the drivers. The crossover becomes an active device placed just ahead of the amplifiers and, hopefully, optimally tuned for the speaker. That's where it gets a bit tricky. The crossover you bypassed was designed specifically for the speaker, so replicating it with an active device may not be quite so straightforward. You could end up inadvertently re-engineering the speaker with your active crossover, with very different results. More on that later, but that's the second, somewhat less-common way to bi-amp speakers. It's not usually done in consumer systems, but is frequently found in pro systems. In fact, the TMH Tesseract Speakers are tri-amplified with a fully active line-level crossover designed specifically for them.

Bi-wiring also comes in two flavors. The first and simplest is also almost completely pointless. You simply run two lengths of speaker wire from your AVR or amp to your speaker. The two runs are in parallel, tied together at the amp and again at the speaker. What you have accomplished is effectively doubling the cross-sectional area of the wire, cutting the wire resistance in half. That's great, if you needed to do that, but you probably don't. If you've used #14 for a short run, or #12 for a longer one, the wire resistance is already low enough that cutting it in half won't matter. You will also have doubled the wire capacitance, but that's completely swamped out by the amp and speaker already, so no real change there. You'll have cut the wire inductance in half, but again, even with average cheap wire, that should already be a non-factor. Overall, if you're thinking of bi-wiring this way, think again: find the next bigger wire gauge and just do one run.
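If you want to see just how little is at stake, here's a back-of-the-envelope sketch of the resistance math. The gauge diameters and copper resistivity are standard reference values; the 5 meter run length is just an assumption for illustration.

```python
import math

# Copper resistivity at room temperature, in ohm-meters
RESISTIVITY_CU = 1.68e-8

# Approximate conductor diameters in meters for common speaker-wire gauges
AWG_DIAMETER = {14: 1.628e-3, 12: 2.053e-3}

def loop_resistance(gauge, run_meters, parallel_runs=1):
    """Round-trip resistance of a speaker run (out and back = 2x the length)."""
    area = math.pi * (AWG_DIAMETER[gauge] / 2) ** 2
    one_run = RESISTIVITY_CU * (2 * run_meters) / area
    return one_run / parallel_runs

single = loop_resistance(14, 5)      # one 5 m run of #14
doubled = loop_resistance(14, 5, 2)  # the same run "bi-wired" in parallel

print(f"single #14 run: {single * 1000:.1f} milliohms")
print(f"doubled:        {doubled * 1000:.1f} milliohms")
```

For a 5 m run of #14 you're starting around eight hundredths of an ohm; halving that is meaningless against an 8 ohm speaker.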

The second, and rather odd, method of bi-wiring applies when the speaker's crossover design permits separate drive for its woofer and tweeter: you run separate wires from the amp to the tweeter input on the speaker and to the woofer input. They are, of course, tied together at the amp. What does this do? Not much. Proponents seem to think it prevents the tweeter and woofer from interacting in the wire. But that interaction could only occur if the wire had significant resistance. Your speaker wires don't have high resistance, do they? The bi-wirers also think this reduces intermodulation distortion. Nope. All IMD happens in the speaker itself, not the wire. IMD is a product of something non-linear in the system, and in any sound system the most non-linear element…by a long, long way…is the speaker. Simply sending separate wires to its woofer and tweeter doesn't change that even a little. Again, using heavy enough wire in the first place would eliminate any concern.

So long as we're about it, one more thing to mention. The Bi-Wire Crowd also sometimes likes those little wire supports that keep your wires properly elevated off the floor. Some of these cost over $100 each! There are various stories about how they work. I choose the word "stories" because they don't qualify as "theories" at all. Some claim laying wire on a floor distorts the electric field around the wire, and therefore the signal it carries must surely be distorted too. The results are the usual claimed improvements in non-quantifiable (or even definable) characteristics like "focus" and "detail". But what they actually do is…you guessed it…drain your bank account. Your bank account will surely have less focus and detail when they're done, and that's really all they do.

Bi-amping has a point if you plan to re-engineer the speaker's crossover design. Bi-wiring has no point other than, like the cable lifters, placing a heavy load on your net disposable income.

On the other hand, a good professional calibration job does make a measurable, verifiable, and definitely noticeable improvement. Hit our main web site and contact us if you'd like to get the most out of your sound investment with a pro calibration.

Dolby Atmos AVRs are coming….

The list grows…again. This time it’s been bumped up to Denon/Marantz, and Onkyo/Pioneer (yes Onkyo has gobbled up Pioneer).

Most Atmos capable receivers are found at the top of the line, and in many cases (D/M), the new AVRs are not slated for release until fall. And there's a new way to specify channels, i.e., 7.1.4 means 7 channels (L, C, R, SL, SR, BL, BR), plus 1 sub (one sub? Really?) and 4 Atmos speakers (positions TBD, but we think front high and overhead somehow).

Pioneer has announced Atmos speakers, which apparently are meant for those who don’t want speakers in their ceilings. How they work is not clear yet. Expect entire lines of Atmos speakers, though.

What will I need for my first dose of Atmos at home? Basically, a few things. An Atmos AVR/pre-pro, which may actually be the easiest part. Then you need more speakers, probably at least four more, perhaps more if you are at 5.1 now. And lastly, content. We now have promises for Atmos content on BD as early as this fall.

Two cautions: First, if you have already installed height and wide speakers for some other process, like Audyssey DSX, you may have to move them, or even add to them. If you're considering that now, hang on a few months and get it right for Atmos. As for commitment, well, we don't know how prevalent Atmos content will be, but in theaters there were almost twice as many Atmos releases as 7.1 releases in 2013.

Finally, we don’t really know how well Atmos will scale to smaller systems with fewer than 64 or 32 speakers. Trust Dolby to be all over that one, but there will no doubt be compromises.

Let's all adopt the obligatory wait-and-hear attitude. And I promise to leave Atmos alone for a bit on this blog.

Elevating your Audio – Nowhere to go but UP!

According to research done at Bell Labs in the 1930s, the minimum number of speakers/channels necessary for an acceptable stereo soundstage is three: left, center, and right. Of course, we had two for stereo, not three, which means by extension that stereo doesn't do a reasonable job of producing a believable soundstage! Surprised?

But lots has changed, and we have had more than two for quite some time, actually decades if you include the rather bewildering Quadraphonic days of the 1970s. As for music, I heard my first 5.1 music at a demonstration held at USC in the mid 1990s hosted by Tom Holman (inventor of THX, now holding the title "Audio Direction" at Apple). The demo was spectacular, to say the least. A few years later I provided some engineering assistance with another Holman project, the IAMM (International Association for Multichannel Music) conference. A long story for another time, but the demo room had 8 possible channels to play with. At that time Tom advanced the idea that obvious improvements in sound dimensionality became apparent to all listeners every time the number of channels doubled: mono to stereo, stereo to 5.1. So his next step was 10.2. I heard that demonstration at Bjorn's in San Antonio several years later. Bjorn had built a room dedicated to 10.2, with M&K 2510P powered THX Ultra speakers and M&K subwoofers. Tom's 10.2 speaker plan included "height" and "wide" channels, and differs slightly from some of the 11.2 plans of today. However, the demo, held in a dark room, included actual discrete 10.2 recordings. To date, I've not heard a more believable sound space reproduced anywhere.

So here we sit in the middle of our 5.1 or 7.1 speaker arrays, happy as clams. Or are we? HT fans are always wanting to advance to the next level, and now there's nowhere to go but up. A few years ago Audyssey introduced DSX processing that could drive 9.2 or 11.2 speaker arrays. It includes Tom Holman's "height" and "wide" channels, but stays with the basic 7.1 plan as its base. DSX is a bit of a fake-out, because it has to be. There is no 11.2 material in the world, so it has to do something to play those speakers, and what it does is create artificial reflections that produce the audible illusion of a larger room with high ceilings. It's a nice effect, very convincing, but there's really no material on the soundtrack directed specifically up there.

Time marches forward, and Dolby introduced their theatrical Atmos system a few years ago. Rather than deal with a specific channel count, they say it can do "up to 64" speakers, which also implies it can do fewer if required. Good thing too! That's more speakers than most theaters would be willing to install (read: pay for). Atmos doesn't work like 5.1, where there are channels directed to speakers. It works by directing sound to locations in an object-oriented manner, really a different approach. But it doesn't get us out of the requirement for more speakers in different places. And, it turns out, the biggie is height.

Where do I get me some Atmos? Well, an Atmos equipped theater, logically, and there's a list of them on the Dolby website. But as of this month several home AVR manufacturers have announced new models targeted to the European market that are "Atmos Ready", or have some reference to Atmos. Those would be Denon/Marantz (sorry, no links yet) and Pioneer…and counting, of course. These are big AVRs with up to 11 power amplifiers, so you can at least drive some extra speakers in unusual places. But that's only part of the battle; the other part is where to get some media with Atmos tracks. Yeah, that's a problem right now. It's not that the material doesn't exist, of course it does. In fact, the number of films mixed with Atmos in 2013 is nearly twice that mixed in 7.1, so they're making tracks, we just don't have home access yet. So the cart is before the egg, the horse is in front of the chicken, whatever, but at least we will soon have Atmos capable AVRs, whatever that actually means.

The old adage of "If you want a sound there, you have to have a speaker there" still works just fine, thanks. But "there" is a bit more of an issue at home, especially if you don't have a dedicated home theater room. We don't know much about the home version of Atmos yet, but it's safe to assume that height channels will be important in front, possibly wide channels, and probably a pair over your head, but it's all conjecture. And home Atmos has to be a little smart…it will have far fewer actual speakers to send sound objects to, nowhere near the full possible 64, or the 32 or so they put in some theaters. But that's actually a minor concern for the Dolby Wizards to work out.

Our problem is where to put all those speakers. If you thought finding room for 5.1 was hard, and 7.1 challenged your interpersonal relationships, well, gird up for 11.2 or more. Just get used to it; it'll never really stop until we have full holographic audio which, we hope, will be infinite resolution of any virtual point source in a 360 degree sphere, all emanating from some sort of acoustical holographic emitter. But until then, or in case we die first, Atmos may be the new sound wave of the future. However, let's not get ahead of ourselves. All we have is new AVRs that are Atmos Ready; we still need the speakers and speaker plan, and Atmos encoded material to play at least one good demo.

For Atmos, like 3D, time will tell.  I will, of course, settle for the Holodeck 1.0 if/when….

Change in my UltraHD/4K position


OK, I’ll admit, I’ve been a little stubborn. But really, and I don’t plan to change on this one…”Ultra HD” is a stupid name. Yes, I know it’s “better” than HD, but what do you do next? Super Ultra HD? Then what? Mega Super Ultra HD? Oh, come on. So as a protest, I’ve refused to use the term in favor of 4K, which is simple, direct, and actually means something.

Except it doesn't. Or rather, it means too many things. First of all, 4K in DCI (Digital Cinema Initiatives) terms means 4096 pixels wide. But they've left the height figure variable to cover several different aspect ratios and purposes. So keep the number 4096 in mind; now let's see why UltraHD isn't 4K.

Because it's 3840 wide by 2160 high…once again that dumb 1.78 aspect ratio that never existed anywhere but TV…a frame size that doesn't match anything except itself, meaning that to fill it with an un-cropped "real" 4K movie, every pixel must be subjected to scaling, and scaling that doesn't work out mathematically well at all. Now that's just stupid. We could have had it match pixel for pixel with pro cinema 4K, but nope. See what I mean? UltraHD is stupid. And it's not 4K.

But it's only slightly more idiotic than HD, which fortunately wasn't ever named 2K…because it never was 2K. DCI 2K is 2048×1080, of which 1998×1080 is shown for an aspect ratio of 1.85:1, which is the theatrical standard for "flat" features and has been for decades. So consumer HD had to be 1920×1080 for 1.78:1, and we can get that without scaling, just a 4% horizontal crop. I admit, that's not enough for anyone to object to…but why? We had our choice at one point. We could have picked any aspect ratio we wanted for HDTV, even (gasp!) a pre-existing standard like 1.85:1. We could have picked something that matched what the pros were shooting, pixel for pixel. But no, we had to go and deliberately make it slightly "wrong".

And now, the legacy of that decision makes 4K scaling to UltraHD a real pain, or forces a crop of just over 6%. Come now. And, if the pattern continues, we'll have 8K cropped too. When will it end? Sorry, it won't.
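The crop arithmetic is simple enough to sketch in a few lines, using the DCI and consumer widths quoted above:

```python
# Horizontal crop needed to fit a wider cinema master onto a narrower panel.
def crop_percent(source_width, target_width):
    return (source_width - target_width) / source_width * 100

# DCI 2K "flat" (1998 wide after masking) onto consumer HD (1920 wide)
hd_crop = crop_percent(1998, 1920)

# DCI 4K (4096 wide) onto UltraHD (3840 wide)
uhd_crop = crop_percent(4096, 3840)

print(f"2K flat -> HD:     {hd_crop:.1f}% crop")    # about 3.9%
print(f"DCI 4K -> UltraHD: {uhd_crop:.2f}% crop")   # 6.25%
```

So the 4% figure for HD and the just-over-6% figure for UltraHD both fall straight out of the pixel counts.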

See, movie engineers and TV engineers hate each other, apparently so much so that they don't want to share their toys and play nice. The weird part is, the industry standards group is the Society of Motion Picture and Television Engineers. Boy, those meetings must be a riot. I don't know if I should visualize two groups taking turns arguing with each other, or two groups on opposite sides of a high-school cafeteria food fight. Anyway, they didn't play well together, so they each went off and did whatever they wanted, and now we have this incompatibility thing. Sorry, I really have no sympathy…"they" had their chance for a global aspect ratio and resolution standard, and they blew it. The only argument I've ever heard for HD's silly 1.78:1 aspect ratio was that it provided less pillar-boxing of a 4:3 SD image. And how much 4:3 do you see today? Yet we're stuck with 1.78:1, and thus it has its claws into 4K/UltraHD, and will (unless we change it) continue to plague all new resolutions with the same aspect ratio mismatch.

So, while I still think UltraHD is a stupid name, and the resolution choices are even more ridiculous, we’re stuck with it, and might as well call it what it is. And that ISN’T 4K! It’s UltraHD. So I’m changing my stand on UltraHD, and going ahead and using it, if for no other reason than out of respect for the superior, pre-existing, well-established, real industry standard of 4K.

Just as a footnote, and to reiterate, the standards for UltraHD have not been finalized. So you can buy a so-called UltraHD TV now, but once the issues of color space etc. have been codified, it may not match the standards. UltraHD today is to be thought of as 1080p HD scaled up to an UltraHD panel, and that’s about all.



Playing HD Audio Files in Real Life

This is a follow-up post on the topic of HD Audio, the first being posted on June 3, 2014.

HD Audio, for the purposes of these posts, is defined as higher than CD bit depth and sampling rate, with no compression assumed. So anything higher than 16/44 (16 bits, 44.1KHz sampling rate) qualifies, and it's a "higher the better" game. The typical HD Audio file you can download is 24/96, but there are some as high as 24/192. The file formats are .wav, .aif, ALAC/Apple Lossless, and .flac.
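For a sense of scale, here's a quick sketch of the raw data rates involved for uncompressed stereo PCM at the rates just mentioned:

```python
# Raw (uncompressed) PCM data rate in bytes per second.
def pcm_rate_bytes_per_sec(bits, sample_rate_hz, channels=2):
    return bits // 8 * sample_rate_hz * channels

cd = pcm_rate_bytes_per_sec(16, 44100)   # CD-quality stereo
hd = pcm_rate_bytes_per_sec(24, 96000)   # typical HD Audio download

print(f"CD 16/44.1 stereo: {cd} bytes/s")    # 176,400
print(f"HD 24/96 stereo:   {hd} bytes/s")    # 576,000
print(f"HD is {hd / cd:.2f}x the CD data rate")
```

A 24/96 stereo file carries a bit more than three times the data of the same material at CD resolution, which is why lossless compression like FLAC is popular for these downloads.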

How do you get to play those high res audio files now that you have one? Recall that the whole point of HD Audio is unencumbered frequency response up into the ultrasonic range, low noise floor, and lower distortion.

Fundamentally the goal, then, is to get a high-rate audio file, say 24/96, to play in your system without degradation, without re-sampling that data to any other rate or bit depth, and get the resulting audio all the way to your ears without any additional limits to high frequency response, noise floor, or distortion.

Taking the signal chain one part at a time: speakers are mostly analog, passive devices, so there's not much of a noise issue with the speaker itself. Not many speakers have much response up into the 48KHz range, or even the high 20KHz range, and some that do aren't even spec'ed into that range, so it's a bit of a question, and quite hard to verify. Sure, you can drive a 40KHz test tone into your speakers at low levels and try to measure it, but even my Earthworks M30 measurement mic is only guaranteed flat to 30KHz. I'm sure there's some output at 40KHz, but I'm not sure how much roll-off there is, so if I were to measure something at 40KHz, or 48KHz, I'd have little idea of how much of the level (or loss) is the speaker and how much is the microphone. Pretty tricky stuff up there in the ultrasonics, and that energy is very directional: with a wavelength at 48KHz of .278″, anything measuring .07″ becomes a significant acoustic element.

The noise floor for speakers is really determined by the driving electronics and the acoustic noise in the room, and while no power amp has a true 24 bit dynamic range, most falling 30dB to 40dB short of it, that's not really the limiting factor here. There is no room anywhere that is quiet enough for a real 24 bit dynamic range (144dB), which would literally extend from the threshold of pain to the threshold of hearing. A listening room will have a noise floor around NC20, if it's quiet. NC figures include weighted curves that compensate for progressively reduced low frequency hearing ability, and thus an NC20 room can have real noise above 50dB SPL at 63Hz. NC figures make room noise look good. Fortunately, music doesn't actually require that kind of dynamic range, and any music recording will fit (mostly) within the bounds of a decently quiet listening room with a system that can reproduce somewhere around 105dB SPL. And that's a lot of systems. But none come even close to 24 bits of dynamic range capability. In reality, a room's dynamic range is more like 14 bits, if you consider a maximum SPL of 105dB and a noise floor at 20dB SPL.
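The bits-to-decibels arithmetic used above is worth making explicit. The standard rule of thumb for linear PCM is roughly 6dB of dynamic range per bit:

```python
# Rule of thumb for linear PCM: ~6.02 dB of dynamic range per bit.
DB_PER_BIT = 6.02

def bits_to_db(bits):
    return bits * DB_PER_BIT

def db_to_bits(db):
    return db / DB_PER_BIT

print(f"24 bits -> {bits_to_db(24):.0f} dB")             # ~144 dB
print(f"16 bits -> {bits_to_db(16):.1f} dB")             # ~96 dB

# A quiet room: ~20 dB SPL noise floor, ~105 dB SPL practical ceiling
print(f"105 - 20 = 85 dB -> {db_to_bits(85):.1f} bits")  # ~14 bits
```

Run the numbers and the room's 85dB window comes out at about 14 equivalent bits, which is where the figure above comes from.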

One more speaker aspect to be aware of: speakers generate the highest amount of distortion of any element in the system. It's unusual to find speakers that perform with under 1% THD at all levels and frequencies, which sort of disqualifies them as a pure-pass element for 24 bit recordings, or frankly 16 bit recordings. But they are what they are, and we have to have them.

The amps driving the speakers can be separates or an AVR; both can work, and the demands are no greater for high-res audio files than for anything else, especially considering the room and speakers. A typical amp has an optimistic 18 bit dynamic range, and having an amp that passes 48KHz well isn't a big deal unless it's some form of digital amplifier.

So far we’re limited to 18/96 by the amp, 14/96 by the room and then there’s the speaker with an unknown top end, and fairly high distortion.

Now it gets interesting. If you have a home theater system, you're fortunate enough to have 5.1 channels. Most HD audio is two-channel stereo, but there is a growing number of surround versions. We'll come back to this, but for now, let's say you have a two-channel stereo file. That's fine, we'll just play it in two-channel stereo.

The next challenge is to get those HD audio files turned back into analog audio, and to do that you'll need a DAC (Digital to Analog Converter). A stereo DAC capable of playing those files can be had for anywhere from $149 to $3500 and higher. They usually take the form of a device that you connect to a computer and play out through USB. They're fine, but if you're already the owner of a recent home theater AVR you might already have what you need. Many products, for example AVRs from Denon, make their on-board DACs available to play audio files either streamed to them via a network or DLNA server, or played directly from a USB memory stick. Some units can even play 24/192 files.

There are lots of ways to get HD audio out of your computer and over your network to your AVR including various player software, NAS devices, etc. We’ll leave the somewhat weird area of which playback method sounds the best to those that obsess about that sort of thing for now.

If your AVR has room EQ or auto calibration, we have a little conundrum here. We have an HD file at 24/96, but many AVRs run room correction/auto-cal at 24/48. There are reasons for this, actually several. First, it takes quite a few processor clock cycles to run the calibration filters for all 6 or 8 channels, and if we double the sampling frequency, we double the processor overhead required to do that. So to keep at least some DSP available for other things, like surround decoding, AVRs usually process for auto-cal at 24/48. There's also a bigger question. If we did let an auto-cal system work at 96KHz, what would it do above 20KHz? There's no directly audible information there, so what would the filters do? My guess is that we'd just want them to pass ultrasonics, flat, but there's an opportunity here to equalize the ultrasonically anemic speaker. The problem is, equalize to what target curve? Lots of questions, no answers, so the folks at Audyssey, as one example, just don't do it.
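The "double the sampling frequency, double the overhead" point can be sketched with simple arithmetic. A FIR correction filter costs roughly (taps × sample rate) multiply-accumulate operations per second, per channel; the tap and channel counts below are illustrative assumptions, not any particular AVR's actual numbers.

```python
# Rough DSP-load model for FIR room-correction filtering.
# Cost = taps x sample_rate x channels multiply-accumulates per second.
def fir_macs_per_sec(taps, sample_rate_hz, channels):
    return taps * sample_rate_hz * channels

# Assumed: 512-tap filters on 8 channels (illustration only)
at_48k = fir_macs_per_sec(512, 48_000, 8)
at_96k = fir_macs_per_sec(512, 96_000, 8)

print(f"48KHz: {at_48k / 1e6:.0f}M MAC/s")
print(f"96KHz: {at_96k / 1e6:.0f}M MAC/s (exactly double)")
```

Whatever the real filter lengths are, the scaling is linear in sample rate, so running auto-cal at 96KHz leaves half as much DSP headroom for everything else.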

There are, however, room EQ systems that do operate upward of 48KHz. The miniDSP line comes first to mind, with processors that have an internal 24/96 or better structure. The devices themselves are economical, but using them is not. There's no way to insert them into an AVR's signal chain unless it has pre-amp outputs, and then you'll be adding external power amps. And setup is a very manual process for which you'll need computer software, a measurement mic, and a boatload of patience. Not to say the results aren't good, it's just a rougher road to get there. Most won't adopt external DSP just to get room correction up to 24/96.

That means you have a tough choice to make: either you hear your file without modification all the way to your speakers and give up room calibration, or you keep your room calibration (which is definitely audible in frequency bands that include the original music) and accept the down-sampled version of your track. This is just the kind of choice that will drive some people nuts. They really want to hear all 24 bits at 96KHz, so they choose to kill their AVR's auto-cal with a "Pure Direct" mode. That's fine, if your speakers and room are pretty good to begin with. The way to check is to compare, as best you can, what your system sounds like with and without calibration. If there's a radical difference, and auto-cal improves things a lot, your choice has just been made: leave the auto-cal in and take the down-sampling that goes along with it; you'll have the better sound quality, and a better experience. If, however, auto-cal makes less difference, or very little, then you have a very good room and excellent speakers indeed, and could effectively live without calibration. You can go Pure Direct and play away all the way to 24/96 (ignoring the above-mentioned speaker, room, and amplifier limits).

Just so we understand: if you did decide to get one of those glossy $3500 DACs and plug it into your AVR, what you'd have accomplished is playing HD audio files at full resolution, converting to analog, then shoving it all into the AVR, which re-digitizes the signal at 24/48 and goes to work processing it. I question the efficacy of such a rig. More logically, you'd want that DAC plugged into a similarly glossy and expensive preamp and power amp combination. Unfortunately, this grouping will probably limit you to 2-channel stereo. It's unfortunate because the difference between 2-channel stereo and 5.1 channel surround is unmistakable, whereas the improvement in sound quality from HD Audio isn't nearly so obvious to every listener.

Perhaps the best way to get unadulterated 24/96 to your eardrums is via a dedicated DAC and headphones. You still have the problem of knowing the total response of the system, but your chances of getting all that low noise, low distortion, ultrasonic energy to your ears without changing it much are higher when you don't have speakers, rooms, and room correction to deal with. Not to say that headphone EQ is invalid, it's actually fantastic, but you end up with the same problems: even the best headphone EQ is not likely to pass ultrasonics. So take your pick. Great headphones with low distortion and ultrasonic frequency response are available, and some are not terribly expensive. Even the mildly exotic USB DACs have capable headphone amps built in.

We are left to ponder this question: with the limited dynamic range, speaker distortion, and limited high frequency response of my system, will I hear better sound with HD Audio files? That is the big question. Some will answer absolutely in the affirmative, some will be more undecided. You won't know until you try some. If you've read my previous post on HD Audio, you're a big step closer to getting some real HD Audio material to try, rather than some up-sampled or re-digitized analog material encapsulated into an HD Audio file. Those that originate true HD Audio material are driven by a philosophy that pushes them toward high quality recordings, done carefully and with excellent equipment. Those recordings would be good regardless, but it seems that the motivation of producing real native HD Audio material does push the recording engineer to do his best, and there's no debating the audible benefit of that. Perhaps that's the first step to appreciating high resolution audio.

This just in: if you read my Denon Teaser post, you know there are new AVRs about to be released, and we have high hopes for some very cool new features. Among those rumored are 32 bit, 192KHz DACs and internal processing. There simply are no details to be had, no idea if it's true…or why anyone would want 32 bit DACs (that's a theoretical 192dB dynamic range…which, if you could reproduce it, would cover a dynamic range from the threshold of hearing to the clipping point of air, where the rarefaction peak becomes a total vacuum, and hearing is permanently destroyed in a millisecond). There may be a point to 32 bits for internal DSP functions…but who knows. As a Denon dealer, I'm watching this closely and will confirm or debunk the rumor as soon as I have hard data.


Denon AVR Teaser

Aren't rumors fun? We're taking a break from our HD Audio discussion to bring you this tidbit.

Denon's new line of AVRs just premiered here, and we are expecting the top end models to be released in the next two months. However, from France comes an announcement/rumor of some of the capabilities of these yet-mythical AVRs, including these:

7.2 channel Dolby Atmos processing, with 13.2 channel pre-outs on the X4100
9.2 channel Dolby Atmos processing on the X5200 and X7200, with 11 speaker outs and 13.2 pre-outs
32-bit DACs capable of 192KHz sampling rates
150Wpc for 11 speaker outputs on the new flagship model X7200

Now, these are the European releases; the US versions may not be identical.

However, it wouldn’t be a bad idea to start hunting for places to put more speakers. Looks like it’s going that way, if even part of this rumor is true.

High Resolution Audio – and how to get it

What defines High Definition Audio?

Thanks to Neil Young, the world is more aware of HD audio than ever before. His Kickstarter project, Pono, brings a little player to us capable of dealing with 24/96 audio files just fine, thanks.

If we were to zoom out for a wide establishing shot of HD Audio, what would we see? Well, the driving force behind high resolution audio is the desire for something better than the CD. Remember the CD? It’s well over 30 years old, and promised “perfect sound forever”. The technology of the time was pushed to the limit with 16 bits at 44.1KHz. In fact, those were optimistic 16 bits. A good part of the original CD catalog came from analog tape masters that were re-released on CD.

So let's start there, with analog tape. The Germans invented it in the late 1920s, and made machines and paper-based tape in the 1930s. We've had it since the late 1940s, when Major Jack Mullin sent two German Magnetophon Tonbandgerät machines back home along with 50 reels of German tape, then set about re-engineering the whole mess into a working product. After a demo of one of the early Ampex prototypes, Bing Crosby got excited and threw the 6-man company a $50,000 order for machines, and professional tape recording was born. The machines were mono, full track (recording a single track across the full width of 1/4″ tape), and had their issues. So did the tape. But in just a few years we had stereo machines that could almost make it to 20KHz, with a signal to noise ratio somewhere in the mid-50dB range. That's 3% THD to noise, or a total dynamic range of 10 "optimistic" bits, with frequency response equivalent to a digital system sampled at about 38KHz.

Later improvements in tape, heads, and electronics pushed frequency response to 20KHz, perhaps a few KHz more, and pushed distortion and noise downward a bit, but it never really got much past about 72dB overload-to-noise, or about 11 bits at 40KHz. And it wasn't a "flat" response; there were roll-offs at the extremes, and various bumps and wobbles, anything but ruler-flat. Response was spec'ed at +/- 3dB, after all. The Chicago-based Magnecord company advertised special head "pole pieces", an add-on that would extend the response of their "professional" machine all the way to 15KHz.

The masters made with analog tape machines were transferred to lacquer, then pressed into vinyl, so what we got on records made from tape masters could never be better than the tape, and typically was a few copy generations away from the original, each copy adding 6dB of noise and taking away one equivalent bit of dynamic range. What's a little complicated about this is that the maximum level that could be recorded on tape was dependent on frequency, and as you got to higher levels, intermodulation distortion came up fast, so in practice it was touchy to get really hot levels on tape until better tape came along. "Better" meant the saturation point, where tape becomes highly non-linear, was raised several dB. And the maximum level that can be recorded on lacquer is also highly frequency dependent, an overlay, if you will, on top of analog tape's limitations.

So we had tape at its best at 12 bits (if we ignore distortion), each copy reducing the s/n ratio by 1 bit, and releases on records at their best at 9 or 10 bits of equivalent dynamic range. We had distortion figures at high levels that went well into the single digits all the time, and we had speed variations that translated to wow and flutter. All in all, not exactly a duplicate of the output of the console. In fact, even with the application of Dolby A-type and later SR noise reduction, it wasn't a very difficult task to tell which was tape and which was live (sorry, Memorex).
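The generation-loss arithmetic above reduces to a one-liner: each analog copy adds about 6dB of noise, which costs roughly one equivalent bit.

```python
# Generation-loss sketch: each analog tape copy adds ~6 dB of noise,
# i.e. costs roughly one equivalent bit of dynamic range.
DB_PER_BIT = 6.02

def equivalent_bits(snr_db, generations=0):
    return (snr_db - 6 * generations) / DB_PER_BIT

master = equivalent_bits(72)      # best-case tape: 72 dB overload-to-noise
release = equivalent_bits(72, 2)  # two copy generations later

print(f"master:   {master:.1f} bits")    # ~12 bits
print(f"2nd-gen:  {release:.1f} bits")   # ~10 bits
```

Start with the best-case 72dB master and go two generations down the dubbing chain, and you land right at the 10-bit territory described above for record releases.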

Enter digital audio recording. Suddenly we had 16 bits at 44.1KHz. That means response flat to 22KHz and a theoretical 96dB dynamic range (pre-dither), uniform across the entire band. Compared to analog, it’s easy to see why “perfect sound forever” was the mantra. And, since the copy or duplication process was lossless, each CD was a bit-perfect copy of the master. In fact, during recording sessions, many recording engineers were fooled into thinking they were hearing the console output when they’d accidentally left the monitor selector in the tape-return position and were hearing a complete digital A/D > D/A conversion chain.
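Both headline numbers for 16/44.1 fall straight out of two textbook formulas: bandwidth is half the sampling rate (the Nyquist limit), and dynamic range is about 6.02dB per bit. A minimal sketch:

```python
def nyquist_khz(sample_rate_khz):
    """Highest frequency a sampled system can represent: fs / 2."""
    return sample_rate_khz / 2

def dynamic_range_db(bits):
    """Theoretical dynamic range of N-bit quantization, ~6.02 dB per bit
    (ignoring dither and real-world converter noise)."""
    return 6.02 * bits

print(nyquist_khz(44.1))            # 22.05 -> "flat to 22KHz"
print(round(dynamic_range_db(16)))  # 96 -> the famous 96 dB
```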

So why do we need better than that? Good question, one which this post won’t even try to address; there are some solid reasons, and as usual some not-so-solid ones. But some thought we did, so after a few years we had digital recording systems that would do 48KHz, 50KHz, and a smattering of other sampling frequencies and data structures. This caused a bit of a problem, though, since if you wanted to actually sell your records it all had to end up at 16/44.1 somehow, and sample-rate conversion added 3dB of noise, minimum. So most of the first two decades saw recordings done at 16 bits, 44.1KHz. [Why 44.1KHz? Early digital recording methods used video tape recorders to handle the data. NTSC video recorders, when modified for monochrome video, ended up at a frame rate of 30 fps and a field rate of 60 fields per second. With 525 lines per frame, the most convenient even multiple for 16-bit stereo data within that frame ended up at 44.1KHz. Consumer systems that recorded on home VCRs made to record color video had frame rates of 29.97 frames per second, which warped the sampling rate down to 44.056KHz, which resulted in a slight but imperceptible upward pitch bend when transferring those recordings to a commercial CD.]
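The video-recorder arithmetic behind 44.1KHz can be spelled out. The commonly cited figures (3 stereo samples stored per video line, 245 usable lines per field) are an assumption here, not stated in the text, but they make the numbers land exactly:

```python
samples_per_line = 3          # stereo sample pairs stored per active video line
active_lines_per_field = 245  # usable lines of a 262.5-line NTSC field (assumed figure)
fields_per_second = 60        # monochrome NTSC field rate

fs = samples_per_line * active_lines_per_field * fields_per_second
print(fs)  # 44100

# Color NTSC runs 0.1% slow (59.94 fields/s, 29.97 fps), dragging the rate down:
fs_color = fs / 1.001
print(round(fs_color))  # 44056 -> the 44.056KHz figure in the text
```

The 44100/44056 ratio is also exactly the "slight but imperceptible" pitch shift mentioned above: about 0.1%, or under 2 cents.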

Then two things happened: the Internet, and the computer. As computers got faster, storage got larger, and the Internet propagated, the doors were open to distribute audio in new ways. The 16/44.1 lock-hold was loosening. Then (compressing another 10 years into a line of text) we had DVDs and SACDs, both capable of high-rate audio. So we could, if we wanted to, record at some other sampling rate and bit depth, and still sell the result and have it playable at home.

Now we have high resolution audio downloads of everything from freshly recorded music at native 24/96 and even higher, to resampled old digital masters, to “remastered” analog tapes. So now the question we need to ask is, “What does it take to get higher quality music?” Starting with newly digitized analog masters, we can answer that no high-res straight digital transfer of an analog master can improve on the analog master. We have to say “straight transfer” to eliminate digital processing like noise reduction, as one example. All a good digital transfer can do is copy the analog tape faithfully, and in doing so copy its flaws. And it turns out, digitizing an analog tape isn’t very difficult to do even at 16/44.

Then we have up-sampled 16/44 masters. Taking that one step at a time: if we take 16-bit data and try to make 24-bit data out of it, we don’t really do much. Assuming we match full-scale levels, we end up with an exact copy of the 16-bit data with 8 bits’ worth of noise below it. If we up-sample 44.1KHz data, we have to “create” new words between the old ones. We can do that by simple interpolation, and get the data, but we haven’t added anything. What we end up with is a faithful copy at a higher sample rate and bit depth that won’t sound any different, because there’s no new data.
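A toy demonstration of why this is so: padding 16-bit samples into 24-bit words just appends zeros below the original data, and shifting back recovers the 16-bit samples bit-for-bit. The sample values here are arbitrary.

```python
samples_16 = [12345, -20000, 0, 32767]  # arbitrary 16-bit sample values

# "Upconvert": shift each sample up 8 bits to occupy a 24-bit word.
# The new low-order 8 bits are all zeros, carrying no information.
samples_24 = [s << 8 for s in samples_16]

# Shifting back down recovers the original 16-bit data exactly.
recovered = [s >> 8 for s in samples_24]
print(recovered == samples_16)  # True

# Likewise, up-sampling by interpolation creates new words, but every
# new value is computed entirely from the old ones:
def interpolate(a, b):
    return (a + b) // 2  # midpoint sample: nothing new was recorded

print(interpolate(12345, 32767))
```

Real sample-rate converters use much better interpolation filters than a midpoint, but the principle is the same: the output is a function of the input, so no new musical information appears.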

Lastly we have recordings that originated at higher resolutions like 24/96 or higher. It’s in that ethereal zone that we start to see actual improvements in resolution when recording an acoustic environment, assuming the recording engineer used high quality microphones, mic preamps, and A/D converters (actually, a really big assumption, since the common 24 bit A/D converter has a real noise floor at 20 bits). It’s in this area that we must focus our attention if the high-res experience is to be realized at our eardrums.

Clearly, the weakest link is getting the original acoustic environment recorded at true high-res. And for the consumer, the challenge in buying high-res is verifying that the expensive files we purchase and download have had their entire production chain optimized to maintain the original resolution. That’s not always an easy question to answer, but it does clearly eliminate up-sampled digital masters and digitized analog masters. Oh, and it eliminates vinyl, sorry. As much fun as vinyl is (and I do actually play it and enjoy some of it), it’s not high resolution by any definition.

OK, we have ourselves a bit of real high-res audio in a file, now what? Hang on, and keep watching this blog. We’ll talk about what to do to get that high-res audio to your ears in a future post.

First New Denon AVRs for 2014 are out this month

The first batch of new Denon AVRs is released!

Been holding your breath for these? Me too! Last year’s offerings from Denon were really great, and remain so, but the first three of their new S-series have added a few things…nice things…that we’ve wanted for a while, and some that we will use perhaps in the future…perhaps not.

All three have:

HDMI 2.0
4K Ultra HD 60 (something we may not have for a while if ever)
4:4:4 color sub-sampling pass-through (nice, if you could find a source)
Built-in Bluetooth – sweet!

Models Quick Take:
AVR-S900W $649.99 MSRP
7.2 (two sub outs) 90Wpc
Built-in WiFi
High-res audio formats (24/192 Flac/Wav)
8 HDMI ins, 2 outs

AVR-S700W $499.99 MSRP
7.2 (two sub outs) 75Wpc
Built-in WiFi
High-res audio formats (24/192 Flac/Wav)
6 HDMI ins, 1 out

AVR-S500BT $299.99 MSRP
5.2 (two sub outs) 75Wpc
5 HDMI ins, 1 out

These will all be available this month, as are deals on the outgoing models. Call us for details! And, more new AVRs are due out in July and August…watch this space.

A full comparison chart is here, as a pdf. You can build your own on the Denon web site, of course.