Computer Nightmares

I’m writing this on my “trusty” 15″ MacBook Pro. It’s the first thing I’m writing on it since a major repair: a new logic board. All together, my MBP has been down for over a month, and it’s my primary machine for, well, everything. See, my wife and I both have 2011 MacBook Pros, and they are identical in every way, including how they both failed. If you google a bit, you’ll find that these computers have a manufacturing defect: their GPU (a video “card” that’s really a chip) is soldered to the logic board with lead-free solder, which degrades, cracks, and grows feathery crystals that short between contacts. Over time, most of them fail catastrophically, rendering the logic board pretty much artwork.

There’s a class-action lawsuit filed against Apple about this, as repairs run between $500 and $1200, on a piece of computer hardware that cost nearly $2000 new. Yes, I’ve joined the suit. But I also fixed my own computers.

Since my wife’s MacBook failed first, I pulled her logic board out and sent it away to be rebuilt/exchanged. During the time it was away, I put her hard drive into my MacBook and she continued her work as if nothing was wrong. I, unfortunately, did not. My back-up laptop is a Windows XP machine. Yes, I still have some hair left on my head.

Just about the time her logic board came back and I got her MacBook up and running, and mine running again, mine failed too. I had it for about 3 days, then off its logic board went for the same repairs, and I just got it back and working. It’s been a long month!

During that time I discovered a few things worth sharing, even with those reading this with a primary interest in Home Theater. You don’t really know how much or how little you have backed up until you lose your computer completely. In my case, I ran backups using an external drive and Apple’s Time Machine. So even if my computer never ran again, I could do a complete restore to a new unit. Great, but what do I do if I don’t want a complete restore, don’t have a new computer, and just need some files? Well, that backup is useless. I pulled the main HDD out of the MacBook Pro, put it in an external drive bay, and mounted it on my Mac Pro so I could access some files, but that didn’t get me my full email because my Mac Pro is older, doesn’t run the current OS version, and my mail files wouldn’t import.

For email, I used my iPad, which I love/hate for many reasons. First, it’s my development platform for our Platinum Control system, so it can’t just wander off with me. Second, typing on the screen…ugh. I never do well with it, even with autocorrect. My USB keyboard is also tiny, and disables autocorrect, so other than being good for quick note-taking, it’s really limited.

Then there are applications. I never realized exactly how many I use that don’t exist on our other machines! Makes for some interesting license swapping, to say the least.

Then there’s my iTunes library. It’s about 140GB, not exactly “portable”, and while the MacBook Pro was down, it was anchored to the Mac Pro, which IS an anchor. Sorry, cloud backup for 140GB just ain’t in my world.

My backup strategy is still developing. I’ll still do the usual routine Time Machine backups, but I’m also keeping a copy of other critical files, license keys, emails, etc., on another drive (formatted NTFS, by the way), and that will hopefully make access easier next time. I’m also working on automating that, because I’ll never remember on my own. And, finally, I’m installing critical apps and tools on more than just my laptop so I can deal with things on other hardware and, hopefully, never have to use webmail EVER again!

Next move is an upgrade. I’m doing an SSD plus a shared HDD in the optical drive bay, and attempting a triple boot: OS X (latest version), Win 7, and OS X 10.7.5, so I can still use older software every so often. You can’t triple boot with Boot Camp alone, so this should be interesting!

Wireless HD Audio streaming: 24/192 over WiFi!

Every so often I trip over a really cool product to add to our line, and today was one of those trips.

It’s called Voco. It streams audio. But that’s just the beginning; it does so much more! For example, the holy grail of home audio streaming has been HD audio: content at 24/96 or higher. The Apple solutions we’ve been using are great, but down-sample to 16/44.1 or 16/48. Many other solutions are around, but many are kind of half-baked, or not flexible. Many of our Denon AVRs, for example, can look out at a DLNA server and grab 24/96 files and play them…but they’ll be down-sampled to 24/48 unless you turn Audyssey off, a choice I’m personally not prepared to make.

Today I found Voco. Not only will their products stream audio files from pretty much any sort of library, from iTunes, to DLNA, to a NAS device, even in the cloud, but one of their products can happily play HD audio files up to 24-bit/192kHz! Now that may not be a first-ever, but what’s great is that it’s something with a real front end: a user interface that works, is easy to use, and integrates very well. Internet Radio, Pandora, even YouTube (video too!) are supported. You can navigate the system with an app that includes voice commands, and each Voco device can become a WiFi hot spot, filling out your home WiFi coverage if necessary.

I’ll be writing more about Voco soon, but for now, it’s a first step to a wireless, distributed high resolution audio system with centralized storage. And they’ve thrown a lot of extra functionality in to boot.  How great is that?

I love to say “Yes, we can!”  We’ll be installing Voco systems soon.

Hi-Res Audio…it’s about definitions

A recent post by Dr. AIX, our favorite Hi-Res Audio Evangelizer, has motivated this post.  To see where this all came from, go here, then here, then here. Then, please come back.  I thought that perhaps blogging in numbers might get some industry attention.  Ok, I’m not that naive, but I humbly submit this, what some may term a diatribe, by way of support.  Can there be a humble diatribe? Ummm….

Years ago I learned a principle of marketing that stated simply that for a new product to succeed and rapidly penetrate the market it must offer at least a 3-fold perceived improvement (and preferably 5-fold) over the product it replaces. If there is no predecessor, the job is easy. But if there is an existing product, it gets down to what the new product offers of value over the old, and at what cost.

When the CD was introduced it offered a list of “improvements”: smaller size (only a partial plus; it’s a minus for artwork), longer play time, more resistance to wear, easily and quickly accessed tracks, and of course, “perfect sound forever”, thank you. The price point was initially 1.5–2× that of vinyl records, with players entering the market at over $1000, dropping to half that in about a year. The CD achieved market penetration faster than anticipated, eclipsed and replaced several existing analog formats, and, with a few exceptions on the edges of the bell curve, was well received in the general market.

Now comes Hi-Res audio. What is the predecessor? The biggie now is AAC and MP3 online purchase files, as CD sales have collapsed. What does HR offer? Well, higher-than-CD quality is the premise, but it often doesn’t achieve that because of how HR is defined. We’ll come back to that little bugaboo. It certainly should beat most flavors of MP3, and likely most AAC as well, even if the HR file doesn’t beat a CD. But what else? Where’s the 3-fold improvement?

Is it more convenient than current online purchases? No. It’s about the same as CD when buying physical-media HR audio, and the process of online purchase of file downloads is a little more trouble because, well, it ain’t iTunes. Most of the lack of convenience comes during playback.

Is playback at least as easy? No, for several reasons; mostly, it doesn’t play on portable devices…at least not the most popular ones, though that could change. It seems like it’s not on Apple’s close-range radar, and that’s the big share of online music purchases. iTunes won’t play HR natively, at least not all the way to the output jack, and the fixes to make it work imply HR hardware too. So no, playback actually is more difficult.

Will it take an investment in hardware and possibly software to handle the files? Mostly yes, though with physical media we do have disc players that can handle it. Disc players. Yes, I remember those. Been a while since I actually put a music disc in a player…you?

How about the immediate gratification of an online purchase? HR files are by nature larger and therefore slower to download. The wait isn’t interminable, but it probably taxes today’s youthful short attention spans.
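To put some numbers on that, here’s a quick back-of-the-envelope sketch in Python. The 256Kbps AAC figure is just a typical assumption for a store download, not any particular store’s actual spec:

```python
# Rough download-size comparison for a 4-minute stereo track.

def stream_bits_per_second(sample_rate_hz, bit_depth, channels=2):
    """Raw (uncompressed) PCM bitrate in bits per second."""
    return sample_rate_hz * bit_depth * channels

MINUTES = 4
SECONDS = MINUTES * 60

formats = [
    ("24/192 PCM", stream_bits_per_second(192_000, 24)),  # "HR" download
    ("16/44.1 PCM", stream_bits_per_second(44_100, 16)),  # CD quality
    ("256k AAC", 256_000),                                # typical lossy purchase
]

for name, bps in formats:
    megabytes = bps * SECONDS / 8 / 1_000_000
    print(f"{name:>11}: {megabytes:6.1f} MB for a {MINUTES}-minute track")
```

The 24/192 file lands around 276MB against under 8MB for the AAC, which is the gratification gap in a nutshell. (Lossless compression like FLAC would shave the PCM figures roughly in half, but the ratio stays dramatic.)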

So as you can see, the 3-fold perceived improvement really comes down to sound quality. And that means, if there is any hope of HR Audio’s success in the marketplace, that quality has to be carefully defined so that it is easily and reliably heard, and so becomes the key improvement. And a promise of that quality should be easily discerned by the consumer, then his expectations reliably fulfilled. Am I using “reliably” three times in the same ‘graph? Yup, I am! The quality improvement is something that must be relied upon, or we don’t have a product.

The JAS Hi-Res Audio logo is apparently an attempt at such an indicator. Yet the qualities that stand behind that seal have not been clearly defined. Basically, we’re only dealing with two things, sampling rate and bit depth, and their effects on audio quality. Yet, while the JAS/Sony spec does deal with rate and depth, it doesn’t adequately specify the results of either, and in doing so opens the qualification of HR Audio to just about any uncompressed file sampled at 96kHz or higher, regardless of the origin of the actual audio material, and without regard to real bit depth (as opposed to just a 24-bit word).

Not only does this sort of thing accomplish nothing in terms of advancing HR audio in the market, it actually goes the other way. It in effect ensures that consumers’ expectations of improved audio will routinely be unfulfilled, the logo will eventually have no meaning, and the additional cost and trouble of the format will assure its failure.

What’s needed is a clear differentiation, an “always-gotta-be” improvement in quality. If we’re writing specifications that permit recordings to qualify for a Good Housekeeping Seal of Approval, then those specs had better actually do that, or the seal is worse than meaningless; it becomes a misleading mockery. And if anybody who can generate a 24/96 file qualifies for that seal, regardless of actual audio quality, we haven’t done any good either.

When we specify a frequency response, there are two parameters: a frequency range (say, 20Hz–40kHz) and a level variation tolerance (±3dB). Take the second parameter away (like the JAS specs do) and you don’t have a frequency response, because you haven’t defined level. Any little bit of 40kHz, even a signal 50dB down, could meet their so-called specs. I would advocate complete specification of any parameter.
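For the curious, here’s what checking a complete spec looks like, sketched in a few lines of Python. The function and the sample response are purely illustrative, not any standard’s actual test:

```python
# Check a measured response (frequency in Hz -> level in dB, relative to
# some reference like 1 kHz) against BOTH a range and a tolerance.

def meets_spec(response_db, f_low=20, f_high=40_000, tol_db=3.0):
    """True only if every measured point inside the range stays within tol_db."""
    in_band = {f: db for f, db in response_db.items() if f_low <= f <= f_high}
    if not in_band:
        return False
    return all(abs(db) <= tol_db for db in in_band.values())

# A response that technically "reaches 40 kHz" but is 50 dB down there:
droopy = {20: -1.0, 1_000: 0.0, 20_000: -2.0, 40_000: -50.0}
print(meets_spec(droopy))   # fails: the 40 kHz point blows the tolerance
```

Leave the tolerance out, as the logo spec effectively does, and the droopy response “qualifies” just as well as a genuinely flat one.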

So not only do we need to work on specs for frequency response and noise, we need to put more than a little emphasis on dynamic range. A 24/96 file that has been deliberately peak-limited, clipped and crunched, bent, spindled and mutilated is still no better than the same recording in 128Kbps AAC. In fact, a popular independent artist recently released some of his catalog in “high quality audio” files, and no doubt sold a few. Of course, they were identical in every audible way to the original not-so-high-quality files, so what was the point?

The advantage of HR audio has to be clear, easily heard, and worth the trouble and cost. It literally must be a Gold Standard against which every other form of audio can be compared. Recalling also that “content is king”, the HR material must be very mainstream. If listeners can hear their favorite artists in wide-bandwidth, ultrasonically extended, dynamically unprocessed HR form, they’ll soon despise the other forms, and we could have a positive trend. This will be one of the harder things to do, of course, because the mainstream is traditionally targeted elsewhere.

If the efforts to define HR Audio are vague or misleading, HR will remain a boutique format doomed to niche markets, and will cause further consumer confusion through logos that are worse than meaningless: they’re misleading.

Who Runs Your Speaker Wire?

On a recent job for a long-time friend and associate, I encountered a “pre-wire” job done by someone else. At the equipment location there were two holes in the wall, one at wall-switch level with a fistful of Cat5 pouring out, the other at outlet level with more Cat5 and a wad of 16/2 speaker wire. The job was to get the distributed audio system working in 4 zones: a bar area, a living room, a sun room, and a pair of speakers outside by the deck. The indoor speakers were already installed and connected; the outdoor speakers were not. After identifying each wire for the indoor speakers, I stepped outside to look for the wiring for the deck speakers. Nothing to be found: no pigtail, cover plate, access hole, mark, tape, nothing. Hmmm. The wire tracer easily located the entire strip of aluminum siding that had been installed over both speaker wires.

Questions! Turns out, the guy who installed the wiring was the carpenter, and he did so based on the third-hand advice of an AV tech who works at a high school. We were able to get the carpenter on the phone, and he described the approximate location of the outdoor wiring behind the siding. He was, of course, incorrect. It took an endoscope camera, a couple of small holes, and long surgical forceps to pull the wires out; then the holes had to be sealed, speakers mounted, etc. That’s a very short and easy description of a very long job.

I asked the carpenter about all that Cat5. The home has no network, no internet, no plans for either. He said that was recommended by the school AV tech “for iPod control”. I rolled it all up, zip-tied it into a bundle, and stuffed it back into the wall. We don’t need Cat5 for iPod control. Nobody does, or ever did.

The wire chosen for the speakers, as mentioned, was 16/2, but it had the distinction of being the most fragile wire I’ve ever seen. It was pretty much impossible to strip without nicking the conductors, exposing bare wire where you really don’t want it. And there was just enough excess at one outdoor speaker to get about 2″ outside the siding, so a splice had to be made; the other outdoor wire was easily 10′ long in a big loop, the free end of which couldn’t be pulled through the hole, so it was just cut off.

So, who do you want to install your low-voltage wiring? A carpenter guided by a school A/V guy was probably not the best choice. In fact, neither would be a plumber, nor an electrician. Surprised at that last one? Unless your electrician is specifically skilled in low-voltage wiring, he’s the wrong guy. Different wire, different installation requirements, different tools to some extent…different guy. And the guy installing the wire should probably be working from plans, something thought out on paper, if only a pencil sketch.

Of course, this all leads to today’s shameless plug for us, the low-voltage guys. It’s what we do, and specialize in. We don’t build walls, put up dry wall, bend conduit, or install light switches (we do have contractors we work with that do all of that of course), but we do install phone wiring, network, speakers and distributed audio, satellite and TV wiring, door bells and door cameras, security cameras, and more. We work from plans, know which wire to use, and where to pull it. We even take steps to make sure our wiring can’t be damaged by the other trades doing their jobs, and we test it after installation.

Might be worth a phone call for even your smaller jobs, just to see what we can offer.

How many fractions-of-a-channel do you have?

5.1, 7.1, 9.2, 10.2, 7.3?  Where are all those 0.x channels coming from?

At one time there were no 5.1-channel systems, not on film, not in the pro or consumer world. That was before 1987. Before then there was a bit of confusion as to how many channels a film soundtrack could or should have. On stereo optical film there were two real channels, with matrix processing expanding them to 4 plus a subwoofer derived from the others. On 70mm magnetic film there were six tracks, which could be deployed in several ways, one of which was 5 screen channels and one surround; another was 3 screen, two surround, and one LFE. And that meant your theater had to be reconfigured for whatever film was being shown, something some theaters would do, but pretty much isn’t going to happen in any home.

All that had to end, though, for many very practical reasons. So, at a SMPTE subcommittee meeting in October 1987, where various channel counts and plans were being knocked around, Tom Holman proclaimed they needed “5.1”. And everyone looked stunned. Huh?

5.1 is actually a bit of what Holman terms “marketplace rounding error”, because the LFE channel is actually .005 of a main channel’s sample rate, but 5.005 just doesn’t have the same ring as 5.1.

How do you take a marketplace rounding error and give it a life of its own? Easy, you just do. The .1 came to mean not just the LFE channel of a soundtrack, but also a subwoofer in your HT system. So, 5.1 would be 5 mains and one sub, 5.2 means 5 mains and 2 subs, 7.3 is 7 mains and 3 subs…and so on. Thus, the original marketplace rounding error lives and grows larger with each sub. The LFE channel remains a single audio channel at .005 of the sampling rate of a full-range channel (240Hz sampling for a 120Hz bandwidth vs 48kHz for a 24kHz bandwidth), even in today’s high-resolution soundtracks. It doesn’t become a .2 or a .3, really, ever, even if it’s split out to multiple subs. None of those extra subs adds even a Hz of LFE bandwidth; all the subs play the same .005. So all systems should really be referred to as x.1 systems, regardless of how many subs you have.
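The arithmetic behind the “rounding error” is simple enough to sketch:

```python
# The ".1" as a marketplace rounding error: the LFE channel's sample rate
# as a fraction of a full-range channel's.
full_rate = 48_000   # Hz sampling for a 24 kHz full-range channel
lfe_rate = 240       # Hz sampling for a 120 Hz LFE bandwidth

ratio = lfe_rate / full_rate
print(ratio)                              # 0.005
print(f"5 mains + LFE = {5 + ratio}")     # 5.005, market-rounded up to "5.1"
```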

But we all know that’s pretty much not going to happen, because we need a way to gloat over our many-subbed home theaters in a single forceful term. “I got me a 9.4-channel home theater!”

I don’t think there’s been a case of a rounding error multiplied that many times in audio ever before.

Welcome to new spam subscribers!

Just wanted to take a moment to welcome the latest batch of spammers to the blog. In case you didn’t get it, we do know when a spammer subscribes: the server logs IP addresses, and I’m the only one with the ability to post here anyway. When I see a whole batch of new subs with addresses at anonymous email services like Outlook or Yahoo, it’s fairly obvious what’s going on.

But welcome anyway, at least you’re a human and not a bot.  So, stick around!  Perhaps you’ll even learn something.

Interesting statistic: our server has blocked 689 spammers in the past two months. Cool that it works.

Intensity Stereo

(read the first ‘graph hearing the iconic “movie trailer voice”)

VO: In a world where the position of sounds in the stereo soundstage is artificially adjusted. In a world where every recording is an illusion designed to suspend disbelief…


Got a favorite stereo recording? Perhaps it’s that early-1970s vinyl release of a classic album by Santana where guitars fly back and forth between speakers. Or do you favor the wild ping-pong panning of vintage Jimi Hendrix? All of these were produced using artificially positioned sounds, typically from multi-track recordings. And though classical recordings are typically far more minimalist, even London Records’ “Phase 4” multitrack recordings of the early 1960s through the late 1970s were an admittedly abortive attempt at fully artificial “stereo” assembled from 10 to 20 tracks of mono sounds.

The positioning of a sound in stereo is typically called “panning” and is done on a mixing desk with the “pan pot”. All a pan pot does is adjust the intensity of a monophonic sound as it is split between the two channels. Center is represented by equal levels in both channels, full left or right sends no signal to the opposite channel, and anything in between can be dialed in as needed. And the result is a sound perfectly positioned between the channels anywhere the engineer desires.

Except that it’s not. All a pan pot does is control level, and in life, the position of a sound source is not that simple. For a given angle of incidence, the relative level at each eardrum is dependent on frequency, with a greater differential at high frequencies and less at low frequencies. Then there’s the difference in arrival time, which, slight as it may seem, has a very large impact on the perceived position of a source. The maximum time difference between our ears is around 640µs, and it obviously changes with the angle of incidence. So sensitive is our hearing mechanism to interaural delay that, using headphones, nearly complete panning can be achieved with delay alone.
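Where does that ~640µs figure come from? Roughly, the extra acoustic path around the head at 90° incidence divided by the speed of sound. The head dimension below is an approximate average, just to show the arithmetic:

```python
# Back-of-the-envelope maximum interaural time difference (ITD).
SPEED_OF_SOUND = 343.0   # m/s at room temperature
path_difference = 0.22   # m, approx. effective ear-to-ear acoustic path

itd_seconds = path_difference / SPEED_OF_SOUND
print(f"max ITD: about {itd_seconds * 1e6:.0f} microseconds")
```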

So the total picture involves time delay and a frequency-dependent level differential, which has traditionally been so difficult to do on an analog console that it just wasn’t done. Even today, with all the DSP anyone could dream of, every pan pot in the world is an intensity control only.
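If you’re curious what an intensity-only pan law actually looks like, here’s a minimal sketch of one common choice (constant-power; real consoles vary in the exact taper):

```python
import math

def pan_gains(position):
    """Constant-power pan law. position: -1.0 (full left) to +1.0 (full right).
    Returns (left_gain, right_gain); only intensity is adjusted, never timing."""
    angle = (position + 1.0) * math.pi / 4.0   # maps -1..+1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)

for pos in (-1.0, 0.0, 1.0):
    left, right = pan_gains(pos)
    print(f"pos {pos:+.1f}: L={left:.3f} R={right:.3f}")
# Center gives equal gains (0.707 each, i.e. -3 dB per side); full left or
# right sends essentially nothing to the opposite channel.
```

Note there’s no delay term anywhere in it, which is exactly the point of the paragraph above.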

As you might imagine, if we extend our soundstage to 360 degrees, the real-world panning algorithm becomes just a bit complex. The surround panner joystick is therefore also a level control only. And, it turns out, when played on speakers, stereo works pretty well with level-only panning, which we’ll properly call “intensity stereo”, because sound position depends on relative intensity only. It works because we’re playing stereo on two widely spaced speakers, which introduces a pretty fair scrambling of the interaural delay difference: each ear hears both speakers, with relative time and intensity differences applied by the room that effectively swamp out the subtle interaural differentials we might have heard were we in the original sound space.

The panning question is different in surround, with 5.1 speakers as a starting minimum. Remember my earlier post’s reference to the Bell Labs stereo experiments of the 1930s? They determined the absolute minimum channel count for good stereo sound positioning was three: Left, Center, and Right. The more speakers you have, the more accurately sound sources can be positioned. Bell Labs concluded that the ultimate array would be hundreds or thousands of speakers on a huge wire-frame grid. So, as we increase our channel count, intensity-only panning is all that is necessary, as the source is then located in physical space rather than virtualized by faking an intensity or timing difference between two speakers. Ah, that means high-channel-count music recordings have a more realistic soundstage! Yes, that is in fact the truth.

Going the other way, headphone stereo is the most sensitive to both intensity and timing differences. With just a little timing difference between channels, equal-intensity signals to both ears can seem to pan nearly completely left or right based on timing alone. This partially explains why binaural recordings seem so real in headphones, but not so good on speakers: binaural capture includes both frequency-dependent intensity differences and time delay differences. In fact, the intensity differences are relatively small in the mid-band and below.

What all of this means to us home theater or stereo-only enthusiasts is that if we can get our hands on real multi-channel recordings for our 5.1-channel systems, the effect can be very palpable, and much more defined and less ambiguous than simple stereo. For stereo, if we can reduce as many stray timing errors (reflections) as possible, our soundstage will contain at least some depth.

My earlier posts about the coming Dolby Atmos AVRs mentioned that system will add height to the equation, and do so by adding speakers either physically high or reflected off the ceiling from lower (more practical) positions.  And that will also be a very good thing.

When it comes to palpable sound positioning, I’ll very loosely paraphrase a cynical line Harrison Ford spoke in “Six Days, Seven Nights”, “If you want a sound there, you have to put a speaker there”.  If you don’t, the chances of locking a source to a position are pretty much nil outside of a head-locked sweet spot.

Why Bi(amp/wire) Speakers?

Pretty often you can say that if one is good, two is better. Works for wheels on a bike, earphones, cars if you have a spouse, beers, outboard motors…well, lots of things. So if one speaker wire is good, then two MUST be better, right?

Ahem…well…not necessarily. When it comes to wiring speakers, there are two popular, but very different, methods of adding wires to speakers: bi-wiring and bi-amping.

Bi-amping is fairly easy to understand, but it too comes in two flavors. The common one is where a two-way speaker has its crossover designed in such a way that the tweeter circuit and woofer circuit can be separated at the speaker, then each driven with its own amplifier. The amplifiers receive identical inputs, and can simply be Y-corded together. Their outputs are wired to the speaker terminals with separate runs of speaker wire, each attached to its own speaker connection: one for the woofer and woofer crossover, one for the tweeter and tweeter crossover. And there you have it…bi-amped speakers.

The second style of bi-amping still uses two amplifiers, but they are wired directly to the woofer and tweeter without any passive crossover between amplifier and driver. The crossover becomes an active device placed just ahead of the amplifiers and, hopefully, optimally tuned for the speaker. That’s where it gets a bit tricky. The crossover you bypassed was designed specifically for the speaker, so replicating it with an active device may not be quite so straightforward. You could end up inadvertently re-engineering the speaker with your active crossover, with some very different results. More on that later, but that’s the second, somewhat less common way to bi-amp speakers. It’s not usually done in consumer systems, but it is frequently found in pro systems. In fact, the TMH Tesseract speakers are tri-amplified with a fully active line-level crossover designed specifically for them.

Bi-wiring also comes in two flavors. The first and simplest is also almost completely pointless. You simply run two lengths of speaker wire from your AVR or amp to your speaker. The two runs are in parallel, tied together at the amp and again at the speaker. What you have accomplished is effectively doubling the cross-sectional area of the wire, cutting the wire resistance in half. That’s great, if you needed to do that, but you probably don’t. If you’ve used #14 for a short run, or #12 for a longer one, the wire resistance is already low enough that cutting it in half won’t matter. You will also have doubled the wire capacitance, but that’s completely swamped out by the amp and speaker already, so no real change there. You’ll have cut the wire inductance in half, but again, with even average cheap wire, that should already be a non-factor. Overall, if you’re thinking of bi-wiring this way, think again: find the next bigger wire gauge and just do one run.
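A little Python makes the point. The ohms-per-foot figures below are approximate published values for copper wire, and the 30-foot run is just an example:

```python
# Why paralleling a second run rarely matters: the resistance you're halving
# was already negligible next to an 8-ohm speaker load.
OHMS_PER_FT = {16: 0.00402, 14: 0.00253, 12: 0.00159}  # approx., copper, per AWG

def run_resistance(gauge, length_ft, runs=1):
    """Round-trip resistance of `runs` parallel two-conductor runs."""
    one_run = OHMS_PER_FT[gauge] * length_ft * 2   # out and back
    return one_run / runs

length = 30  # feet
single = run_resistance(14, length)
double = run_resistance(14, length, runs=2)
print(f"single #14 run: {single:.3f} ohms, bi-wired: {double:.3f} ohms")
# Both are a tiny fraction of an 8-ohm load; halving an already-negligible
# resistance buys nothing audible.
```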

The second, and rather odd, method of bi-wiring applies when the speaker’s crossover design permits separate drive for its woofer and tweeter: you run separate wires from the amp to the tweeter input and the woofer input on the speaker. They are, of course, tied together at the amp. What does this do? Not much. Proponents seem to think it prevents the tweeter and woofer from interacting in the wire. But that interaction could only occur if the wire had significant resistance. Your speaker wires don’t have high resistance, do they? The bi-wirers also think this reduces intermodulation distortion. Nope. All IMD happens in the speaker itself, not the wire. IMD is a product of something non-linear in the system, and in any sound system the most non-linear element, by a long, long way, is the speaker. Simply sending separate wires to its woofer and tweeter doesn’t change that even a little. Again, using heavy enough wire in the first place would eliminate any concern.

As long as we’re at it, one more thing to mention. The bi-wire crowd also sometimes likes those little wire supports that keep your wires properly elevated off the floor. Some of these cost over $100 each! There are various stories about how they work. I choose the word “stories” because they don’t qualify as “theories” at all. Some claim laying wire on a floor distorts the electric field around the wire, and therefore the signal it carries must also surely be distorted. The results are the usual claimed improvements in non-quantifiable (or even definable) characteristics like “focus” and “detail”. But what they actually do is…you guessed it…drain your bank account. Your bank account will surely have less focus and detail when they’re done, and that’s really all they do.

Bi-amping has a point if you plan to re-engineer the speaker’s crossover design. Bi-wiring has no point other than, like the cable lifters, placing a heavy load on your net disposable income.

On the other hand, a good professional calibration job does make a measurable, verifiable, and definitely noticeable improvement. Hit our main web site and contact us if you’d like to get the most out of your sound investment with a pro calibration.

Dolby Atmos AVRs are coming….

The list grows…again. This time it’s been bumped up by Denon/Marantz and Onkyo/Pioneer (yes, Onkyo has gobbled up Pioneer).

Most Atmos-capable receivers are found at the top of the line, and in many cases (D/M), the new AVRs are not slated for release until fall. And there’s a new way to specify channels, i.e., 7.1.4: that means 7 channels (L, C, R, Sl, Sr, Bl, Br), plus 1 sub (one sub? Really?) and 4 Atmos speakers (positions TBD, but we think front high and overhead somehow).
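For fun, the new three-part notation sketched as a tiny Python parser; the wording in the output is my own paraphrase of the convention, not Dolby’s official terminology:

```python
# Parse "mains.subs.heights" channel notation, e.g. "7.1.4" or plain "5.1".
def describe_layout(spec):
    parts = [int(p) for p in spec.split(".")]
    mains, subs = parts[0], parts[1]
    heights = parts[2] if len(parts) > 2 else 0
    return f"{mains} ear-level channels, {subs} sub(s), {heights} height speakers"

print(describe_layout("7.1.4"))   # 7 ear-level channels, 1 sub(s), 4 height speakers
print(describe_layout("5.1"))     # 5 ear-level channels, 1 sub(s), 0 height speakers
```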

Pioneer has announced Atmos speakers, which apparently are meant for those who don’t want speakers in their ceilings. How they work is not clear yet. Expect entire lines of Atmos speakers, though.

What will I need for my first dose of Atmos at home? Basically, a few things. An Atmos AVR/pre-pro, which may actually be the easiest part. Then you need more speakers, probably at least four more, perhaps more if you are at 5.1 now. And lastly, content. We now have promises of Atmos content on BD as early as this fall.

Two cautions. First, if you have already installed height and wide speakers for some other process, like Audyssey DSX, you may have to move them, or even add to them. If you’re considering that now, hang on a few months and get it right for Atmos. On the commitment question, well, we don’t know how prevalent Atmos content will be, but in theaters, there were almost twice as many Atmos releases as 7.1 releases in 2013.

Finally, we don’t really know how well Atmos will scale to smaller systems with far fewer than the 32 or 64 speakers of a theatrical install. Trust Dolby to be all over that one, but there will no doubt be compromises.

Let’s all adopt the obligatory wait-and-hear attitude. And I promise to leave Atmos alone for a bit on this blog.

Elevating your Audio – Nowhere to go but UP!

According to research done at Bell Labs in the 1930s, the minimum number of speakers/channels necessary for an acceptable stereo soundstage is three: left, center, and right. Of course, we had two for stereo, not three, which means by extension that stereo doesn’t do a reasonable job of producing a believable soundstage! Surprised?

But lots has changed, and we have had more than two for quite some time, actually decades if you include the rather bewildering Quadraphonic days of the 1970s. As for music, I heard my first 5.1 music at a demonstration held at USC in the mid-1990s hosted by Tom Holman (inventor of THX, now holding the title “Audio Director” at Apple). The demo was spectacular, to say the least. A few years later I provided some engineering assistance with another Holman project, the IAMM (International Association for Multichannel Music) conference. A long story for another time, but the demo room had 8 possible channels to play with. At that time Tom advanced the idea that obvious improvements in sound dimensionality became apparent to all listeners every time the number of channels doubled: mono to stereo, stereo to 5.1. So his next step was 10.2. I heard that demonstration at Bjorn’s in San Antonio several years later. Bjorn had built a room dedicated to 10.2, with M&K 2510P powered THX Ultra speakers and M&K subwoofers. Tom’s 10.2 speaker plan included “height” and “wide” channels, and differs slightly from some of the 11.2 plans of today. The demo, held in a dark room, included actual discrete 10.2 recordings. To this day, I’ve not heard a more believable sound space reproduced anywhere.

So here we sit in the middle of our 5.1 or 7.1 speaker arrays, happy as clams. Or are we? HT fans are always wanting to advance to the next level, and now there’s nowhere to go but up. A few years ago Audyssey introduced DSX processing, which can drive 9.2 or 11.2 speakers. It includes Tom Holman’s “height” and “wide” channels, but stays with the basic 7.1 plan as its base. DSX is a bit of a fake-out, because it has to be. There is no 11.2 material in the world, so it has to do something to play those speakers, and what it does is create artificial reflections that produce the audible illusion of a larger room with high ceilings. It’s a nice effect, very convincing, but there’s no material on the soundtrack directed specifically up there.

Time marches forward, and Dolby introduced their theatrical Atmos system a few years ago. Rather than deal with a specific channel count, they say it can handle “up to 64” speakers, which also implies it can make do with fewer if required. Good thing, too! That’s more speakers than most theaters would be willing to install (read: pay for). Atmos doesn’t work like 5.1, where there are channels directed to speakers. It works by directing sound to locations in an object-oriented manner, really a different approach: the soundtrack carries sound objects with position metadata, and the playback system renders each object to whatever speakers it actually has. But it doesn’t get us out of the requirement for more speakers in different places. And, it turns out, the biggie is height.
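The object-based idea can be sketched in a few lines of code. To be clear, this is a toy illustration only, assuming a made-up speaker layout and a simple inverse-distance panner; Dolby’s actual Atmos renderer is proprietary and far more sophisticated:

```python
import math

# Hypothetical 4-speaker layout: name -> (x, y, z) position in meters.
# Purely illustrative -- NOT an actual Atmos speaker plan.
SPEAKERS = {
    "front_left":  (-2.0, 3.0, 0.0),
    "front_right": ( 2.0, 3.0, 0.0),
    "top_left":    (-1.0, 0.0, 2.4),
    "top_right":   ( 1.0, 0.0, 2.4),
}

def render_object(obj_pos, speakers=SPEAKERS):
    """Return per-speaker gains for a sound object at position obj_pos,
    using inverse-distance weighting normalized so the gains sum to 1."""
    weights = {}
    for name, pos in speakers.items():
        d = math.dist(obj_pos, pos)          # straight-line distance
        weights[name] = 1.0 / max(d, 1e-6)   # closer speaker -> more gain
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# A sound object placed up and to the left of the listener:
gains = render_object((-1.0, 0.5, 2.0))
# The "top_left" speaker gets the largest share of the signal.
```

The point of the sketch is the key Atmos property: the same object position produces different gains for a different speaker count or layout, so the renderer adapts to the room while the mix itself never changes.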

Where do I get me some Atmos? Well, an Atmos-equipped theater, logically, and there’s a list of them on the Dolby website. But as of this month several home AVR manufacturers have announced new models, targeted to the European market, that are “Atmos Ready” or have some reference to Atmos. Those would be Denon/Marantz (sorry, no links yet) and Pioneer…and counting, of course. These are big AVRs; they have up to 11 power amplifiers, so you can at least drive some extra speakers in unusual places. But that’s only part of the battle; the other is where to get some media with Atmos tracks. Yeah, that’s a problem right now. It’s not that the material doesn’t exist, of course it does. In fact, the number of films mixed in Atmos in 2013 is nearly twice the number mixed in 7.1, so they’re making tracks, we just don’t have home access yet. So the cart is before the egg, the horse is in front of the chicken, whatever, but at least we will soon have Atmos-capable AVRs, whatever that actually means.

The old adage of “If you want a sound there, you have to have a speaker there” still works just fine, thanks. But “there” is a bit more of an issue at home, especially if you don’t have a dedicated home theater room. We don’t know much about the home version of Atmos yet, but it’s safe to assume that height channels will be important in front, possibly wide channels, and probably a pair over your head, though it’s all conjecture. And home Atmos has to be a little smart: it will have the fewest actual speakers to send sound objects to, nowhere near the full possible 64, or the 32 or so they put in some theaters. But that’s actually a minor concern for the Dolby wizards to work out.

Our problem is where to put all those speakers. If you thought finding room for 5.1 was hard, and 7.1 challenged your interpersonal relationships, well, gird up for 11.2 or more. Just get used to it; it’ll never really stop until we have full holographic audio, which, we hope, will offer infinite resolution of any virtual point source in a 360-degree sphere, all emanating from some sort of acoustical holographic emitter. But until then, or in case we die first, Atmos may be the new sound wave of the future. However, let’s not get ahead of ourselves. All we have is new AVRs that are Atmos Ready; we still need the speakers and speaker plan, and Atmos-encoded material to play at least one good demo.

For Atmos, like 3D, time will tell. I will, of course, settle for the Holodeck 1.0 if/when….