Ring Doorbell Follow-up

My previous post regarding the Ring Doorbell highlighted some of the issues we ran into during testing. Ring ultimately sent a replacement unit, and after installation and setup it behaved exactly the same way: same problems, nothing improved.

The basic problems remain. Motion detection is not as configurable as the app would have you believe; it’s either hypersensitive or insensitive.

Response time (button press to app alert) is so slow that nobody but our very patient mailman will wait for your response.

Recording start is very slow, so motion-triggered recordings miss the object or person that triggered them.

All in all, we’ve decided not to offer the Ring to our clients. Other units and solutions are being tested now; some are quite promising. Check back for updates!

Ring Smart Doorbell Review: Not Ringing Our Bells


The Ring is a fantastic idea. A doorbell with a video camera that lets you know when somebody rings, lets you see and talk to them, detects motion at your door even when the bell isn’t rung, and records that video for later review. It wires to your existing doorbell, or pairs with its own wireless bell, and has an app so you can answer the bell and view video anywhere you have Internet access.

That’s the idea, and in theory, that would be just wonderful.

I’m constantly watching for products we can carry for our customers that are cool, high value, and really well done. I got a Ring to demo on our home just before we headed out to the lake for a couple of weeks. What better opportunity to test it? So I picked one up from one of our distributors: the original Ring, not the Ring Pro.

The Ring has several distinct features:

1. It’s a conventional doorbell button
2. It’s a video camera with network connectivity
3. It’s got a WiFi card so you don’t have to wire it to your net
4. Motion detection – it detects “heat signatures” (as Ring calls it) and records/alerts you to that motion.
5. There’s a mic and speaker for two-way communication
6. There’s a built-in battery in case you can’t hard-wire
7. The iOS app

I was able to test most functions, so starting at the top…

Works fine as a conventional doorbell. We have a vintage bell on a 24V AC doorbell circuit, and the Ring rings it just fine.

The video camera is color, fairly high definition, and very wide angle. In our doorbell position it shoots almost the entire front porch clearly. Our porch light stays on at night, so light sensitivity wasn’t tested, but the night photos look fine.

There is a bug in network connectivity, though, and it’s not small. The camera apparently goes to sleep when no motion or button press is detected, and it takes some time to wake up and reconnect to your WiFi net. It MUST be connected to record; there is no local storage. That means if motion is sensed, several critical seconds go by before anything is recorded. That’s about how much time it takes for a UPS driver to come up our stairs, drop a package, and head to the truck. I have lots of videos of UPS drivers’ backs as they walk away. I also have videos of our house sitter leaving after a visit, but not arriving; she was able to climb the stairs and enter the house before any recording was done. That also means someone can rush up the stairs, steal something, and head off unrecorded. This is actually the first deal-breaker.

Connecting to our WiFi net for the first time was easy, and done through the app. Security wasn’t a problem. However, as I’ll relate later, WiFi signal strength is a big deal for this little gadget. We were hitting it pretty well with our main router.

Motion detection is a problem. The Ring site claims this thing is looking at “heat signatures”, in other words, infrared. The app lets you block off general directions, and control distance and sensitivity. Or at least that’s what you think is going on. In my case, we have a residential street 42′ from the mounting position of the Ring. The traffic is very light, and not fast, but there is no distance and sensitivity combination that lets the Ring ignore traffic but detect people coming up our front steps. Pull back on the distance, and it fails completely. Set the distance to 15′ or greater, and it picks up passing traffic, triggering a recording. The recording will be of a quiet street with no traffic, of course, because the Ring takes too long to wake up to catch the actual thing that triggered the motion detection in the first place. Turn down sensitivity, and you get no motion detection at all. So for me, the best setting was 15′ and sensitivity high. I got lots of traffic that way, but that’s the only setting that would also pick up people near the house; at any other setting, foot traffic was ignored entirely.

The mic and speaker work well enough considering the tiny size.

The built-in battery was used only for testing, as my installation had it wired to our 24V AC doorbell transformer and bell.

The app. Oh, the app. Here’s the thing: I’d like to know if there’s motion at my door. I don’t want to be alerted to every passing vehicle. After being constantly hammered by motion alerts, I finally gave up and turned motion alerts off. It still records when triggered, but the app doesn’t keep bothering me. The audible chime has an interesting sound, but I turned that off right away as too annoying; the app will vibrate the phone instead, and ping my smart watch. Browsing through dozens of videos in the app is tedious to say the least. Most are junk shots of my street, but every so often there’s a delivery to see. You have to open every single one and play it, though, a process that also takes more time than it should. If only there were a thumbnail taken a few seconds into the video, I could prescreen which ones were worth burning time and cellular data on, only to delete. I finally gave up deleting them.

Doorbell rings are important. You’d like to know about them immediately, so you can open the live video stream, see the caller, and respond if you choose. I was never alerted soon enough to do that. By the time I got the alert, the mailman needing a signature was gone. Purpose defeated.

Then there’s that interesting thing in the video you’d like to look at more closely. Oops, sorry, no pinch-zoom available; you’re stuck with the entire wide-angle shot.

The app crashes a bit more often than most, but assuming updates are on the way, I can be patient about that.

What I’m not willing to accept is false alarms, videos taken after the event of interest has passed, and slow doorbell ring response.

I’ve nearly used up my free Ring server demo time, and I won’t be signing up. It’s a great idea whose time has come, but it’s not ready. The level of frustration is high. From a custom installer’s viewpoint, when Ring support tells me I might consider installing a WiFi router closer to the Ring to solve response time issues, what I hear is “spend more money”. Sorry, I can’t be telling that to my customers. “It doesn’t work!” “Oh yeah, spend more money to make it work.” Not going to happen. If I include a new router with the Ring installation, I’m quintupling the installation cost. That’s also not going to happen. If I’m doing that, I’ll spec a real video surveillance system. Oh, and that would be without “cloud storage” and an annual fee.

Oddly, I’m still involved with a support ticket on the Ring. I’ve been asked to test WiFi speeds, send copies of videos of passing vehicles, and make motion sensor adjustments. So far, no solution offered has done any good, but apparently my WiFi speed at the Ring is questionable. Wow, 7.2 Mbps down, 15 Mbps up… how much do you need?

The other growing concern: Ring doesn’t connect to other devices; it’s in its own world. If you’re getting into the world of Nest, for example, and want everything networked together, Ring is the wrong product.

Sorry, Ring, you rang the wrong number this time. Not to worry, though; Ring is anything but the only product in the category. I’ll be testing others shortly. Have to uninstall the Ring thing first.

4K/UHD update

I wanted to come back to a few old subjects, and we can start with Ultra HD/4K. No point in updating the status of the concept here; that’s certainly been done to death elsewhere. I’ve now had hands-on UHD for almost a year, and have a few thoughts.

First, I’ve succumbed to using UHD as the term for the new higher-than-1080p home video format. I used to resist and say 4K, but since UHD isn’t really 4K, that’s over now. UHD’s resolution is below DCI 4K (Digital Cinema Initiatives), and by a rather inconvenient amount that doesn’t scale well, but can be cropped. So, yet again, we have the movie industry creating content at a different native aspect ratio than is presented on our televisions. How did we get into this mess anyway?

We have Dr. Kerns H. Powers of SMPTE to thank/blame. Back in the 1990s, when thought was being given to wider aspect ratio TVs, the question was, of course, how wide? We had US TV at 1.33:1 (4×3), which came from the film industry’s “Academy Standard” of 1.375:1, the aspect ratio of nearly every film made up until the early 1950s. The TV folks reasoned that they’d be able to use all that wonderful material on TV, so it made sense to shape the TV picture tube fairly close to that. 1.33 vs 1.37 is actually a negligible difference, but it’s key to note that there actually was a difference.

But something strange happened. TV drew the audience away from the theater, and the theater immediately saw this in reduced ticket sales. So, knowing they had the big screen in the first place, they made it bigger… wider. In the 1950s there was a pretty big handful of different wide-screen methods; the study of wide-screen film formats is the subject of an entire book, “Wide Screen Movies: A History and Filmography of Wide Gauge Filmmaking” by Robert E. Carr and R. M. Hayes, 1988. There are now other books as well, but this was, I believe, the first. However, all the various ratios shook out to just a few, and we ended up with 2.39:1 for anamorphic movies, and 1.85:1 for “flat”, non-anamorphic movies. 1.85:1 was simply a cropping down of 1.37, now known as full frame or full aperture. The cropping could be either “hard matte”, printed on the film that way, or cropped in the projector by the aperture plate. Many films were shot full frame and released that way, though theaters used the cropped aperture plate anyway. The reason: full frame made it easy to transfer to TV!

So, by the mid 1950s we had the movie industry creating content that wouldn’t fit TV well. 2.39:1 on a 1.33 TV is a horrible mess. You either “letterbox” it and end up with far less vertical resolution, or you “pan and scan” each scene so the main action is in the TV frame. Neither works well.

So in the early 1990s, with several HDTV demonstrations already being done, it made sense that SMPTE would want to lock down the aspect ratio for the foreseeable future. But what to do in the transition? We’d now have TV content AND movie content that wouldn’t fit most TVs in the world. That was a self-limiting condition, of course, as at some point old TVs would mostly go away. But then we had the back-catalog of every TV show ever made, now played on new TVs with a wider aspect ratio. What to do, what to do. There was clearly going to be a compromise.

The choice, it turns out, was to make the new TV aspect ratio the geometric mean between the narrowest and the widest aspect ratios: 1.33:1 and (now, note this figure carefully!) 2.35:1. That works out to 1.77:1. Bingo. Done. Except, it wasn’t.
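
The arithmetic is easy to check. Here is the geometric-mean calculation, worked out:

```latex
\sqrt{1.33 \times 2.35} = \sqrt{3.1255} \approx 1.768 \approx 1.77
```

And 16:9 is 1.78 to two decimals, so the familiar 16:9 panel is just this compromise figure rationalized to small integers.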

That nice new perfect geometric compromise did something really weird. It made every existing format at least a bit incompatible. There was NO 1.77/16:9 material anywhere. And as far as the film boys go, there never would be. They’ve been quite happy with 1.85:1 for decades, and 2.39:1 for anamorphic, 2.20:1 for 70mm flat, and IMAX at 1.43:1, or 1.9:1, or whatever the dome was. At least, that’s where things sort of ended up by the early 1990s and later. Yet TV was now 16:9, and nothing fits that without something being a bit wrong. Old 1.33 material is either pillar-boxed with blank side panels, or (ugh!) stretched to fit. 1.85 films are cropped, 2.39 films are letterboxed, and only HDTV-original material fits perfectly. The moment I heard SMPTE’s decision, I asked myself why on earth we would NOT do 1.85. We’d still compromise the extremes, but at least we’d have thousands of films that exactly fit, and new material would fit too. Frankly, I’ve always felt wider was better anyway.

So now we have TV’s staunch and unique 16:9, and film’s staunch and not-so-unique 1.85 and 2.39, with resolutions respectfully rounded down from the actual figures to 2K and 4K. Along comes digital. The Digital Cinema Initiatives group chose to stay with the basics and not change anything. But we did force everyone on earth to change to a digital TV, right? Sorta? That’s 16:9. Then we bumped from 1080p at 16:9 to UHD. Of course, we couldn’t and wouldn’t change the shape of the screen now, right? We’re “stuck” with 16:9, so UHD (disrespectfully rounded UP to 4K) is still that. So part of what makes DCI 4K different from consumer 4K is the aspect ratio. The other part is the dimensions of the actual pixel grid, which, according to Joe Kane, simply don’t scale well to each other. So the dimensions AND the aspect ratio difference are fixed by cropping the original DCI frame down to UHD. Cropping? That means throwing away the edges. Those would be the edges the Director of Photography and the Camera Operator pay pretty close attention to in composing each shot.
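
To put numbers on that, here’s a quick sketch of the two pixel grids (my own illustration; the DCI figures are for the full 4K container):

```python
# DCI 4K container vs. consumer UHD, side by side
dci_w, dci_h = 4096, 2160    # DCI "4K" container
uhd_w, uhd_h = 3840, 2160    # consumer UHD

print(f"DCI aspect:  {dci_w / dci_h:.3f}:1")    # 1.896:1
print(f"UHD aspect:  {uhd_w / uhd_h:.3f}:1")    # 1.778:1, i.e. 16:9
print(f"scale ratio: {dci_w / uhd_w:.4f}")      # 1.0667, an awkward 16:15
print(f"crop:        {dci_w - uhd_w} columns")  # 256 columns thrown away
```

That 16:15 width ratio is presumably the “doesn’t scale well” part: it’s too close to 1:1 for a scaler to do anything graceful, so cropping wins.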

So it’s a compromise, and I think a poor one. The choice was a geometric mean between extremes, when it probably should have weighted 1.85 and 2.39 (not 2.35) much more, and un-weighted 4:3, since at some point there would be no more of that made. Personally, at our house we watch a lot of 4:3. We love old movies, and some of our favorite SciFi is Star Trek, all shot 4:3. It’s nice to see some shows that were shot on film remastered in HD, but as much as I like Friends filling my screen, I wonder what I’m missing. I’ll live with the 4:3 in the middle of my screen with the blank side panels, thanks. I’ll never stretch anything.

Solutions? Not really. Scaling DCI 2K and 4K to 1080p without cropping can be, and is, done quite well. Cropping DCI 4K to UHD on Blu-ray? It just seems wrong, the result of a 25-year-old, ill-conceived compromise.

So, I’m now calling it UHD. I’m not calling it 4K. And I LOVE my UHD TV. It does a lot of really nice things for 1080p content. What little UHD I’ve thrown at it, it’s handled well. Its internal scaling of 1080p to UHD (a clean 2× in each dimension), especially with still images, is spectacular. I’m still in color space Rec. 709, but that’s life. I didn’t think I’d be saying this, but UHD TVs actually do look better than 1080p, even at a viewing distance where it shouldn’t matter.

More on that another time.

Parrot Zik 2.0 Headphones Review


Where’s the Green?

A few years back I stumbled, exhausted, into the Parrot booth at CEDIA. “Not another pair of headphones…” I heard myself saying aloud. But they were missing a wire, and were Bluetooth paired to a player, so I thought I’d give them a try, then rip them off my head in disgust.

But I didn’t do any ripping. Not at all. In fact, I ended up ordering a pair, and loved many things about them. And hated a few things too. But they’ve left the building with my newly married son, so I’m Zikless once more.

Until a week ago, when I splurged just a tiny bit and ordered myself a new pair of Zik 2.0 cans from Parrot. Now, while I felt I should probably put on a Hawaiian shirt and play some Jimmy Buffett, I resisted (it didn’t take a whole lot), and ran them through my usual demo playlist.

So, they’ve done a few things since the 1.0. The design is very similar, the touch pad on the right ear is still cool, and they come with a very audiophile-esque set of USB and audio cables, woven-cloth covered, with nice chrome connectors. The cloth bag is still too cheap to be of much use as a carrying device. The battery life is supposed to be better; no clue on that yet. And they’ve added colors. Lots of them, 5 more besides the original black: white, blue, yellow, tan, and orange. Didn’t see your favorite there? Me neither, and I couldn’t see myself wearing chroma-key blue headphones anywhere in public, much less orange or any of the others, so I ordered black again. Since then, I’ve spotted a blue pair in the wild, and they did in fact look ridiculous on the guy, who looked at mine with a bit of regret (we were in an airport).

And you’d expect there to be some sonic improvements as well. But, well, kinda.

Here’s the thing. The Zik 1.0 were darn good, a bit bass heavy, and only slightly colored. But there were too many settings, including noise cancellation on/off, which changes the sound quality quite a bit, and all that room-sim processing, which is best just left off. The old app was a bit clunky too, and they’ve retained that feature this time as well. The new app is a lot different, and full of odd little quirks and bugs. The most annoying is that you can’t actually save any settings without an “account” and a live Internet connection. So, if you want to tweak them for listening on an aircraft (did it), and you don’t feel like paying for the most expensive Internet access on earth or slightly above it, your settings will not only not be saved, but may not even “stick” while you’re using the headphones. Hey, Parrot… get real! Save the settings in the app, or better yet in the cans themselves, without the Internet being connected. Some of us use devices without data service. Having the settings in the headphones would make them portable to all devices, app or not. You do know we use these things with our laptops, which are app-less, right?

The sound? Yeah, still pretty good, but different. In fact, the wonderful Audyssey Amp tuning for the 1.0 headphones is completely wrong for these, and they don’t have these profiled yet. So, they’re colored, and I really wanted to fix them. There’s quite a nice little equalizer in the app; it’s a limited parametric, just 4 bands. You can do a lot with it, if you can get it to behave. Making adjustments is a pain at best. And you can’t just edit a saved setting and tweak it; you basically start over each time. Who wants that? I spent 15 minutes on a “tuning”, then had to start over for a minor tweak. Dumb.

The app also locks up a lot, causing a forced shutdown and restart of the app. The headphones pair well and easily, and the sound quality over Bluetooth is quite good (better with the cord, of course). And, like the older model, passive mode is not sterling. Improved over the 1.0, but still not great.

The noise cancellation is excellent, though. It killed airplane noise just fine, and was so good at killing a hotel air-conditioner’s noise that I thought the thing had shut off. But that brings me to one of my bigger issues. No noise cancelling when using the headphones to make a phone call! Huh? That’s when noise cancellation is needed most! And it’s not available. Sheesh. I understand the problem: you need what telco calls “side tone”, a bit of your own voice mixed into your ears for comfort, and the mic is picking up all that noise around you. Noise cancellation inside the headphones is fairly easy, but it’s an entirely different problem to make a good noise-cancelling microphone. But that’s what these things need. Placing or taking a call makes all the ambient noise pop in, which is jarring. Overall, the phone-call quality is just OK.

Back to audio, I’m still trying to tune them with the app and the parametric EQ, getting closer. It’s an iterative process because I have to keep a record of my settings so I know where to start from next time I adjust, and the touch-sliders don’t move well, so lacking any means to type in parameters, it’s a bit of a video-game to adjust the EQ. However, I’m liking what I hear.

Are they better than the 1.0? I think the noise cancellation is improved. I think the sound is different, not worse or better, but different, and they need EQ to make them neutral, just like the 1.0 did. But without Audyssey to do the dirty work, it’s all up to me now. And, funny, the EQ changes when you turn noise cancellation on and off, which means you need to retune for both conditions. That’s not a huge problem, but it is an operational glitch that many won’t understand how to respond to.

Overall, I guess I’m a Parrot head still. That would be a card-carrying, Hawaiian shirt-wearing, guitar pickin’, Zik 2.0 on my head, Parrot head. They are not cheap headphones, until you realize what’s going on in them… then they’re the best deal on your head. Yes, I like them, and will probably love them. I’m keeping them too. Got them from Amazon at less than my wholesale source last time (which tells you why I’m not a Parrot dealer!).

Overall, 1-10: Sound is a 7, might EQ to an 8.5. Comfort is an 8 (they’re heavy-ish, and might warm up if you’re not in a freezing aircraft or hotel room). The app is a 3; it needs a lot of work, especially in the EQ control action and the ability to save settings without an Internet connection. Build quality is excellent. Noise cancellation is a 9, excellent. Phone call quality is a 6; it could be better, and could use a dose of that noise cancellation.

So, I’m good with them for now. And, yes, I did play the “Dead Parrot” Monty Python sketch on them, just to be a jerk.  Unfortunately, this time only, I was a jerk in private.

Update (08/01/2016)

Less than a year after purchasing the Parrot Zik 2.0 headphones, I’ve had a silly little problem with them: the earpads are coming unseamed. A call to Parrot’s support line and a brief chat to request new earpads, and I discovered that they are not user-replaceable. However, I’m still in the warranty period, so the Zik 2.0 headphones are being exchanged. There are no more 2.0 units available in black (I think the rep said they only have yellow!) so they’re sending out a pair of Zik 3.0. Well, OK, cool; I guess another review is in order.

Finally getting ears on the Pono

First, let me welcome the dozens of new subscribers to this blog. Dozens, within the last week. And all of you have oddly similar, and oddly bogus-looking, email addresses. And none of you are in the USA! Wow! I’m not sure what you hope to get from me, but here’s what I have for you.

Today I got my first chance to get hands and ears on a Pono High-res digital audio player, Neil Young’s little science project.  Sure, I know it’s been out a while.  Hey, I’m a busy guy, not much time to play.  Today I found myself with a bit of forced free time, so I visited a local audio shop and spotted the Pono, and got a test drive.

The caution-yellow prism-shaped device (you can get one in black too, but both look like they should be hanging on a tool belt) has an oddly solid feel, and the GUI isn’t half bad. The shop plugged in a pair of Sennheiser Momentum headphones, and I started scrolling through the song list looking for a nice fresh high-res bit of audio to listen to. I found a ton of Youngian masterpieces, all from the 1970s, so no joy there. “But wait”, you say, “that’s all analog! That’s high-res stuff!” Sorry, no. Having actually lived through that period, and worked in pro audio then, I can tell you analog anything isn’t high-res. There are basically only two requirements for high resolution: better performance than 16-bit linear PCM, and better than a 48 kHz sampling rate, or the equivalent in analog performance. Analog tape of the 1970s, and even up until now, doesn’t hit either of those basic requirements. So while the files may have been 192 kHz, 24-bit, the original audio wasn’t even close.
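
To put a number on that, the textbook rule of thumb for PCM dynamic range (a theoretical figure, not a measurement of any particular master) is:

```latex
\mathrm{DR} \approx 6.02\,N + 1.76\ \mathrm{dB}
\qquad\Longrightarrow\qquad
N = 16:\ \mathrm{DR} \approx 98\ \mathrm{dB}
```

Professional analog tape of that era is typically quoted at 60-70 dB signal-to-noise, so even plain 16-bit CD resolution already outruns the source by a wide margin.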

The search for a demo piece continued. I scrolled through the song list, checking each one, until finally… wham! I hit the end of the list. Huh. No actual high-res material to demo. Odd. So, I reasoned, I’d just pick something more or less current; at least it wouldn’t be analog. But the current selections were all in the throes of the Loudness War, crushed to death by loudness audio processing. No high-res there either. So I searched for the best of the best analog tracks, and landed on Miles Davis, “So What”. Wow, there it was, in all its analog, tape-noisy glory, with Miles’ horn clearly being intermodulated by the string bass. Of course, at 192 kHz, 24 bits, all of that distortion was clearly and accurately preserved.

The search went on for a little while longer, auditioning tracks that sounded perfectly identical to their original masters, which weren’t high-res either. I finally finished by listening to a favorite Neil Young track, “A Man Needs A Maid”. Yes, analog, tape hiss at the orchestra cues, and all. Listening to it at length, I gradually experienced the full weight of Young’s song-craftsmanship, and the rather unusual production of the piece, and loved it… just about as much as I ever have. No more, no less.

Perhaps it was the headphones. Sennheiser Momentums, at $350, should be no slouch. Of course, they are not uncolored either; there’s that midrange peak, and bass that’s there but a little sloppy. Still, nothing I couldn’t hear high-res through, right?

I like the styling of the Pono far better than I thought I would. I hate the color; they should have at least come up with a few options, or stayed with something neutral. I love the fact that you can shove in an SD card and expand storage! If only Apple… yeah, never mind.

The real issue is one of “Why?” What do you get for your money? Less a question of the hardware, which seems fully capable, and more of the entire concept of high-res audio. Is it really high-res? What benefit is there to a high-res copy of an old analog master? Just this: if something was done between the analog master and a CD or vinyl release that was either not done in the transfer to high-res, or done better, then I can imagine an advantage to those files. But not because they are high-res; rather, because they were created more carefully.

High-res audio must have a traceable heritage, what Mark Waldrep rightly calls “provenance”, all the way back to the original recording method. Up-sampling doesn’t “create” high-res out of standard res digital, or anything analog.
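
A tiny numerical sketch (my illustration, nothing to do with the Pono’s internals) makes the point: ideal upsampling of a band-limited signal puts exactly nothing above the original Nyquist frequency.

```python
import numpy as np

# One second of band-limited "standard res" audio: 1 kHz and 15 kHz tones at 44.1 kHz
fs_in, fs_out = 44_100, 192_000
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 15_000 * t)

# Ideal band-limited upsampling: zero-pad the spectrum out to the new rate
X = np.fft.rfft(x)
X_up = np.zeros(fs_out // 2 + 1, dtype=complex)
X_up[: X.size] = X
y = np.fft.irfft(X_up, n=fs_out) * (fs_out / fs_in)

# Relative energy above the original 22.05 kHz Nyquist limit: numerically zero
Y = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(fs_out, 1 / fs_out)
print(Y[freqs > 22_050].max() / Y.max())  # ~1e-13: nothing new up there
```

The 192 kHz file is bigger and its sample grid is finer, but every bit of its content was already in the 44.1 kHz original.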

So, we’re still at Square One.  We can’t deliver high-res unless the entire chain, mic to our ears, is high-res.  That’s still the challenge, and possibly the undoing of the entire concept.

While I like a lot about the Pono player, I’m sticking with what I have for now while I wait for the source material to catch up.

Dumping Plasma

Just saw this article, thought it was worth sharing even though I personally don’t agree with every point.

As to the “value” of UHD TV, it’s still a bit undefined, but the new and higher-end UHD sets (we have one in our demo system) do look better than the equivalent 1080p, though the reasons are not as obvious as “more pixels”.

The primary reason to move on from even a Kuro would be its cost of ownership vs. a new TV with equal or better performance. That alone isn’t compelling, but if you roll the current used value into the picture, a new TV starts to make a lot of sense.

http://hometheaterreview.com/six-reasons-to-finally-dump-your-pioneer-kuro-tv/

Audio Power and Power Amplifiers

When we look at our home theater sound systems in reverse, we first see the room, then the speaker. If we stop there, we’ve just viewed the most difficult and important portions of our systems, the ones with the biggest influence on what we end up hearing. Setting that aside for a moment, let’s look backwards at what came before the speaker. We have wires, and amplifiers. Both are responsible for getting energy to the speaker, hopefully without changing it in any way from the signal that is input to the amplifier. And, hopefully, with enough power gain to provide adequate Sound Pressure Level (SPL) for the listener’s purposes.

So, what’s adequate SPL? In the world of THX, the goal is to reproduce sound “in the way that the creators intended”. That implies a system that sounds similar to the one the creators used, played at a similar volume. There is something called “Reference Level”, and if you were in a commercial cinema or a dub stage (a cinema used for the final mix of a film soundtrack), you’d have a specific reference level of 85dB SPL. That’s a level measured with a test noise source and an SPL meter at a particular seat or seats in the theater. The test noise level is set to produce an 85dB SPL reading on the SPL meter. That level corresponds to an average level that permits peaks up to 20dB higher to pass through the system to the listener undistorted. So, if we were to desire the same level in our homes, we’d want that same 85dB SPL reference level with 20dB of peak headroom. This, in fact, is what a THX Ultra2 system is intended to do in the typical 3000 cubic foot or greater home theater.

Looking at the speaker, it must have high enough efficiency (its ability to convert an electrical signal to an acoustic one) to get that high SPL into the room and to the ears without requiring unreasonable power to do so. A THX Ultra2 speaker must have an efficiency/sensitivity of at least 89dB/W/m, which means with 1 watt applied, a microphone 1 meter away would read 89dB SPL in an anechoic room. That’s a minimum figure; many THX Ultra2 speakers can beat it. It also means we can calculate the power required to hit reference level at any distance from the speaker, with some allowance for reflections in a real room.
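
For those who want to follow the math, the standard free-field estimate behind these calculations (my formulation of the usual inverse-square relation) is:

```latex
\mathrm{SPL} = S + 10\log_{10}\!\left(\tfrac{P}{1\,\mathrm{W}}\right) - 20\log_{10}\!\left(\tfrac{d}{1\,\mathrm{m}}\right)
\qquad\Longrightarrow\qquad
P = 10^{\,\left(\mathrm{SPL}_{\mathrm{peak}} - S + 20\log_{10} d\right)/10}
```

where S is the 1W/1m sensitivity, d is the listening distance in meters, and the peak target is 85 + 20 = 105dB SPL. Room reflections then buy back a few dB.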

Let’s look at an example. The spectacular new M&K S300 speaker has a rated sensitivity of 93dB/W/m. So if we consider some gain from wall reflections and sat 10 feet away, a single speaker would need a 75W amplifier to produce the reference 85dB SPL with the required 20dB of peak headroom. If we sat 15′ away, that amplifier would need to be rated at 175W to do the same thing.
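
Here’s a minimal sketch of that calculation in Python. The 3dB room-gain allowance is my assumption, chosen because it lands close to the 75W and 175W figures above:

```python
import math

def amp_power_watts(sensitivity_db, distance_ft,
                    peak_spl_db=105.0, room_gain_db=3.0):
    """Amplifier power needed to reach a peak SPL at the listening seat.

    sensitivity_db: speaker rating in dB SPL at 1 W / 1 m (anechoic)
    room_gain_db:   allowance for boundary reflections (assumed here)
    """
    distance_m = distance_ft * 0.3048
    # Inverse-square loss relative to the 1 m rating point
    loss_db = 20 * math.log10(distance_m)
    needed_db = peak_spl_db - sensitivity_db - room_gain_db + loss_db
    return 10 ** (needed_db / 10)

# M&K S300 from the text: 93 dB/W/m sensitivity
print(round(amp_power_watts(93, 10)))  # ~74 W at 10 feet
print(round(amp_power_watts(93, 15)))  # ~166 W at 15 feet
```

Note how going from 10 to 15 feet more than doubles the requirement; distance is expensive.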

From this, two questions pop up. One, how far away will we sit? And two, how much amplifier power will we need for “reference level listening”? These are easy calculations, though, and fairly predictable. I’ll throw in a third question: will you listen at reference level, or will you listen significantly below or above it? It turns out that at home, most people find reference level about twice as loud as they want to listen. That doesn’t mean they won’t turn it up to show off! But it does mean that typically they’ll use about 1/10 the power they paid to have. What’s that? Is that right? Yup: if you turn your volume knob down to -10, your volume will still sound loud, and that 170 watt amp you have will be running at 17 watts on peaks, and (now, don’t be shocked…) about 0.2 watts average. In fact, the actual SPL in the room may be a bit higher, because in this discussion we’re talking about a single speaker, but in our home theaters we have at least 5, and they do reinforce each other to create the total SPL in the room.
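
The decibel arithmetic behind those numbers, for anyone checking along:

```latex
-10\ \mathrm{dB}:\ 10^{-10/10} = 0.1
\ \Rightarrow\ 170\,\mathrm{W} \to 17\,\mathrm{W\ (peaks)};
\qquad
17\,\mathrm{W} \times 10^{-20/10} \approx 0.17\,\mathrm{W\ (average)}
```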

So, back to our questions, how much power will we need? Will an AVR with typical power ratings of 120W per channel be enough? In most cases, the answer is yes, if we stick with speakers with adequate sensitivity, and don’t sit too far away from them. Even if you do end up somewhat short of being capable of playing at theatrical reference, you may never actually do that, so before investing in lots of amp power, you may want to try out using just your AVR and see how you use it. With most AVRs at the upper end of the product line, external power amps are easily added later.

People often challenge the power ratings on AVRs on the basis that they aren’t tested “all channels driven”, and that if they were, the power per channel would be substantially less than otherwise stated. All true, but it’s not all that important either. Soundtracks do not drive all channels equally. Estimates from the likes of Dolby place the highest power demands on the center channel, to the tune of 60% to 70% of the total level in the room, with the least power going to the surround channels. On a peak basis, it is possible for even surround channels to demand quite a bit of power, but does that happen precisely simultaneously with the other channels? The likelihood is exceedingly small, statistically nearly impossible. All-channels-driven tests are valid, but overly demanding, and do not represent real usage conditions.

However, if you do really want the potential of full power to all channels simultaneously (or you have a BIG room), you’ll be getting external power amps, which actually can provide that capability, and at substantially higher power levels. I’d be happy to consult with you to help select the best amps for your situation.

Computer Nightmares

I’m writing this on my “trusty” 15″ MacBook Pro. It’s the first thing I’ve written on it since a major repair: a new logic board. All together, my MBP has been down for over a month, and it’s my primary machine for, well, everything. See, my wife and I both have 2011 MacBook Pros, and they are identical in every way, including how they both failed. If you google a bit, you’ll find that these computers have a manufacturing defect: their video “card” (really, a chip) is soldered to the logic board with lead-free solder, which degrades, cracks, and grows feathery crystals that short between contacts. Over time, most of them fail catastrophically, rendering the logic board pretty much artwork.

There’s a class-action lawsuit filed against Apple about this, as repairs run between $500 and $1200, on a piece of computer hardware that cost nearly $2000 new. Yes, I’ve joined the suit. But I also fixed my own computers.

Since my wife’s MacBook failed first, I pulled her logic board out and sent it away to be rebuilt/exchanged. During the time it was away, I put her hard drive into my MacBook and she continued her work as if nothing was wrong. I, unfortunately, did not. My back-up laptop is a Windows XP machine. Yes, I still have some hair left on my head.

Just about the time her logic board came back and I got her MacBook up and running, and mine running again, mine failed too. I had it for about 3 days, then off its logic board went for the same repairs, and I just got it back and working. It’s been a long month!

During that time I discovered a few things worth sharing, even with those reading this with a primary interest in home theater. You don’t really know how much, or how little, you have backed up until you lose your computer completely. In my case, I ran backups using an external drive and Apple’s Time Machine. So even if my computer never ran again, I could do a complete restore to a new unit. Great, but what do I do if I don’t want a complete restore, don’t have a new computer, and just need some files? Well, that backup is useless. I pulled my main HDD out of the MacBook Pro, put it in an external drive bay, and mounted it on my Mac Pro so I could access some files, but that didn’t get me my full email, because my Mac Pro is older, doesn’t run the current OS version, and my mail files wouldn’t import.

For email, I used my iPad, which I love/hate for many reasons. First, it’s my development platform for our Platinum Control system, so it can’t just wander off with me. Second, typing on the screen… ugh. I never do well with it, even with autocorrect. My USB keyboard is also tiny, and disables autocorrect, so other than being good for quick note-taking, it’s really limited.

Then there are applications. I never realized exactly how many I use that don’t exist on our other machines! Makes for some interesting license swapping, to say the least.

Then there’s my iTunes library. It’s about 140GB, not exactly “portable”, and while the MacBook Pro was down, it was anchored to the Mac Pro, which IS an anchor. Sorry, cloud backup for 140GB just ain’t in my world.

My backup strategy is still developing. I’ll still do the usual routine Time Machine backups, but I’m also keeping a copy of other critical files, license keys, emails, etc., on another drive (formatted NTFS, by the way), and that will hopefully make access easier next time. I’m also working on automating that, because I’ll never remember on my own. And, finally, I’m installing critical apps and tools on more than just my laptop so I can deal with things on other hardware and, hopefully, never have to use webmail EVER again!
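
For the automation, something as simple as a scheduled script would do. Here’s a minimal sketch in Python; the paths are hypothetical, so point it at your own critical folders and second drive:

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical source folders; substitute your own critical files
CRITICAL = [
    Path.home() / "Documents/licenses",
    Path.home() / "Documents/keys",
]
# Hypothetical mount point for the second drive
DEST = Path("/Volumes/CriticalBackup") / str(date.today())

for src in CRITICAL:
    if src.exists():
        # Copy each folder tree, preserving structure (needs Python 3.8+)
        shutil.copytree(src, DEST / src.name, dirs_exist_ok=True)
        print(f"copied {src} -> {DEST / src.name}")
```

Hang that off cron (or launchd on a Mac) and the remembering problem goes away.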

Next move is an upgrade. I’m doing an SSD plus a shared HDD in the optical drive bay, and attempting a triple boot: OS X (latest version), Windows 7, and OS X 10.7.5, so I can still use older software every so often. You can’t triple-boot with Boot Camp alone, so this should be interesting!

Wireless HD Audio streaming: 24/192 over WiFi!

Every so often I trip over a really cool product to add to our line, and today was one of those trips.

It’s called Voco. It streams audio. But that’s just the beginning; it does so much more. For example, the holy grail of home audio streaming has been HD audio: content at 24/96 or higher. The Apple solutions we’ve been using are great, but down-sample to 16/44.1 or 16/48. Many other solutions are around; many are kind of half-baked, or not flexible. Many of our Denon AVRs, for example, can look out at a DLNA server and grab 24/96 files and play them… but they’ll be down-sampled to 24/48 unless you turn Audyssey off, a choice I’m personally not prepared to make.

Today I found Voco. Not only will their products stream audio files from pretty much any sort of library, from iTunes, to DLNA, to a NAS device, even in the cloud, but one of their products can happily play HD audio files up to 24-bit/192 kHz! Now that may not be a first-ever, but what’s great is it’s something with a real front end: a user interface that works, is easy to use, and integrates very well. And Internet Radio, Pandora, even YouTube (video too!) are supported. You can navigate the system with an app that includes voice commands, and each Voco device can become a WiFi hot spot, filling out your home WiFi coverage if necessary.
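
It’s worth noting what 24/192 actually asks of your WiFi. A quick back-of-envelope for uncompressed PCM (lossless formats like FLAC typically cut this roughly in half):

```python
# Raw PCM bitrate for a stereo 24-bit / 192 kHz stream
sample_rate_hz = 192_000
bits_per_sample = 24
channels = 2

bitrate_mbps = sample_rate_hz * bits_per_sample * channels / 1e6
print(f"{bitrate_mbps:.1f} Mbps")  # 9.2 Mbps before compression
```

Easily within reach of a healthy home WiFi net, but enough that coverage matters, which makes the built-in hot spot feature a nice touch.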

I’ll be writing more about Voco soon, but for now, it’s a first step to a wireless, distributed high resolution audio system with centralized storage. And they’ve thrown a lot of extra functionality in to boot.  How great is that?

I love to say “Yes, we can!”  We’ll be installing Voco systems soon.