Wednesday, November 7, 2012

The unspoken pitfalls of DMM reviews

These are just some ramblings on equipment reviews.

Personally, I love EEVblog.  If you have not already done so, be sure to go over to EEVblog.com and check out some of Dave's ratings, rants and reviews.  Just listening to him is enough to make me say "hell, yeah!".  I wish I could do what he does.

One of Dave's many services to the electronics community is his honest, no-BS reviews of digital multimeters and other equipment.  He's amazingly thorough and calls them like he sees them, including teardowns to reveal the inner nastiness of some of the units.  He's managed to blow several up just by using them.

It's an invaluable service, but not without pitfalls.  Not knocking Dave here at all - I can't think of anyone I'd rather have telling me which equipment is good and which is crap.  But the whole review methodology is limited by design.

Here's the deal.

Variation is the thing.  Reviewers - any reviewer - can only look at one example (or, in rare cases, two) of any given piece of equipment.  A single sample cannot show the unit-to-unit variation across the entire production run.

It is certainly unreasonable to expect more - both reviewers and those who donate the equipment for review have limited resources.  Testing a statistically representative sample is pretty much out of the question.  What you get is a spot check - a bright light on one individual piece of apparatus.
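To make that concrete, here is a minimal sketch (in Python) of what a one-unit spot check actually tells you.  The error distribution below is invented purely for illustration - real spreads depend on the model and the manufacturer.

    import random

    random.seed(1)

    # Pretend a production run of 10,000 meters has gain errors that are roughly
    # normal: mean +0.05% with a 0.20% standard deviation (made-up numbers).
    population = sorted(random.gauss(0.05, 0.20) for _ in range(10_000))

    review_unit = random.choice(population)   # the one unit the reviewer happens to get
    p05 = population[500]                     # ~5th percentile of the run
    p95 = population[9500]                    # ~95th percentile of the run

    print(f"Reviewed unit error  : {review_unit:+.2f} %")
    print(f"Run, 5th-95th pctile : {p05:+.2f} % to {p95:+.2f} %")
    # One draw can land anywhere in that band, and it tells you nothing about how wide the band is.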

Now, that is certainly better than nothing.  Dave himself finds an amazing number of things from looking at only a single unit, including cases where PCBs have been hacked up and messy jumper wires soldered in to make things work.  Knowing this sort of information gives you a view of the manufacturer's QC and standards, so you know whether they deserve your hard-earned money or are just a junk peddler.

But a primary goal of these reviews is to determine the accuracy of the unit being reviewed.  Let's face it - no one wants to spend $1,000 when $55 will do, but no one wants to spend so much as $1 on an inaccurate piece of crap you can't rely on.

This is one of the primary reasons people spend thousands of dollars on premium equipment - having confidence that what it says is true.  You can get a DMM with all the bells and whistles possible for around $200, but some people would rather pay $1,600 for a high-end meter.  It's not the features - it's the confidence.

But consider the possible scenarios.  (I'll use a fictional reviewer named Tom, but it could be anybody.)

If a high-end piece of equipment is reviewed, variation is low, accuracy is high.  It gets stellar reviews for accuracy, with deservedly more nit-picking given to usability and durability.  People buy it, and are happy.  So far, so good.

But what of lower-end equipment?  Variation is higher, but we can't readily check it.  So what do we get?

Scenario 1 is that Tom gets a really good example of the breed.  Worst-case, the manufacturer has cherry-picked a perfect example to skew the results.  This cheap little meter (say, $99) gets stellar reviews.

Based on this, people buy it.  It's a great value, and the odd one that is perhaps not perfect for accuracy is overlooked.  But if the unit-to-unit variation is high, it may not be the best selection for some people, or the units may not perform consistently.  So the review makes the units look better than they actually are.

Scenario 2 is that Tom manages to score a poor unit.  This unit is the one-in-a-million unit with relatively crappy accuracy - representative of the worst-case situation.  Tom pounds on it and declares it to be crap.  Result:  nobody buys it.

The problem here is that this one unit is not really representative of all the units.  It is worse.  But you can't tell that from a single-unit review.  From this, people end up overlooking what may be a great value because of the poor performance of a single unit.  The review makes the units look worse than they actually are, even though the average unit in the series might be quite good.

Neither scenario is particularly desirable, but both are possible.  To correct this you would have to test a number of units from different batches or factories over time, which is not going to happen.
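For a sense of scale, here is a back-of-the-envelope sketch (Python) of how many units a reviewer would have to test before a quality problem is likely to show up at all.  The 8% out-of-spec rate and the 95% confidence target are assumptions I picked just for illustration.

    import math

    out_of_spec_rate = 0.08   # assumed fraction of bad units in the production run
    confidence = 0.95         # desired chance of catching at least one bad unit

    # P(all n tested units look fine) = (1 - rate)^n, so solve (1 - rate)^n <= 1 - confidence.
    n = math.ceil(math.log(1 - confidence) / math.log(1 - out_of_spec_rate))

    print(f"Units to test for {confidence:.0%} confidence: {n}")              # about 36 units
    print(f"Chance a single unit looks fine anyway: {1 - out_of_spec_rate:.0%}")  # 92%

In other words, under those assumed numbers a one-unit review gives the product a clean bill of health about 92% of the time, and it takes a few dozen units before you would reliably catch the problem - which is exactly why nobody does it.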

So what do we do?  Well, just remember that a single-unit review is not the be-all and end-all.  Take a critical look at your application to see how much accuracy you really need.  Then study the DMM specs, hard, to see what you are getting in the worst case.
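As a quick sketch of what "study the specs" looks like in practice, here is the usual ±(% of reading + counts) arithmetic in Python.  The 0.5% + 2 count spec and the 6000-count range are made-up example numbers, not any particular meter's figures.

    def worst_case_error(reading, pct_of_reading, counts, count_size):
        """Worst-case error for a spec of +/-(pct of reading + N counts)."""
        return reading * pct_of_reading / 100.0 + counts * count_size

    # Example: 1.000 V measured on a 6.000 V range (1 count = 0.001 V),
    # with an assumed spec of +/-(0.5% + 2 counts).
    err = worst_case_error(reading=1.000, pct_of_reading=0.5, counts=2, count_size=0.001)
    print(f"True value could be anywhere from {1.000 - err:.3f} V to {1.000 + err:.3f} V")  # +/-0.007 V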

Consider how many you want and need.  Take a good look at your budget - is it better to buy five inexpensive meters and toss one that is not accurate enough, or buy one really good one?  Can they be calibrated, and for how much?  Do you have reference equipment and the time to test the meters out for accuracy?

I personally am a huge fan of inexpensive meters.  I would rather have ten cheap meters on my bench than one expensive one - I often use four or five at a time, and have different ones with specific features for different jobs.  You can get a "good" datalogging meter for around $60 - a tenth to a twentieth of the price of the high-end version.

But what I want is not necessarily what you want.  If you can only carry one meter, and it has to work - because the nearest repair shop is 9 hours away from site and closed anyway, it's -35°C out, you may drop it, and it just HAS to work - well, that is different.

Note that the caveats listed here do not apply to every review, but will apply to many.  In software, for example, the features will not vary, but the ability to install and run on different platforms might vary quite a bit.  (Android has a lot of issues with this.)  Durability may vary; build quality and parts can change at any time.  And there is always the possibility of getting a lemon.

Variation sucks.
