I’m very surprised that so few members of the tech media recognize the serious flaws in the testing methodology of Consumer Reports. Yes, I realize that the magazine appears incorruptible, because they buy all the products they test, do not accept advertising, and are run by a non-profit organization.
That, however, doesn’t mean that they have a clue about many of the consumer electronics products they test. Just recently, the iPhone 4 failed to receive a “buy” recommendation because of that alleged “death grip” problem, even though other smartphones suffer from the same limitations. And this decision came despite the fact that the iPhone 4 otherwise rated ahead of the pack.
Now any responsible testing source would have examined the well-known real world methods to duplicate these signal loss symptoms. But CR devised a wacky laboratory test that only caught the iPhone 4 in its net.
Pathetic? You bet. But here we go again, as history repeats itself.
So the news came forth this week that Consumer Reports had, as usual, given Macs high marks in the most recent ratings of personal computers. This report comes on the heels of the rave reviews granted to Apple’s revitalized MacBook Air by most of the traditional tech media.
So you learned that the 11.6-inch MacBook Air rated tops among the tested models with similar screen sizes. When it comes to 13-inch models, the larger Air got the same rating. The more traditional Toshiba Portégé R705-P35 received high ratings as a budget-priced model. The regular MacBook, costing $1,000, or $220 above the price of the Toshiba CR reviewed, didn’t qualify as “budget,” although it ties with the Air as the cheapest Mac notebook.
Of course, if you optioned a 13-inch Toshiba to match a MacBook, the price disparity might be far less, but CR fails to grasp such not-so-fine distinctions.
I could go on. In the standard-price categories, the 15-inch and 17-inch MacBook Pros led the pack, as did the Mac mini and 27-inch iMac.
CR’s overall scores are irrelevant, and I won’t repeat them here. The real problem is that the magazine’s testers haven’t a clue about the difference between a Mac and a Windows PC. Or maybe they do have a clue, but prefer to dumb down their tech reviews to the point where they are utterly useless.
Indeed, their choice of models to test is also inscrutable. A PC box assembler may have loads of models that fit a particular category, with plenty of checkboxes with which to customize, but CR clearly opts for a simple cross-section. With Macs, there are only a handful of models, so they test them all, with the exception of the Mac Pro, which is rarely covered.
The vast majority of CR’s readership is probably not interested in a high-end computer workstation anyway.
I suppose it’s possible that CR believes most of their readership has already made the decision whether to embrace the Mac or the PC, so the magazine’s tech editors don’t choose to address the subject. But it’s also true that a large portion of Macs are sold to Windows switchers. The total at Apple’s retail outlets is usually said to be over 50%.
If CR is assuming that PC platform decisions have already been made, they are dead wrong. So they should be making at least a token effort to define the differences between Mac OS X and Windows. They are certainly capable of staging usability studies to see which OS offers a superior working environment, but they don’t.
They obviously have the budget to buy dozens of computers, but they don’t seem motivated to seed a few dozen readers who are computer novices with both platforms, offer minimal instruction, or maybe a few “Dummies” books, and let them adapt. Or maybe CR would prefer to have a canned computer-based modeling system perform the task. Certainly, they couldn’t figure out a way to correctly test the iPhone 4 and other smartphones to see if there were antenna deficiencies or defects, so I suppose they wouldn’t know how to judge one computer operating system against another.
Then again, they also declared the iPad inferior when it came to gaming performance compared to the cheapest PC notebook with integrated graphics. So maybe I’m better off just calling them incompetent.
For those who have wondered whether I have contacted CR to discuss their test methodology, let me say that I have tried to get them on The Tech Night Owl LIVE, and they haven’t graced me with a response. Maybe they know who I am, and my approach to the subject, so they’d rather not try to defend the indefensible.
Or maybe they would prefer a mainstream environment where they will only face softball questions. I don’t presume to have the final answer.
I continue to be concerned, however, about why most of the rest of the tech media continues to give CR a pass. It’s clear to me that their test methods are extremely faulty, and they need to be forced to account for their shortcomings.