First and foremost, when I write about how people test tech gear, I speak from experience. I started in the game in 1994 when, as the author of my first computer book, I got an email from a Macworld editor. For several years thereafter, I wrote for that magazine before moving elsewhere. Nowadays, most of my reviews appear in these columns and on The Tech Night Owl LIVE.
Over the years, I’ve reviewed everything from one of those early digital music players — all very bad until the iPod arrived — to smartphones, audio gear including computer speakers, TV sets, quite a few personal computers (Mac and PC), tablets and printers. As I write this column, I have an iPad Air 2 and an iMac 5K in house for hands-on evaluation.
So I think I have a decent grasp of the process. Since I’ve worked for a number of publications, I’ve seen different approaches. One editor kept egging me on to find fault with products that didn’t have significant faults, just to pad the “Cons” list. Without naming names, that editor has since gone on to blog at a mainstream newspaper, where you’ll see misleading and phony claims about supposed product shortcomings. That’s not a game I will ever play, which is why I stopped working for that person after a few months. It didn’t help my bottom line, but other editors were willing to let me play fair.
Unfortunately, there are too many lazy reviewers out there, and they do a disservice to the reader. Some simply paraphrase the reviewer’s notes supplied by the manufacturer in evaluating a product or service. Sure, those notes are useful as background information, but a reviewer is supposed to serve the interests of the reader, not the company, and should thus follow their own guidance, or their publisher’s, in writing the review.
So, yes, I do read the manuals and the product notes, but I prefer to do my own thing, and I hope you readers accept my approach. What I am trying to do is put myself in the position of the typical user, so I can get a grasp of how well the product or service operates in normal use.
Take that iMac 5K I received from Apple last week. The first thing I did before hooking it up was to back up the data on my work Mac, a late 2009 27-inch iMac, using Bombich’s Carbon Copy Cloner to make a clone backup on an external FireWire 800 drive. I wanted to see what Apple had accomplished in five years, and how it would impact my workflow.
In passing, the iMac 5K, like other recent iMacs, has ditched FireWire in favor of Thunderbolt, so Apple supplied a Thunderbolt to FireWire 800 adapter to allow me to attach my backup drives. I used Yosemite’s Migration Assistant to bring everything over, as explained in a feature article I wrote for last weekend’s issue of our newsletter. In reviewing the iPad Air 2, I set it up with a backup of my wife’s iPad 3. After brief testing, I put the unit into her hands for an extended evaluation.
Now the very laziest reviewers often don’t bother to actually live with a product beyond a brief hands-on. They will post articles touting a comparison with an existing, presumably competing, product, but do little more than list specs and make assumptions about what those specs mean in actual use. That is why smartphones and tablets sporting processors with higher clock rates and more RAM are considered superior to the iPhone and iPad. It doesn’t matter that actual benchmarks show Apple doing as well or better, with a snappier interface. It’s all about the specs.
Sometimes product comparisons don’t even make much sense. Take an article I read this week that pitted an iPhone 6, featuring a 4.7-inch display, against the Google Nexus 6, which has a 6-inch display.
Right away, the article has problems. If the reviewer was interested in comparing mostly similar gear, why not choose the Google Nexus 5, with its 5-inch display? Or perhaps the iPhone 6 Plus, Apple’s first foray into the world of phablets? That would have made far more sense.
Now it does appear the reviewer performed genuine hands-on comparisons and came up with conclusions that seem reasonable, whether you agree with them or not, while still ignoring the fundamental disconnect in the article. It would be roughly the equivalent of comparing a Microsoft Surface Pro 3 with a MacBook Pro with Retina display. Microsoft actually compares its tablet, essentially a convertible PC notebook, with the MacBook Air. Both are slim, with displays that are close in size: the MacBook Air comes with an 11-inch or 13-inch display, and the Surface Pro 3 has a 12-inch display. See? Close.
But the physical keyboard on a Surface Pro 3 is an attachable accessory, while the MacBook Air is a standard clamshell notebook with the keyboard built in. So this sort of comparison, which Microsoft touts heavily with costly advertising, is really not very accurate.
The long and short of it is that I think I have valid reasons to claim that my approach to reviews, admittedly imperfect, makes far more sense than the approaches taken by some other tech writers.