Predictably, when Samsung first announced the flagship Galaxy S5 smartphone earlier this year, there were the inevitable comparisons with the iPhone 5s. This was billed as the ultimate battle to the death, or whatever. So how did Apple stack up against the latest contender?
Well, on the surface, probably not so well based on specs alone. The Samsung started with a 5.1-inch display, 1.1 inches larger than the iPhone's, and added a processor with a higher clock speed and more cores, plus a camera with more megapixels. Onboard RAM was also higher, but I won't bore you with the raw details.
You see, based on surface specs alone, the Galaxy S5 must be far superior to the 5s. But was that true?
It turns out, of course, that raw specs don't tell the whole story. It's a well-known fact that iOS is more efficient than Android, even though smartphones running the latter may deliver roughly comparable performance. But that takes brute force: more powerful hardware to compensate. One of the hallmarks of the next Android OS, known as "L," is improved efficiency, thus promising superior performance. We'll see.
Besides, Apple builds its own customized chips, still based on the ARM architecture, so specs don't really count in the real world. It's all about how well the products perform when running regular apps.
So Apple still manages to stay at or near the top of the heap in real-world comparisons. But since the specs seem to tell a different tale, and favor the competition, you'll keep seeing these match-ups presented without context. When the expected iPhone 6 arrives, perhaps in less than three months, you'll see how it stacks up against the Galaxy S5 and whatever newer hardware Samsung and other handset makers devise by then. But until actual benchmarks and usability tests are run, none of it will mean much beyond bragging rights.
Of course this is nothing new. Consider all those Mac versus PC comparisons of old. These days, it's more about price than specs, because Macs and Windows PCs use mostly the same hardware, and the software bundle doesn't get a lot of play.
But before Apple switched to Intel in 2006, you had all those inevitable PowerPC versus Intel comparisons. Intel almost always boasted higher clock speeds, particularly on those hot-running Pentiums of old. So could a 3GHz processor really be slower than a PowerPC with a rating one third that high?
Apple said yes, and would produce benchmarks, usually with multiplatform apps, such as Adobe Photoshop, to demonstrate how the PowerPC chip was really superior. You can bet that Windows fans created the myth that Apple cooked the books and that the tests were fake.
While you can argue that Apple deliberately chose tests that would exploit the capabilities of its hardware, there was nothing strange about the test methodology. At the time, Apple revealed all to the media, along with canned test files showing how specific Photoshop filters ran. Indeed, when I ran the tests under the fairly normal circumstances Apple proposed at the time, I was able to essentially match their results.
When I published those tests, you can bet that PC fans piled the criticisms upon me. But I never felt the need to debate whether these seemingly normal tests were somehow suspect. They didn't seem to be, but it was also true that Intel continued to overcome the inefficiencies of its chips, while the PowerPC stagnated. That explains why Steve Jobs decided Apple must go Intel, and the transition was smooth. It was even smoother than the original migration from the old Motorola 680x0 chip to the PowerPC, and I was around in those days as well.
Forgetting the numbers and how they were calculated, the main issue was that the other side invariably accused Apple of cheating. They wouldn't admit that Microsoft ever cheated (and it did, as history demonstrates), only that Apple could never possibly claim equality or superiority. The Mac was overpriced and underpowered, and thus any claim to the contrary had to be a lie. Facts didn't matter!
Of course, benchmarks are common in many industries. For cars, you want to know that your vehicle can go from zero to 60 miles per hour (or the metric equivalent) faster than your neighbor's. It doesn't matter that the published tests are done by professional drivers who may run the test over and over again to coax the vehicle into accelerating a little faster. At the end of the day, if the pickup is good, particularly when climbing a mountain road at normal driving speeds, the numbers are beside the point. The car that accelerates to 60 in eight seconds is only a whisker slower than one that does it in seven, or even six. The difference lasts about as long as it takes to say one-one-thousand once or twice, so does it really matter?
That's one reason why I opted for a standard engine rather than a less-reliable turbo the last time I bought a car. The difference in performance wasn't enough to matter, and the somewhat better gas mileage of the less powerful engine was a real plus.
Yes, specs can matter if there’s a drastic, clearly defined real world difference. Otherwise, it’s mostly idle conversation.