The feeding frenzy over Apple's preliminary G5 benchmark results (pdf) has been pretty interesting to watch. The number of web sites slamming the G5 and the Apple-funded SPEC scores (AMD Zone, ExtremeTech, Haxial, Overclockers, The Register, Slashdot) shows how convinced people are that Apple can't possibly have machines that equal or exceed Intel- or AMD-based systems. The strong bias of many of these sites becomes clear when you look at several points that are ignored or glossed over.
To begin with, consider Apple's choice of software for comparing the processors. Apple used the same application to benchmark all of the machines: the GCC 3.3 compiler. I watched Apple's keynote address last night, and after reading the articles this morning I was struck by some obvious facts that had been overlooked or purposely ignored. If you fast-forward 59 minutes into the presentation, you'll hear Jobs announce the new Panther development environment, Xcode, which is based on the GCC 3.3 compiler.
Jobs says: "We're using the new GCC 3.3 compiler, the latest and the greatest... Remember, Jaguar development tools used to be 10 times slower than the gold standard of today, which is Code Warrior. We were 10 times slower, so to build the Finder UI in Xcode took 377 seconds, and to build it in Code Warrior took 223 (seconds). They are still faster than us, but we've reduced it to under 2 to 1."
As you can see from this capture from the video feed, GCC is quite clearly much slower than Code Warrior at compiling on a Macintosh. (And yes, they are comparing them on a dual-processor G4 and not a G5, but odds are pretty good that the performance difference will remain on the G5 as well.) In other words, Apple didn't even use the fastest cross-platform compiler available on the Macintosh for their test. If the performance difference is the same on the G5 as on the G4, Apple's SPEC scores would have been around 60% faster, wiping the floor with the "official" PC benchmarks. Think about that for a moment.
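For what it's worth, the keynote's own compile times work out to a gap of roughly 1.7 to 1 — a bit more than a 60% figure suggests (and bear in mind that compile speed is, at best, a rough proxy for how the two compilers would fare on a SPEC run). A quick check of the arithmetic:

```python
# Compile times for the Finder UI from the keynote (seconds)
xcode_gcc = 377    # Xcode, using GCC 3.3
codewarrior = 223  # Code Warrior

ratio = xcode_gcc / codewarrior
print(f"Code Warrior is about {ratio:.2f}x faster at compiling")  # ~1.69x
print(f"i.e. roughly {100 * (ratio - 1):.0f}% faster")            # ~69%
```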
Why would Apple do this? Well, a couple of points come to mind. First off, odds are pretty good that Apple's secrecy was a big factor: I'm guessing Apple hasn't been sending many prototype G5s out, since they were top secret and probably not readily available to ship to other companies. Secondly, Apple has a pretty consistent testing methodology between platforms: as far as I've seen, they do their best to use applications that are available on both platforms for their speed comparisons. Just imagine how maligned Apple would be if they compared Photoshop on the Mac to Microsoft Paint on the PC.
Apple's tests were designed to be taken as a performance comparison using a well-established, heavily optimized, Intel-friendly program that has been ported to the PowerPC. Don't tell me that the Linux and GNU hackers haven't spent years tweaking GCC to perform better on Intel architectures; I've used Linux long enough to watch it improve dramatically over time. One of the common problems with ported UNIX apps on the Macintosh right now is the lack of PowerPC optimization in the majority of them.
What people are purposefully ignoring is that the tests were application-specific. If you believe that Apple crippled the GCC compiler on the test Intel boxes, then go out and recreate the test with the options turned on that you believe will radically alter the results. Of course, any optimizations and cheats performed by the PC manufacturers are perfectly valid and should be ignored... I'm sure if Apple had built a completely custom compiler from the ground up for the SPEC benchmarks, we would hear how wrong that was too, or if they hadn't publicly detailed their testing methodology. Apple just can't win in this case.
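Recreating that kind of experiment is straightforward in principle. Here's a minimal, hypothetical sketch of the idea — the toy loop below stands in for a real workload (actual SPEC runs use far larger sources and carefully tuned, published flag sets), and flags like `-march` would need to match your own hardware:

```shell
# Toy source file standing in for a benchmark workload (an assumption,
# not anything from Apple's or SPEC's actual test harness).
cat > bench.c <<'EOF'
#include <stdio.h>
int main(void) {
    double s = 0.0;
    for (long i = 1; i <= 20000000; i++) s += 1.0 / (double)i;
    printf("%f\n", s);
    return 0;
}
EOF

# Same compiler, same source, different optimization settings.
gcc -O0 -o bench_O0 bench.c   # optimization off
gcc -O3 -o bench_O3 bench.c   # aggressive optimization

time ./bench_O0
time ./bench_O3
```

The point is simply that the flags chosen can swing the results substantially, which is why arguments about what Apple did or didn't enable matter.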
Then there were the other performance comparisons made at the keynote. Apple did one of their infamous Photoshop bakeoffs, which ran 2.1 times as fast as the dual-Xeon PC. Luxology's (the programmers of LightWave 3D) mysterious new 3D application was demoed, and it ran around 2.3 times as fast on the G5. The Mathematica demo also ran 2.3 times faster. The sound demo (which was an apples-vs-oranges comparison, since they were pitting eMagic against Cubase) was even more devastating.
Most of those articles completely ignored these demos, refusing to even mention the scores or dismissing them out of hand. Yes, Apple hasn't published the details of those demos. But the performance shown was pretty amazing when you consider that they were running rough beta third-party software on a beta operating system on prototype hardware. Finished hardware and software tend to get faster, not slower.
Should Apple's tests be taken with a grain of salt? Of course; they want to put the best spin on the new machines. I'm quite interested to see how well the machines perform in truly independent real-world tests using everyday applications that have been properly optimized for the G5. Until then, we'll just have to wait and see who was right.