I recently had a chat with some co-workers about processors, and naturally clock speed came up. We were discussing Apple’s PowerPC Macs versus Intel-based PCs. “Have you heard of the megahertz myth?” I asked. Nos all around. In my experience, this interesting debate flies under the radar. So let’s explore the megahertz myth and its history.
Quite simply, the megahertz myth is the fallacy that a higher clock speed translates to better performance. In reality, the picture isn’t that simple. When assessing CPU performance, other factors can outweigh clock rate; instruction set and pipeline depth, for instance, matter a great deal. That said, don’t throw clock speed out the window entirely: it remains a useful metric for comparing CPUs from the same family.
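A rough way to see why clock speed alone misleads is the classic performance equation: execution time = instruction count × cycles per instruction ÷ clock rate. Here’s a minimal sketch; the two CPUs and their figures are made up purely for illustration:

```python
def execution_time(instructions, cpi, clock_hz):
    """Classic CPU performance equation:
    time = instruction count * cycles per instruction / clock rate."""
    return instructions * cpi / clock_hz

# Two hypothetical CPUs running the same 1-billion-instruction program.
# The higher-clocked chip needs more cycles per instruction, so it loses.
low_clock = execution_time(1e9, cpi=1.0, clock_hz=2.0e9)    # 0.50 s
high_clock = execution_time(1e9, cpi=1.8, clock_hz=2.6e9)   # ~0.69 s

print(f"2.0 GHz, CPI 1.0: {low_clock:.2f} s")
print(f"2.6 GHz, CPI 1.8: {high_clock:.2f} s")
```

Clock rate is only one of three terms, which is exactly why a lower-clocked chip that does more work per cycle can come out ahead.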
The myth took hold during the rivalry between Apple’s PowerPC-based Macs and Intel’s x86 PCs. The Pentium 4’s long pipeline let it reach much higher clock frequencies, yet the PowerPC G4, doing more work per cycle, could keep pace at a far lower clock speed. Apple popularized the term “megahertz myth” to explain why its lower-clocked machines weren’t actually slower.
To the uninformed consumer, clock speed is the ultimate metric when comparing CPUs. Yet various factors mean that a 2 GHz CPU can outperform a 2.6 GHz one. Microarchitecture plays a key role, as do pipeline depth and even program design. Clock rate, then, is not a reliable indicator of potential performance; benchmarks provide far better insight when comparing CPUs.
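Since raw specs mislead, the practical approach is to time a representative workload on each machine. A minimal micro-benchmark sketch; the workload here is an arbitrary stand-in (real CPU comparisons use full benchmark suites):

```python
import timeit

def workload():
    # Arbitrary stand-in for a real task: sum of a million squares.
    return sum(i * i for i in range(1_000_000))

# Run the workload several times and keep the best time, which
# filters out interference from other processes on the machine.
best = min(timeit.repeat(workload, number=1, repeat=5))
print(f"best of 5 runs: {best:.3f} s")
```

Running the same script on two machines gives a far more honest comparison than reading their clock speeds off a spec sheet.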
Interestingly, GPUs have comparable myths, notably a video-memory myth. With VRAM, 4 GB is not necessarily better than 2 GB. While double the memory sounds great, other aspects such as bandwidth matter: a 4 GB GPU equipped with GDDR3 memory will generally perform worse than a 2 GB GPU with GDDR5.
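That’s because memory bandwidth comes from the memory clock, the bus width, and how many transfers happen per clock, not from capacity. A sketch with illustrative numbers; the clocks and bus widths below are hypothetical, not the specs of any particular card:

```python
def bandwidth_gb_s(mem_clock_mhz, bus_width_bits, transfers_per_clock):
    """Peak memory bandwidth in GB/s:
    clock rate * transfers per clock * bus width in bytes."""
    return mem_clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

# Hypothetical cards on the same 128-bit bus: GDDR3 moves 2 transfers
# per clock, while GDDR5 moves 4 and typically clocks higher too.
gddr3_4gb = bandwidth_gb_s(1000, 128, 2)   # 32.0 GB/s
gddr5_2gb = bandwidth_gb_s(1250, 128, 4)   # 80.0 GB/s

print(f"4 GB GDDR3 card: {gddr3_4gb:.1f} GB/s")
print(f"2 GB GDDR5 card: {gddr5_2gb:.1f} GB/s")
```

The 2 GB card can feed its GPU data more than twice as fast, so the extra capacity on the GDDR3 card rarely saves it.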