
Performance Fundamentals: The Megahertz Myth

Clock rate isn't the most important factor in determining CPU performance. Learn about the megahertz myth: its origins, what it is, and what really matters.


See Gartner’s latest research on the application performance monitoring landscape and how APM suites are becoming more and more critical to the business, brought to you in partnership with AppDynamics.

I recently had a chat with some co-workers about processors, and naturally clock speed came up. We were discussing the Apple II, IBM PCs, and PowerPC CPUs. "Have you heard of the megahertz myth?" I asked. Nos all around. In my experience, this interesting debate flies under the radar, so let's explore the megahertz myth and its history.

Quite simply, the megahertz myth is the fallacy that a higher clock speed translates to better performance. In reality, the picture isn't that simple. When assessing CPU performance, other factors can outweigh clock rate: the instruction set, pipeline depth, and how much work the chip completes per cycle all matter. That said, don't throw clock speed out the window entirely; it remains a useful metric for comparing CPUs from the same family.

The origins of the myth trace back to the competing Apple II and IBM PC. The Apple II's MOS 6502 ran at roughly 1 MHz yet held its own against the IBM PC's Intel 8088 at 4.77 MHz, because the 6502 accomplished more per clock cycle. The debate resurfaced when Apple's PowerPC-based Macs competed with x86 chips: the x86 architecture's longer pipelines helped those CPUs reach higher frequencies, yet the lower-clocked PowerPC parts often delivered comparable performance.

To the uninformed consumer, clock speed is the ultimate metric for comparing CPUs. Yet various factors mean that, say, a 2 GHz CPU can outperform a 2.6 GHz processor. Microarchitecture plays a key role, as do pipeline design and even how a program is written. Clock rate alone is therefore not a reliable indicator of potential performance; benchmarks provide much better insight when comparing CPUs.
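The intuition above can be captured with a back-of-the-envelope formula: sustained throughput is roughly instructions per cycle (IPC) times clock rate. Here's a minimal sketch with illustrative, made-up IPC figures (not measurements of any real chip):

```python
# Rough throughput estimate: instructions per second = IPC * clock rate.
# IPC values here are illustrative assumptions, not measured numbers.

def instructions_per_second(ipc: float, clock_hz: float) -> float:
    """Approximate sustained instruction throughput."""
    return ipc * clock_hz

# A hypothetical 2 GHz chip that retires more instructions per cycle...
chip_a = instructions_per_second(ipc=4.0, clock_hz=2.0e9)   # 8.0e9 instr/s
# ...beats a hypothetical 2.6 GHz chip with a deep, stall-prone pipeline.
chip_b = instructions_per_second(ipc=2.0, clock_hz=2.6e9)   # 5.2e9 instr/s

print(chip_a > chip_b)  # True: the lower-clocked CPU wins on throughput
```

The point isn't the specific numbers; it's that clock rate is only one factor in the product, so comparing frequencies across different microarchitectures tells you very little.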

Interestingly, GPUs have comparable myths, notably the video memory myth: a 4 GB card is not necessarily better than a 2 GB card. While double the memory sounds awesome, other aspects such as memory bandwidth matter just as much. A 4 GB GPU equipped with slower GDDR3 memory can perform worse than a 2 GB GPU with GDDR5.
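The bandwidth gap is easy to quantify: peak memory bandwidth is memory clock times transfers per clock times bus width. GDDR3 transfers data twice per clock, while GDDR5 is effectively quad-pumped. The clock and bus-width figures below are illustrative assumptions, not specs of any particular card:

```python
# Peak memory bandwidth = memory clock * transfers per clock * bus width (bytes).
# Clock speeds and bus width are illustrative, not taken from a specific GPU.

def peak_bandwidth_gbs(clock_ghz: float, transfers_per_clock: int, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return clock_ghz * transfers_per_clock * (bus_bits / 8)

# Hypothetical 4 GB card with GDDR3 (double data rate):
gddr3 = peak_bandwidth_gbs(clock_ghz=1.0, transfers_per_clock=2, bus_bits=256)   # 64.0 GB/s
# Hypothetical 2 GB card with GDDR5 (effectively quad data rate):
gddr5 = peak_bandwidth_gbs(clock_ghz=1.25, transfers_per_clock=4, bus_bits=256)  # 160.0 GB/s

print(gddr5 > gddr3)  # the smaller card moves data far faster
```

So the 2 GB card can feed its shader cores more than twice as fast, which matters more than raw capacity for most workloads.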

Thankfully, there are loads of awesome sites like cpubenchmark.net and anandtech.com that offer detailed benchmarks so you don't fall prey to the megahertz myth.

Stay tuned for my next performance fundamentals post, and leave a comment or hit me up on Twitter if there's a topic you'd like me to cover!


