Spectre and Meltdown Disasters
In this post, a cybersecurity expert discusses the impact that the Spectre and Meltdown bugs can have on your machine, and what he hopes the industry will learn.
The Spectre and Meltdown bugs are disasters. I'm usually all for the rapid disclosure of security flaws to force vendors to patch quickly. For these two, though, I really wish we'd had a few more years.
Most of the time, you can listen to the various pundits out there and forge a middle ground that makes sense. On one side, you'll have those who insist the sky is falling (or Skylake is falling? Well, its performance sure is, at least). On the other, those who offer more comforting words. Usually, things aren't as bad as the doomsayers claim, nor as good as the fluffy-cloud types assure us, though they tend to land closer to the claims of the fluffy folks.
Not so this time.
Both of these bugs take advantage of two things: speculative and out-of-order execution, and kernel memory mapped (read-only) into user space. Both were designed into processors as significant performance-enhancing features, and they allowed processor performance to keep creeping forward in line with Moore's Law. Removing them will set processor performance back years.
Basically, these bugs allow attackers to build very specific sections of code that the processor then executes speculatively or out of order. These performance enhancements exist because, most of the time, we have extra processor cycles we can take advantage of while we're blocked waiting on I/O or something similar. If we use those cycles to execute instructions we might need later, and we end up actually needing them, we see a significant speedup. If we execute those instructions and don't need them after all, our performance is just as bad as it would be without speculative or out-of-order execution. So, really, this enhancement gives us significant performance increases with no performance downside.
Likewise, mapping kernel pages into the user address space of executing programs allows us to access that data when we're executing system calls, for example, or interprocess communication, without incurring a significant performance hit. Otherwise, we'd need to copy that information into user memory from somewhere, and we'd need to write it back somewhere when we're finished with it. Now, you need the appropriate permissions to access this kernel memory, but nevertheless, it's still mapped into user space in running processes. That's right, in every running process.
So now, we execute instructions out of order and speculatively, and we have kernel information mapped into user space. Finally, and this is key, processors use caches. I know, big shock. When a branch turns out not to be taken, the results of the code executed speculatively down that path are just not used. Not deleted, but not used. So that information is still around, still sitting in the cache. And cache state can be measured.
When you put these together, you're able to extract protected information via instructions that aren't officially executed (but which are executed speculatively, or out-of-order) and the data cached from those not-officially-executed lines of code.
These are pretty fundamental optimizations, and figuring out how to most effectively work around their flaws will take some time. You can bet that the first patches won't be very good, but they'll improve as we come up with better ways to implement them. And new chip designs won't hurt either, but that's a few years out for most of us.
Opinions expressed by DZone contributors are their own.