In 2009, Jerome Saltzer revisited his original paper with M. Frans Kaashoek in their book Principles of Computer System Design. The book focused on general computer system design but addressed secure design in depth as well. They reiterated the original principles from Saltzer’s 1975 paper and added a few new ones, bringing the list into the 21st century.
Minimize secrets. We’ve been using modern encryption techniques for the last 50 years. Key to modern encryption is the concept of a, well, key (not to be too redundant about it). Kerckhoffs's principle states that a cryptosystem should remain secure even if everything about it except the key is public knowledge. So even if I know the algorithm you used to protect your credit card number on your computer, I can’t crack it unless I have the key you used to encipher it. If a system doesn’t follow Kerckhoffs's principle, it isn’t encryption, it’s just obfuscation. This helps to minimize the number of secrets you maintain - instead of hiding all the data, you just need to hide the keys. As an extension, the less information you need to protect, the better. Encryption, correctly implemented, does a great job of protecting data, but the safest data is the data you never had to protect in the first place.
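To make Kerckhoffs's principle concrete, here is a toy stream cipher sketch in Python (a hypothetical construction built from the standard library's HMAC, not anything you should use in production). The entire algorithm is visible in the code; the only secret is the key.

```python
import hmac
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the secret key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256)
        out += block.digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # The algorithm is public (you're reading it); only `key` is secret.
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# XOR is its own inverse, so decryption is the same operation.
decrypt = encrypt

card = b"4111 1111 1111 1111"
ct = encrypt(b"my-secret-key", b"nonce-01", card)
assert decrypt(b"my-secret-key", b"nonce-01", ct) == card
assert decrypt(b"wrong-key!!!!", b"nonce-01", ct) != card
```

Knowing every line above doesn't help an attacker; only the one secret, the key, matters. That is the whole point of the principle.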
Adopt sweeping simplifications. Complexity is hard to understand. We human animals are pretty proud of what we've been able to do with our brains over the past few thousand years, but we’re really not as smart as we think we are. The more complex something is, the more likely it is to contain errors. If that something is a computer system, those errors are bugs, and bugs may be exploitable. Not only is simpler code easier to maintain at a given level of quality, it’s also easier to review and easier to fix. So not only are you more likely to get it right the first time, but if you don’t, your mistakes are easier to find and fix.
Least astonishment. Don’t surprise your users! It’s important that the secure thing to do is the easy thing to do. Too many systems are built with the intention that users will do things the secure way because they’re supposed to, not because it’s the default choice. Some users will always do things the easy way, because they see it as most efficient - and let’s face it, we’re all busy, so I can’t blame them. But if the easy way to do something isn’t a secure way to do it, I can blame you. And your boss should too.
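A minimal sketch of "the easy thing is the secure thing," using Python's standard `ssl` module: the no-arguments call gives you certificate and hostname checking, and turning that off requires a loud, greppable opt-out (the `allow_insecure` parameter name is my own, not from the original post).

```python
import ssl

def make_client_context(allow_insecure: bool = False) -> ssl.SSLContext:
    """Build a TLS context. The easy, default call verifies certificates."""
    ctx = ssl.create_default_context()  # cert + hostname checks on by default
    if allow_insecure:
        # Opting out of security must be deliberate and visible in the caller.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

ctx = make_client_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

The lazy caller who types `make_client_context()` gets the secure behavior for free; only someone who explicitly asks for `allow_insecure=True` gets the astonishing one.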
Design for iteration. Ahh, iterative development! I bet you would not have thought of it as a security principle. Unit testing, short release cycles, things like that don’t seem to be security features. But they are! If you find a flaw in a system you’ve released, the longer that flaw is in production, the more vulnerable you are. If you have processes in place that are efficient, effective, and help you deliver quality code quickly, you can patch vulnerabilities quickly too. And that leads to better overall system security.
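One way iteration shows up in code is the regression test you add when you patch a flaw: it documents the fix and keeps the bug from sneaking back in a later release. A small sketch (the `sanitize_username` rules and the traversal bug are hypothetical, just for illustration):

```python
import unittest

def sanitize_username(name: str) -> str:
    """Keep only characters safe for downstream use (hypothetical rule set)."""
    return "".join(ch for ch in name if ch.isalnum() or ch in "-_")

class SanitizeTests(unittest.TestCase):
    def test_strips_path_traversal(self):
        # Regression test pinned to a (hypothetical) reported flaw: once the
        # fix ships, this test keeps the vulnerability from coming back.
        self.assertEqual(sanitize_username("../../etc/passwd"), "etcpasswd")

    def test_allows_normal_names(self):
        self.assertEqual(sanitize_username("alice_42"), "alice_42")
```

Run it with `python -m unittest` on every release. A fast, trusted test suite is exactly what lets you ship a security patch the same day instead of the same quarter.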
Be explicit. Don’t make assumptions about the data you’re dealing with, or about where you are in a protocol or execution process. You need to explicitly validate any and all assumptions you might have. Don’t assume a specific execution context - someone will spot that assumption and take advantage of it. Developers make this kind of mistake all the time, and usually we get away with it. But make it in the wrong place, and it’ll be exploited. Better not to make these kinds of assumptions at all.
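Here is what "validate your assumptions explicitly" can look like in practice - a hypothetical discount function of my own invention that checks every precondition instead of trusting the caller:

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Apply a percentage discount, validating every assumption explicitly."""
    # Don't assume the caller passed sane values - check each precondition.
    if not isinstance(price_cents, int) or isinstance(price_cents, bool):
        raise TypeError("price_cents must be an int")
    if price_cents < 0:
        raise ValueError("price_cents must be non-negative")
    if not isinstance(percent, int) or not 0 <= percent <= 100:
        raise ValueError("percent must be an int between 0 and 100")
    return price_cents * (100 - percent) // 100

assert apply_discount(1000, 10) == 900
```

Without those checks, a `percent` of 150 or a negative price quietly produces a nonsense result that someone, somewhere, will figure out how to abuse. With them, the bad input fails loudly at the boundary.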
We’re getting close to the end! After this, we have two more sources for secure design principles: a book from Richard Smith, and guidance from the IEEE. More on these next time.