Secure Design Principles: First Steps
Overview of the classic ten principles of infosec from the prescient Saltzer & Schroeder (1975) paper.
It’s always nice to have a group of concepts you can follow to help you do, well, anything. Don’t cross the street on red. Beer before liquor, never sicker (at least I got something out of six years of undergraduate work). Pay your taxes on time. All very helpful guides to lifelong prosperity and health, and to help you avoid things like prison. Cybersecurity also has a group of principles engineers should follow. Some of these you’ve probably heard of - the principle of Least Privilege, for example. Others, like the principle of Open Design, you’ve probably heard of, but haven’t really thought of as a security principle. Still others you’ve probably never heard of at all.
Scientists and engineers have been trying to collect and categorize these principles for the past 40 years. I know, it’s a really long time, and still most developers haven’t heard of many of the principles that have been developed. There are a variety of reasons for this - churn in software development is relatively high, and new engineers and developers are constantly entering the field. Also, only recently have computer science and engineering curricula started to address security at all. And, let’s face it, while developers do get fired for not delivering on time, if they ship with security issues, they’re usually on to the next position before those problems are discovered and exploited.
So, we have this large group of principles, but nobody really knows about them. Let’s change that.
Jerome Saltzer and Michael Schroeder wrote the first paper in the area, the one that established the idea of cyber-security principles (before cyber-security was even a thing). The Protection of Information in Computer Systems was a visionary paper, discussing discretionary v. mandatory access control, virtual machine management, and the attractiveness of sensitive information - all in 1975. That was a very interesting time in computer engineering as well, in that many of the things we completely take for granted today weren’t even ideas then. The layering we’re accustomed to in almost all systems we design applications for wasn’t even on the table in 1975, so papers from that era have a really odd character of addressing low level system issues (like memory segmentation) as application level problems. Really glad we don’t have to do that kind of thing, personally.
Economy of Mechanism. The first principle, economy of mechanism, addresses overall system complexity. A system practicing economy of mechanism provides the functions the system must provide, but no more, and keeps those functions as simple as possible while maintaining product relevance. Essentially, this principle pays homage to the old engineering maxim to keep it stupid simple (or simple, stupid, if you’re having a particularly bad day).
We as app developers break this maxim every day, whether by keeping dead code in the system, creating over-complicated designs, or maintaining backward compatibility nobody needs. Attackers actively search for these kinds of flaws - web app reconnaissance regularly includes looking for unused pages or old authentication hooks. Over-commented code is a big get too, especially comments that describe in some detail how a system is supposed to work. And if those comments describe security-specific systems, so much the better (for the attacker, at least). Old software is particularly fruitful to attack as well, and regularly lurks in corners of operating systems or large software applications. Commenting code is important, and you don’t want to arbitrarily remove old code or drop backward compatibility you really need. Nevertheless, you need to be disciplined about what you actually release to customers. Keep that old code, but keep it unreleased. Maintain backward compatibility, but don’t let entropy set in.
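As a rough sketch of what “keep it, but don’t release it” can look like (the flag and handler names here are hypothetical), legacy functionality can stay in the repository while being off by default in anything that actually ships:

```python
import os

# Hypothetical feature flag: the legacy login path stays in source control,
# but it is only wired up when someone deliberately turns it on. In a normal
# release, the old code simply isn't reachable.
ENABLE_LEGACY_AUTH = os.environ.get("ENABLE_LEGACY_AUTH") == "1"

routes = {"/login": "modern_login_handler"}   # what customers actually get

if ENABLE_LEGACY_AUTH:                        # off by default in every release
    routes["/legacy_login"] = "old_md5_login_handler"

print(sorted(routes))                         # usually just ['/login']
```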
Fail-safe Defaults. Systems should fail to a safe state as a general principle. Applications generally don’t fail monolithically, but if they do, they should fail to a secure but functional configuration. This becomes much more important as the application becomes higher-stakes. Control system software is particularly important; in these cases, the systems need to fail to a secure state while keeping the managed physical systems in a safe state. If an application fails to a state where it’ll be vulnerable - where it will, say, allow unauthenticated access or load unvalidated configuration information - attackers will do their best to force that failure and make the application susceptible to compromise.
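Here’s a minimal sketch of the idea - the configuration keys and file name are invented for illustration - where a failure to load or parse the configuration lands the application on deny-by-default settings rather than a wide-open state:

```python
import json

# Hypothetical fail-safe defaults: the most restrictive settings we can run with.
SAFE_DEFAULTS = {
    "allow_anonymous": False,   # never fall back to unauthenticated access
    "debug_endpoints": False,   # keep diagnostic endpoints off
    "allowed_origins": [],      # no cross-origin callers unless configured
}

def load_config(path="app_config.json"):
    try:
        with open(path) as f:
            cfg = json.load(f)
    except (OSError, json.JSONDecodeError):
        # The failure path is the secure configuration, not a permissive one.
        return dict(SAFE_DEFAULTS)
    # Missing or unknown keys also resolve to their safe defaults.
    return {key: cfg.get(key, default) for key, default in SAFE_DEFAULTS.items()}

print(load_config("missing_file.json"))   # falls back to the safe defaults
```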
Complete Mediation. Complete mediation implies multi-layer authentication checking: at each object or service access, the system checks user-supplied credentials to ensure authorized access. Truly, completely mediated systems are very rare. Many in-house systems will only authenticate a user once, and then use application-specific credentials for database access. This is wrong in so many ways, not the least of which is that it completely violates this particular principle.
You don’t need to mediate every interaction, but you should at least authenticate the actual user, not the application, at different layers. Does your app access a Thrift endpoint after the user has authenticated? Well, authenticate the user again at that Thrift endpoint. Does that Thrift endpoint then access a MySQL instance? Authenticate the user there too. And make sure you log the access as well (more on this later).
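A toy sketch of what that looks like - the HMAC token scheme and the layer names are invented for illustration, not a recommendation for a production credential system - where every layer re-verifies the actual user and logs the access rather than trusting whoever called it:

```python
import hashlib
import hmac

SIGNING_KEY = b"replace-with-a-real-secret"   # hypothetical shared signing key

def sign(user_id: str) -> str:
    return hmac.new(SIGNING_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def verify(user_id: str, token: str) -> bool:
    return hmac.compare_digest(sign(user_id), token)

def web_layer(user_id, token):
    if not verify(user_id, token):            # check #1: the web tier
        raise PermissionError("web layer: bad credentials")
    return service_layer(user_id, token)

def service_layer(user_id, token):
    if not verify(user_id, token):            # check #2: the service (e.g. Thrift) tier
        raise PermissionError("service layer: bad credentials")
    return data_layer(user_id, token)

def data_layer(user_id, token):
    if not verify(user_id, token):            # check #3: the data tier
        raise PermissionError("data layer: bad credentials")
    print(f"audit: {user_id} read a record")  # and log the access
    return {"owner": user_id, "data": "..."}

web_layer("alice", sign("alice"))             # passes all three checks
```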
Open Design. Modern cryptography made the transition from being, basically, obfuscation to encryption with specific, mathematically proven properties with Kerckhoffs’s principle. Kerckhoffs’s principle holds that strong encryption should depend on the selection of a strong key rather than on a closed algorithm. In this model, the actual algorithm can be completely open and known by all, but as long as the key is protected, and strong, any encrypted information will be safe. Likewise, the design of an application should be able to be open to all - the security of the system shouldn’t be the result of obfuscation, but rather the result of good design and strong security controls.
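To make the contrast concrete, here’s a short example using the third-party Python cryptography package: the algorithm Fernet uses (AES plus an HMAC) is completely public and documented, and all of the security rests in the randomly generated key.

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the only secret: a strong random key
f = Fernet(key)                      # the algorithm itself is open and well studied

token = f.encrypt(b"account balance: 42")
print(f.decrypt(token))              # b'account balance: 42'
```

Nobody needs to hide how Fernet works; an attacker who knows everything about the algorithm but doesn’t have the key is still stuck.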
Separation of Privilege. Most systems are designed with a hierarchical distribution of authority. This works well for administrative use and is easy to manage. It’s not that secure, though. Not only do most systems use hierarchical controls, most applications that run on those systems usually use a single class of authority and privilege. Essentially, in systems like this, if one part of the system is successfully attacked, the entire system is compromised. A better approach, especially when used in tandem with something like complete mediation, is to separate the authority needed to accomplish system actions between some number of actors. That way, if one component is compromised, the system can still operate. The military has used this kind of approach to secure access to weapons for decades with great success.
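A bare-bones sketch of the idea (the roles and the approval format are hypothetical): a sensitive operation only runs after two distinct roles have signed off, so compromising any single account isn’t enough.

```python
# Hypothetical two-person rule: both roles must approve before the action runs.
REQUIRED_ROLES = {"operator", "security_officer"}

def run_sensitive_action(action, approvals):
    # approvals look like {"user": "alice", "role": "operator", "valid": True}
    approving_roles = {a["role"] for a in approvals if a.get("valid")}
    if not REQUIRED_ROLES <= approving_roles:
        raise PermissionError("needs sign-off from both an operator and a security officer")
    return action()

run_sensitive_action(
    lambda: print("rotating master keys"),
    [{"user": "alice", "role": "operator", "valid": True},
     {"user": "bob", "role": "security_officer", "valid": True}],
)
```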
Least Privilege. Of all the security principles Saltzer and Schroeder outlined, this is arguably the most well known. The principle of least privilege, when followed, limits the authority of system actors and components to only that needed for them to operate - no more, no less. This way, if something’s compromised, the overall damage to the system is limited.
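For example (the service names and the grant table are made up for illustration), each component gets its own credentials scoped to exactly what it needs, so a compromised reporting service can read orders but never modify or delete them:

```python
# Hypothetical per-service grants: each actor gets only the access it needs.
SERVICE_GRANTS = {
    "reporting": {"orders": {"SELECT"}},
    "checkout":  {"orders": {"SELECT", "INSERT"}},
    "admin_cli": {"orders": {"SELECT", "INSERT", "UPDATE", "DELETE"}},
}

def authorize(service: str, table: str, operation: str) -> None:
    if operation not in SERVICE_GRANTS.get(service, {}).get(table, set()):
        raise PermissionError(f"{service} may not {operation} {table}")

authorize("reporting", "orders", "SELECT")    # allowed
# authorize("reporting", "orders", "DELETE")  # raises PermissionError
```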
Least Common Mechanism. A few of Saltzer and Schroeder’s principles really fall more in the domain of system design and implementation than security controls. Which is appropriate - after all, a system that is poorly designed and implemented from a cyber-security perspective isn’t securable no matter how many controls engineers bolt on. The idea behind this principle is to, again, limit the damage to the system when part of that system is compromised. Least common mechanism, in some ways, embodies the tension between system implementation and security concerns. After all, a system with the smallest possible common mechanism has no reuse. That said, it’s still important and worth keeping in mind, especially at scale. Although virtualization can be more of a security threat than a security control, new techniques (like container-based virtualization) provide new opportunities to limit how widely any one mechanism is shared across a system.
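As a toy illustration of the trade-off (the tenant names are invented), giving each tenant its own cache instance costs more than one big shared cache, but it limits the blast radius if any single instance is poisoned or compromised:

```python
# Hypothetical per-tenant caches instead of one mechanism shared by everyone.
class Cache:
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

# One isolated cache per tenant: compromising "acme" tells you nothing about "globex".
tenant_caches = {tenant: Cache() for tenant in ("acme", "globex", "initech")}
tenant_caches["acme"].put("session:42", {"user": "alice"})
print(tenant_caches["globex"].get("session:42"))   # None - nothing leaks across tenants
```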
Psychological Acceptability. Basically, a psychologically acceptable application is an application where users are comfortable doing the right thing. Any application that is overly complex or difficult to use leads to users finding and using shortcuts, and those shortcuts are frequently things you don’t want them doing. In any system anywhere, you want to make it as easy as possible for your users to do the right thing. This especially applies to security controls - we’ve all seen users share passwords between systems, or worse, with each other. We can’t eliminate the need to authenticate, for example, but we should make it as easy to authenticate correctly as possible.
Work Factor. In their original paper, Saltzer and Schroeder included two additional principles that they didn’t consider to be core security principles. Work factor is one of them, and compromise recording is the other. Work factor is a common cryptographic measure used to determine the strength of a given cipher. It doesn’t map directly to general cyber-security, but the overall concept does apply. Many security controls try to make things more difficult for attackers, more expensive, or cause them to spend more time in positions where they’re vulnerable to detection. Dynamic and Moving Target Defense is based on this core idea. If systems can be designed to increase the work required for compromise, they are arguably more secure.
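Password hashing is probably the most familiar place work factor shows up in application code. In the sketch below (parameter values are illustrative, using the standard library’s PBKDF2), the iteration count is literally a tunable work factor: raising it makes every guess an attacker tries proportionally more expensive.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    # The iteration count is the work factor; tune it up as hardware gets faster.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def check_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iterations, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, iterations, digest))  # True
print(check_password("password123", salt, iterations, digest))                   # False
```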
Compromise Recording. Although not a core principle, compromise recording - logging and auditing - is more important today than ever. Most cyber-security engineers today don’t believe that we can lock systems down so completely that they can’t be compromised. As a result, we need to make sure we collect enough information to enable robust forensics for later analysis and system recovery.
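Something as simple as a structured audit log goes a long way here. A minimal sketch (the field names and the log destination are arbitrary choices) that records who did what, to which resource, and whether it succeeded:

```python
import logging

# Hypothetical audit logger: enough context to reconstruct events after the fact.
audit = logging.getLogger("audit")
handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def record_access(user: str, action: str, resource: str, success: bool) -> None:
    audit.info("user=%s action=%s resource=%s success=%s", user, action, resource, success)

record_access("alice", "read", "/accounts/42", True)
record_access("mallory", "delete", "/accounts/42", False)   # failed attempts matter most
```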
That’s it! Those are the original 10 security principles Saltzer & Schroeder published back in 1975. These principles have been carried forward through the years in academic citations and used as inspiration for more modern work. Nevertheless, this was the first foray into cyber-security principles, and the first time cyber-security was treated as an important and manageable attribute of modern computer systems.