Data-Centric Microservices Security
Tokenization, applied together with encryption, can give us a good degree of confidence in data-at-rest security.
Segregate Sensitive Data
Security-sensitive apps, such as those processing financial, personal, or healthcare information, introduce information security risks whether they are monolithic or distributed: risks that sensitive data might be compromised. Each line of code or infrastructure component can introduce a security vulnerability.
While we cannot reduce the amount of our code or remove part of the infrastructure without affecting system performance, stability, or functionality, we can reduce the number of software components that deal with sensitive data. Microservices architecture complements this approach well. During the design phase, pay attention to classifying your data. Segregate sensitive information from the rest and consider whether you can reduce the number of microservices that process it. For example, you can create a dedicated microservice for processing and storing personally identifiable information (PII) so that only this component ever touches real PII values. Since the microservices approach gives a high level of isolation, you isolate and reduce the risk of data compromise.
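As a minimal sketch of this idea (the service and class names here are hypothetical, and an in-memory dictionary stands in for each service's private database), other microservices can hold only an opaque reference to PII, while a single dedicated service ever sees the real values:

```python
import uuid

# Hypothetical sketch: only PiiService ever holds real PII values.
class PiiService:
    def __init__(self):
        self._store = {}  # in-memory stand-in for the service's private database

    def store(self, pii: dict) -> str:
        """Persist PII and hand back an opaque reference."""
        ref = str(uuid.uuid4())
        self._store[ref] = pii
        return ref

    def resolve(self, ref: str) -> dict:
        return self._store[ref]

# Every other microservice keeps only the reference, never the raw PII.
class OrderService:
    def __init__(self, pii_service: PiiService):
        self._pii = pii_service
        self._orders = {}

    def create_order(self, customer_pii: dict, item: str) -> str:
        order_id = str(uuid.uuid4())
        customer_ref = self._pii.store(customer_pii)
        self._orders[order_id] = {"customer_ref": customer_ref, "item": item}
        return order_id
```

A leak of the order database now exposes only random references, not the PII itself.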
This recipe of sensitive data segregation, which is actually a form of the least common mechanism principle, aligns closely with the approach of building microservices around bounded contexts and aggregate roots, and simply adds a security perspective to the subject.
Introduce Security Levels
Different microservices have different bounded contexts and different sets of capabilities, serve different purposes, and are implemented with different tools and languages by different teams. Naturally, then, we can care about their security in different ways as well. While some microservices handle highly sensitive data like security tokens and credit cards, others might deal only with public information or fall somewhere in the middle. The logical choice is not to spend security resources on every microservice in the same way. “Good enough security” is exactly what we need here.
An approach that works especially well for organizations and systems at high scale is introducing standard security levels for microservices. Many security standards complement this approach: OWASP ASVS defines three application levels, Cigital BSIMM defines three maturity levels, and NIST uses three levels to classify information security risks. So, a good idea would be to assign one of three levels of security importance to each microservice (for example, L1, L2, L3). The next steps would be assigning default security requirements and security controls to each level. These form a good baseline for identifying initial security needs for dozens or even hundreds of different microservices in a standardized way. Some high-level examples can look like this:
L1
- Conduct awareness training.
- Perform QA boundary testing.
- Perform code reviews.
L2
- Use secure coding standards.
- Perform design reviews.
- Include security tests in QA automation.
- Schedule periodic penetration tests.
L3
- Engage external penetration testers.
- Require security sign-off.
- Run a bug bounty program.
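One way to make such a baseline concrete is a simple mapping from levels to default controls, where higher levels inherit everything below them. The service names and control lists here are illustrative assumptions, not a prescribed standard:

```python
# Hypothetical baseline: map each security level to default controls,
# then assign a level per microservice.
BASELINE_CONTROLS = {
    "L1": ["awareness training", "QA boundary testing", "code reviews"],
    "L2": ["secure coding standards", "design reviews",
           "security tests in QA automation", "periodic penetration tests"],
    "L3": ["external penetration testers", "security sign-off",
           "bug bounty program"],
}

SERVICE_LEVELS = {          # assumed classification of example services
    "marketing-pages": "L1",
    "order-service": "L2",
    "payment-service": "L3",
}

def required_controls(service: str) -> list:
    """Controls a service inherits from its level, plus all lower levels."""
    level = SERVICE_LEVELS[service]
    levels = ["L1", "L2", "L3"]
    inherited = levels[: levels.index(level) + 1]
    return [c for lvl in inherited for c in BASELINE_CONTROLS[lvl]]
```

With this in place, a new microservice only needs a level assignment to get a complete initial list of security requirements.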
Introduce Trust Boundaries
It’s important to distinguish between authentication and authorization. Authentication is about verifying that an individual is actually who he or she claims to be, while authorization is about granting access to system objects based on the individual’s identity. Both look simple at first glance: numerous security frameworks exist for various programming styles and languages, and the requirements for who is able to do what are driven by the business. However, when the system is distributed, additional questions pop up.
First, you need to know about the trust boundary concept. A trust boundary is a virtual boundary where program execution changes its trust level. When a user makes a request, we should initially treat it as untrusted. We should authenticate the user and perform an authorization check on the request. If both verifications succeed, execution crosses the boundary and becomes trusted, so we don’t need to repeat these verifications at each subsequent step of request handling.
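The idea can be sketched as edge middleware that authenticates and authorizes once, then marks the request context as trusted for downstream handlers. The token store, permission table, and function names below are illustrative assumptions:

```python
# Sketch of a trust boundary: authenticate and authorize once at the edge,
# then mark the request context as trusted for everything inside the boundary.
class Forbidden(Exception):
    pass

USERS = {"token-abc": "alice"}               # assumed token store
PERMISSIONS = {"alice": {"orders:read"}}     # assumed policy

class RequestContext:
    def __init__(self, token: str):
        self.token = token
        self.user = None
        self.trusted = False

def cross_trust_boundary(ctx: RequestContext, permission: str) -> RequestContext:
    user = USERS.get(ctx.token)                          # authentication
    if user is None:
        raise Forbidden("unknown caller")
    if permission not in PERMISSIONS.get(user, set()):   # authorization
        raise Forbidden("not allowed")
    ctx.user, ctx.trusted = user, True                   # execution is now trusted
    return ctx

def handle_get_orders(ctx: RequestContext) -> str:
    assert ctx.trusted        # handlers inside the boundary don't re-verify
    return f"orders for {ctx.user}"
```

Everything called after `cross_trust_boundary` relies on the checks already done, which is exactly what makes the boundary worth drawing explicitly.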
An interesting case is when we need to call another microservice. Suppose we have already authorized the user request, but now we need to call another microservice, and then another. Should we allow these calls outright? Should we do an authorization check again? On the one hand, the request was already authorized; on the other, how can we be sure that this request was not sent by somebody else who gained access to our infrastructure? Here, we must revisit the trust boundary concept.
Depending on how isolated your infrastructure is and what your security risks are, you might treat this request as trusted or untrusted. If the prospect of our internal service being called by a malicious intruder is realistic and harmful, we must do authentication and authorization checks again. This is where we cross another trust boundary.
In this kind of service-to-service communication, we authenticate not the initial user but the service making the call, and then perform an authorization check on the request. It is not necessary to share information about the initial user with the other microservice, since (usually) that is part of an isolated bounded context.
The technology choices for authentication and authorization also differ between human-to-service and service-to-service communication. While it is common to use a login and password to authenticate humans, for service authentication it is better to use a token or certificate, such as a TLS client certificate. TLS client authentication requires certificate management and works best together with infrastructure automation. So, define your trust boundaries and use the proper tools for each purpose.
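As a small sketch of the server side of TLS client authentication using Python's standard `ssl` module: the context is configured to reject any caller that does not present a certificate. The file paths are placeholders, and the certificate-loading calls are shown commented out since real deployments would point them at an internal CA managed by infrastructure automation:

```python
import ssl

# Sketch of service-to-service authentication with TLS client certificates.
# Paths are placeholders for certs issued by an internal CA.
def make_server_context(ca_path: str, cert_path: str, key_path: str) -> ssl.SSLContext:
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.verify_mode = ssl.CERT_REQUIRED   # reject callers without a client cert
    # context.load_cert_chain(cert_path, key_path)   # this service's own identity
    # context.load_verify_locations(ca_path)         # trusted internal CA
    return context
```

With `CERT_REQUIRED`, the TLS handshake itself authenticates the calling service; authorization can then key off the identity in the presented certificate.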
Encryption and Tokenization
From a security standpoint, data exists in three states: data in use, data in transit, and data at rest. While it is almost always a must to encrypt transmission over public networks with TLS or IPsec, the picture is not as straightforward with data at rest. What are people trying to protect when they encrypt data at rest? That depends on the type of encryption, and selecting the type you need depends heavily on your data, your environment, and the threats you see. Even if data is encrypted at the application level, which eliminates a lot of attack scenarios, the application itself becomes the weakest link: attackers can get the data by attacking the application instead of the database.
In some cases, we can improve this by separating data flows. For example, credit card information is subject to PCI DSS compliance only when the primary account number (PAN) is present. The logic behind this is that cardholder data without the PAN is, in most cases, useless to attackers, and the damage from a potential leak of such data is significantly reduced. On the other hand, the PAN without the cardholder name and expiration date doesn’t give fraudsters much either. Following this logic, we should keep the PAN separate from the rest of the cardholder data for as long as possible. Applying this principle in microservice systems, you can reduce risk by keeping the most sensitive data (such as the credit card PAN, SSN, passport ID, or security tokens) in isolated microservices, separate from other data, and linking them together only when absolutely required. Tokenization can be used to implement this:
"Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no extrinsic or exploitable meaning or value." — Wikipedia
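A minimal token-vault sketch makes the definition concrete. The class name is hypothetical, and a plain dictionary stands in for what would in practice be an encrypted store inside an isolated microservice; the key property is that each token is random, so it carries no exploitable relation to the original value:

```python
import secrets

# Minimal tokenization sketch: a token vault substitutes each PAN with a
# random token that has no mathematical relation to the original value.
class TokenVault:
    def __init__(self):
        self._vault = {}   # token -> PAN; in practice an encrypted, isolated store

    def tokenize(self, pan: str) -> str:
        token = secrets.token_urlsafe(16)    # random, so nothing to reverse
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

All other services store and pass around only the token; resolving it back to the PAN requires a call to the vault service, which is where access control and auditing concentrate.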
Even if an attacker is able to access some part of the system, he or she will get only one piece: either the tokens or the rest of the data. So tokenization, applied together with encryption, gives us good confidence in data-at-rest security.
Published at DZone with permission of Grygoriy Gonchar, DZone MVB.