If you want to become a smart contract developer on Ethereum, then you need to learn Solidity. Whether your goal is DeFi, blockchain gaming, digital collectibles (NFTs), or just web3 in general, Solidity is the foundational language behind the innovative projects on Ethereum. But where should you start? In this article, we'll look at 10 great ways you can learn Solidity. Whether you're a beginner or an experienced web2 developer, this guide will help you not only get started with Solidity but master it. We'll look at all the best online courses, tutorials, documentation, and communities that can help you on your journey. First, however, if you're new to web3, let's provide some background on Solidity.

Why Learn Solidity?

Solidity is the language for writing smart contracts on the world's most popular smart contract blockchain, Ethereum. And it's not just for Ethereum. Multiple other blockchains, such as Avalanche and Binance Smart Chain, and L2s, such as Polygon and Optimism, are powered by Solidity. Learning Solidity not only opens up opportunities for you in Ethereum but also in the growing field of blockchain development overall. It's a perfect skill to have for web3! So let's look at 10 great ways to learn Solidity.

#1 - ConsenSys 10-Minute Ethereum Orientation

For a quick and thorough introduction (especially if you're new to blockchain and Ethereum), check out the 10-Minute Ethereum Orientation by ConsenSys. This is the perfect starting point to orient yourself with the key terms of web3, the web3 tech stack, and how Solidity fits into it all. ConsenSys is the company behind the most popular technologies of the web3 stack—MetaMask (the leading wallet), Infura (the leading web3 API), Truffle (dev and testing tools for Ethereum smart contracts), Diligence (a blockchain security company), and more. Solidity can be confusing—but these are the tools that make it easy to use. ConsenSys is a well-known source for learning, and we'll get into that more in point six.

#2 - CryptoZombies

Once you complete your intro, check out CryptoZombies. This is the OG resource for learning Solidity. It's a fun, interactive game that teaches you Solidity by having you build your own crypto-collectibles game. It's an excellent starting point for beginners who are interested in developing decentralized applications and smart contracts on Ethereum. The course provides an easy-to-follow, step-by-step tutorial that guides you through the process of writing Solidity smart contracts. It even has gamification elements to keep you motivated. And it's updated regularly: as Solidity adds features, CryptoZombies adds new learning materials. Modules on oracles and zero-knowledge proofs (zk technology) are some recent additions to the curriculum.

#3 - Speedrun Ethereum

Next is Speedrun Ethereum, a series of gamified quests for learning Solidity. These quests cover topics such as NFTs, DeFi, tokens, and more—and it's a lot of fun! It even covers more advanced Solidity concepts, including higher-order functions and inheritance. This course is great for intermediate-level learners who are familiar with the basics of Solidity.

#4 - Solidity by Example

Solidity by Example is a free resource that teaches Solidity through well-written code samples, each explained in detail. This is less of a course and more of a reference for learning clean Solidity syntax. Highly recommended. The code samples range from simple concepts to very advanced ones.
A good example—and one you can learn a lot from—is the full UniswapV2 contract.

#5 - Dapp University

If video is more your style, Dapp University is a YouTube channel with over 10 hours of hands-on tutorials. The tutorials are designed for both beginners and experienced Solidity developers and cover topics such as setting up the development environment, writing Solidity smart contracts, and deploying them to the Ethereum blockchain. The content is well-structured and provides easy-to-follow instructions that guide you through the process of building your own decentralized applications.

#6 - ConsenSys Academy

From the same company mentioned in #1, ConsenSys Academy offers several online courses, such as Blockchain Essentials, created to kick-start your Solidity developer journey. They also offer the Blockchain Developer Program On-Demand course, where you'll learn about the underpinnings of blockchain technology and how it all comes together to allow us to build the next generation of web applications. The program covers the types of smart contract code, introduces you to key development tools, and shows you best practices for smart contract development, all to prepare you for the final project toward the end of the course. Learners get hands-on experience with tools like Infura and Truffle Ganache, some of the most popular and widely used development tools in the Ethereum ecosystem, focused on making Solidity easy to use. And as a product of ConsenSys, the Developer Program provides a direct link to the ConsenSys ecosystem, with access to some of the best resources and tools in the industry.

#7 - Udemy Ethereum Blockchain Developer Bootcamp With Solidity

This is another bootcamp, but in the form of an extensive Udemy course. It provides learners with up-to-date blockchain development tools, resources, and complete, usable projects to work on. The course is taught by an instructor who is a co-creator of the industry-standard Ethereum certification, and it is updated frequently to reflect the latest changes in the ecosystem.

#8 - Certified Solidity Developer

Of course, there is always the certification path. Certified Solidity Developer is a certification offered by the Blockchain Council. It's expensive, but it provides learners with a solid foundation in Solidity and smart contract development—and that piece of paper. The certification is well-recognized and is one of the most highly rated blockchain developer accreditations. The course also provides learners with a deep understanding of smart contracts, their design patterns, and the various security implications of writing and deploying them on the Ethereum network.

#9 - Official Solidity Documentation

The Solidity documentation should not be underestimated. It's an essential resource for those learning Solidity. It has the added value of always being up to date with the latest version of Solidity, and it contains detailed information about the Solidity programming language. The documentation is available in nine different languages, including Chinese, French, Indonesian, Japanese, and others. Undoubtedly, you'll come back again and again to the Solidity documentation as you learn, so bookmark it now.

#10 - Solidity Communities and Forums

Finally, there are several Solidity communities and forums that are excellent resources, such as CryptoDevHub and the Solidity Forum. These communities are composed of Solidity experts, developers, and learners at all different levels.
Ask questions, share knowledge, and collaborate on Solidity projects. By participating in these communities, you can keep up to date with the latest developments, gain insight into how other developers are approaching Solidity development, and make a few friends!

Learning Solidity—Just Get Started!

That's a great set of starting points for your path. Learning Solidity is a valuable investment in your career. With these resources, you should be well on your way to learning Solidity, joining web3, and writing and deploying your first smart contracts. Of course, the fastest way to learn is to jump right in—so go for it! Have a really great day!
In the cloud-native era, we often hear that "security is job zero," which means it's even more important than any number one priority. Modern infrastructure and methodologies bring us enormous benefits, but, at the same time, since there are more moving parts, there are more things to worry about: How do you control access to your infrastructure? Between services? Who can access what? There are many questions to be answered, and many of them come down to policies: a bunch of security rules, criteria, and conditions. For example:

- Who can access this resource?
- Which subnets is egress traffic allowed from?
- Which clusters must a workload be deployed to?
- Which protocols are not allowed for servers reachable from the Internet?
- Which registries can binaries be downloaded from?
- Which OS capabilities can a container execute with?
- Which times of day can the system be accessed?

All organizations have policies, since they encode important knowledge about complying with legal requirements, working within technical constraints, avoiding repeated mistakes, and so on. Since policies are so important today, let's dive deeper into how best to handle them in the cloud-native era.

Why Policy-as-Code?

Policies are based on written or unwritten rules that permeate an organization's culture. So, for example, there might be a written rule in our organization explicitly saying: for servers accessible from the Internet on a public subnet, it's not good practice to expose a port using the non-secure HTTP protocol. How do we enforce it? If we create infrastructure manually, the four-eyes principle may help: always have a second person review anything critical. If we do Infrastructure as Code and create our infrastructure automatically with tools like Terraform, a code review could help. However, the traditional policy enforcement process has a few significant drawbacks:

- You can't guarantee the policy will never be broken. People can't be aware of all the policies at all times, and it's not practical to manually check against a list of policies. Even in code reviews, senior engineers are unlikely to catch every potential issue every single time.
- Even if we had the best teams in the world enforcing policies with no exceptions, it's difficult, if not impossible, to scale. Modern organizations are more likely to be agile, which means the number of employees, services, and teams keeps growing. There is no way to physically staff a security team to protect all of those assets using traditional techniques.
- Policies can (and will) be breached sooner or later because of human error. It's not a question of "if" but "when."

And that's precisely why most organizations (if not all) do regular security checks and compliance reviews before a major release, for example. We violate policies first and then create ex post facto fixes. I know, this doesn't sound right. What's the proper way of managing and enforcing policies, then? You've probably already guessed the answer, and you are right. Read on.

What Is Policy-as-Code (PaC)?

As business, teams, and maturity progress, we'll want to shift from manual policy definition to something more manageable and repeatable at the enterprise scale. How do we do that? First, we can learn from successful experiments in managing systems at scale:

- Infrastructure-as-Code (IaC): treat the content that defines your environments and infrastructure as source code.
- DevOps: the combination of people, process, and automation to achieve "continuous everything," continuously delivering value to end users.

Policy-as-Code (PaC) is born from these ideas. Policy as code uses code to define and manage policies, which are rules and conditions. Policies are defined, updated, shared, and enforced using code, leveraging Source Code Management (SCM) tools. By keeping policy definitions in source code control, whenever a change is made, it can be tested, validated, and then executed. The goal of PaC is not to detect policy violations but to prevent them. This leverages DevOps automation capabilities instead of relying on manual processes, allowing teams to move more quickly and reducing the potential for mistakes due to human error.

Policy-as-Code vs. Infrastructure-as-Code

The "as code" movement isn't new anymore; it aims at "continuous everything." The concept of PaC may sound similar to Infrastructure as Code (IaC), but while IaC focuses on infrastructure and provisioning, PaC improves security operations, compliance management, data management, and beyond. PaC can be integrated with IaC to automatically enforce infrastructural policies. Now that we've got the PaC vs. IaC question sorted out, let's look at the tools for implementing PaC.

Introduction to Open Policy Agent (OPA)

The Open Policy Agent (OPA, pronounced "oh-pa") is a Cloud Native Computing Foundation incubating project. It is an open-source, general-purpose policy engine that aims to provide a common framework for applying policy-as-code to any domain. OPA provides a high-level declarative language (Rego, pronounced "ray-go," purpose-built for policies) that lets you specify policy as code. As a result, you can define, implement, and enforce policies in microservices, Kubernetes, CI/CD pipelines, API gateways, and more. In short, OPA decouples decision-making from policy enforcement: when a policy decision needs to be made, you query OPA with structured data (e.g., JSON) as input, and OPA returns the decision.

OK, less talk, more work: show me the code.

Simple Demo: Open Policy Agent Example

Prerequisite

To get started, download an OPA binary for your platform from GitHub releases. On macOS (64-bit):

```
curl -L -o opa https://openpolicyagent.org/downloads/v0.46.1/opa_darwin_amd64
chmod 755 ./opa
```

This also works on an M1 Mac.

Spec

Let's start with a simple example: Attribute-Based Access Control (ABAC) for a fictional payroll microservice. The rule is simple: you can only access your own salary information or your subordinates', not anyone else's. So, if you are bob, and john is your subordinate, then you can access the following:

- /getSalary/bob
- /getSalary/john

But accessing /getSalary/alice as user bob would not be possible.

Input Data and Rego File

Let's say we have the structured input data (input.json file):

```json
{
    "user": "bob",
    "method": "GET",
    "path": ["getSalary", "bob"],
    "managers": {
        "bob": ["john"]
    }
}
```

And let's create a Rego file.
Here we won't bother too much with the syntax of Rego, but the comments would give you a good understanding of what this piece of code does: File example.rego: package example default allow = false # default: not allow allow = true { # allow if: input.method == "GET" # method is GET input.path = ["getSalary", person] input.user == person # input user is the person } allow = true { # allow if: input.method == "GET" # method is GET input.path = ["getSalary", person] managers := input.managers[input.user][_] contains(managers, person) # input user is the person's manager } Run The following should evaluate to true: ./opa eval -i input.json -d example.rego "data.example" Changing the path in the input.json file to "path": ["getSalary", "john"], it still evaluates to true, since the second rule allows a manager to check their subordinates' salary. However, if we change the path in the input.json file to "path": ["getSalary", "alice"], it would evaluate to false. Here we go. Now we have a simple working solution of ABAC for microservices! Policy as Code Integrations The example above is very simple and only useful to grasp the basics of how OPA works. But OPA is much more powerful and can be integrated with many of today's mainstream tools and platforms, like: Kubernetes Envoy AWS CloudFormation Docker Terraform Kafka Ceph And more. To quickly demonstrate OPA's capabilities, here is an example of Terraform code defining an auto-scaling group and a server on AWS: With this Rego code, we can calculate a score based on the Terraform plan and return a decision according to the policy. It's super easy to automate the process: terraform plan -out tfplan to create the Terraform plan terraform show -json tfplan | jq > tfplan.json to convert the plan into JSON format opa exec --decision terraform/analysis/authz --bundle policy/ tfplan.json to get the result.
Security is one of the key challenges in Kubernetes because of its configuration complexity and vulnerability. Managed container services like Google Kubernetes Engine (GKE) provide many protection features but don't take all related responsibilities off your plate. Read on to learn more about GKE security and best practices to secure your cluster.

Basic Overview of GKE Security

GKE protects your workload in many layers, which include your container image, its runtime, the cluster network, and access to the cluster API server. That's why Google recommends a layered approach to protecting your clusters and workloads. Enabling the right level of flexibility and security for your organization to deploy and maintain workloads may require different tradeoffs, as some settings may be too constraining. The most critical aspects of GKE security involve the following:

- Authentication and authorization
- Control plane security, including components and configuration
- Node security
- Network security

These elements are also reflected in CIS Benchmarks, which help to structure work around security configurations for Kubernetes.

Why Are CIS Benchmarks Crucial for GKE Security?

Handling K8s security configuration isn't exactly a walk in the park. The Red Hat 2022 State of Kubernetes and Container Security report found that almost one in four serious issues were vulnerabilities that could be remediated, and nearly 70% of incidents happened due to misconfigurations. Since their release by the Center for Internet Security (CIS), CIS Benchmarks have become globally recognized best practices for implementing and managing cybersecurity mechanisms. The CIS Kubernetes Benchmark contains recommendations for K8s configuration that support a strong security posture. Written for the open-source Kubernetes distribution, it intends to be as universally applicable as possible.

CIS GKE Benchmarking in Practice

With a managed service like GKE, not all items on the CIS Benchmark are under your control. That's why there are recommendations that you cannot audit or modify directly on your own. These involve:

- The control plane
- The Kubernetes distribution
- The nodes' operating system

However, you still have to take care of upgrading the nodes that run your workloads and, of course, the workloads themselves. You need to audit and remediate any recommendations for these components. You could do it manually or use a tool that handles CIS benchmarking. With CAST AI's container security module, for example, you can get an overview of benchmark discrepancies within minutes of connecting your cluster. The platform also prioritizes the issues it identifies, so you know which items require remediation first. When scanning your cluster, you also check it against the industry's best practices, so you can better assess your overall security posture and plan further GKE hardening.

Top 10 Strategies to Ensure GKE Security

1. Apply the Principle of Least Privilege

This basic security tenet refers to granting a user account only the privileges that are essential to perform the intended function. It comes up in CIS GKE Benchmark 6.2.1: prefer not running GKE clusters using the Compute Engine default service account. By default, your nodes get access to the Compute Engine service account. Its broad access makes it useful to multiple applications, but it also has more permissions than necessary to run your GKE cluster. That's why you must create and use a minimally privileged service account instead of the default one – and follow suit in other contexts, too.
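As a minimal sketch of what this looks like in practice (the account, project, and pool names are placeholders; the exact role list depends on what your nodes need, with logging and monitoring roles being the documented baseline):

```bash
# Create a dedicated, minimally privileged service account for GKE nodes
gcloud iam service-accounts create gke-node-sa \
    --display-name="Minimal GKE node service account"

# Grant only the logging/monitoring roles nodes need to report telemetry
for role in roles/logging.logWriter roles/monitoring.metricWriter roles/monitoring.viewer; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="serviceAccount:gke-node-sa@PROJECT_ID.iam.gserviceaccount.com" \
      --role="$role"
done

# Run your node pool with it instead of the Compute Engine default account
# (add --zone or --region as appropriate for your cluster)
gcloud container node-pools create hardened-pool \
    --cluster=CLUSTER_NAME \
    --service-account=gke-node-sa@PROJECT_ID.iam.gserviceaccount.com
```

If your nodes pull images from a private registry such as Artifact Registry, they will also need a reader role on the repository; the exact minimum varies by setup.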
2. Use RBAC to Strengthen Authentication and Authorization

GKE supports multiple options for managing access to your clusters with role-based access control (RBAC). RBAC enables more granular access to Kubernetes resources at the cluster and namespace levels, and it also lets you create detailed permission policies. CIS GKE Benchmark 6.8.4 underscores the need to give preference to RBAC over the legacy Attribute-Based Access Control (ABAC). Another CIS GKE Benchmark (6.8.3) recommends using groups to manage users, as this simplifies controlling identities and permissions and removes the need to update the RBAC configuration whenever users are added to or removed from the group.

3. Enhance Your Control Plane's Security

Under the Shared Responsibility Model, Google manages the GKE control plane components for you. However, you remain responsible for securing your nodes, containers, and pods. By default, the Kubernetes API server uses a public IP address. You can protect it by using authorized networks and private clusters, which enable you to assign a private IP address. You can also enhance your control plane's security with regular credential rotation. When you initiate the process, the TLS certificates and cluster certificate authority are rotated automatically.

4. Upgrade Your GKE Infrastructure Regularly

Kubernetes frequently releases new security features and patches, so keeping your K8s deployment up to date is one of the simplest ways to improve your security posture. GKE patches and upgrades the control planes for you automatically. Node auto-upgrade also automatically upgrades nodes in your cluster, and CIS GKE Benchmark 6.5.3 recommends keeping that setting on. If, for any reason, you need to disable auto-upgrade, Google advises performing upgrades monthly and following the GKE security bulletins for critical patches.

5. Protect Node Metadata

CIS GKE Benchmarks 6.4.1 and 6.4.2 refer to two critical factors compromising node security, which is still your responsibility. The v0.1 and v1beta1 Compute Engine metadata server endpoints were deprecated and shut down in 2020, as they didn't enforce metadata query headers. Some attacks against Kubernetes rely on access to the VM's metadata server to extract credentials. You can prevent those attacks with Workload Identity or Metadata Concealment.

6. Disable the Kubernetes Dashboard

Some years back, the world was electrified by the news of attackers gaining access to Tesla's cloud resources and using them to mine cryptocurrency. The vector of attack in that case was a Kubernetes dashboard exposed to the public with no authentication, granting elevated privileges. Complying with CIS GKE Benchmark 6.10.1 is recommended if you want to avoid repeating Tesla's plight. This standard clearly states that you should disable the Kubernetes web UI when running on GKE. By default, GKE 1.10 and later disable the K8s dashboard. You can also use the following command:

```
gcloud container clusters update CLUSTER_NAME \
    --update-addons=KubernetesDashboard=DISABLED
```

7. Follow the NSA-CISA Framework

The CIS Kubernetes Benchmark gives you a strong foundation for building a secure operating environment. But if you want to go further, make space for the NSA-CISA Kubernetes Hardening Guidance in your security procedures. The NSA-CISA report outlines vulnerabilities within a Kubernetes ecosystem and recommends best practices for configuring your cluster for security.
It presents recommendations on vulnerability scanning, identifying misconfigurations, log auditing, and authentication, helping you to ensure that you appropriately address common security challenges.

8. Improve Your Network Security

Most workloads running in GKE need to communicate with other services running inside and outside the cluster. However, you can control the traffic allowed to flow through your clusters. First, you can use network policies to limit pod-to-pod communication. By default, all cluster pods can be reached over the network via their pod IP address. You can lock down connections in a namespace by defining which traffic flows through your pods and stopping traffic that doesn't match the configured labels. Second, you can load-balance your Kubernetes pods with a network load balancer. To do so, you create a LoadBalancer service matching your pods' labels. You will have an external-facing IP mapping to ports on your Kubernetes pods, and you'll be able to filter authorized traffic at the node level with kube-proxy.

9. Secure Pod Access to Google Cloud Resources

Your containers and pods might need access to other resources in Google Cloud. There are three ways to do this: with Workload Identity, a node service account, or a service account JSON key. The simplest and most secure option for accessing Google Cloud resources is Workload Identity, which allows your pods running on GKE to act with the permissions of a Google Cloud service account. You should use application-specific Google Cloud service accounts to provide credentials so that applications have only the minimal necessary permissions, which you can revoke in case of a compromise.

10. Get a GKE-Configured Secret Manager

CIS GKE Benchmark 6.3.1 recommends encrypting Kubernetes Secrets using keys managed in Cloud KMS. Google Kubernetes Engine gives you several options for secret management. You can use Kubernetes Secrets natively in GKE, and you can also protect them at the application layer with a key you manage and application-layer secrets encryption. There are also secrets managers like HashiCorp Vault, which provide a consistent, production-ready way to manage secrets in GKE. Make sure you check out your options and pick an optimal solution.

Assess GKE Security Within Minutes

The Kubernetes ecosystem keeps growing, and so do its security configuration challenges. If you want to stay on top of GKE container security, you need to be able to identify potential threats and track them efficiently. Kubernetes security reports let you scan your GKE cluster against the CIS Benchmark, the NSA-CISA framework, and other container security best practices to identify vulnerabilities, spot misconfigurations, and prioritize them. It only takes a few minutes to get a complete overview of your cluster's security posture.
At this point, we’ve all heard the horror stories about clicking on malicious links, and if we’re unlucky enough, perhaps we’ve been the subject of one of those stories. Here’s one we’ll probably all recognize: an unsuspecting employee receives an email from a seemingly trustworthy source, and this email claims there’s been an attempt to breach one of their most important online accounts. The employee, feeling an immediate sense of dread, clicks on this link instinctively, hoping to salvage the situation before management becomes aware. When they follow this link, they’re confronted with a login interface they’re accustomed to seeing – or so they believe. Entering their email and password is second nature: they input this information rapidly and click “enter” without much thought. In their rush, this employee didn’t notice that the login interface looks very different than normal. Further, they’ve overlooked that the email address alerting them to this account “breach” contained 10 more characters than it would have if it had come from the account provider. On top of all that, they’ve failed to see that the link itself – a mix of tightly packed letters, symbols, and words which, in truth, they’ve hardly glanced at in the best of circumstances – contains improper spellings and characters all over the place. In about 30 seconds, this employee has unwittingly compromised an account with access to some of their employer’s most sensitive data, handing their login details to a cybercriminal far away who will, no doubt, waste little time in exploiting the situation for monetary gain. A boilerplate email phishing scenario such as this – the most basic example of a tried-and-true social engineering tactic, dating back to the early days of the internet – is just one of many threats involving URLs that continues to drive immense scrutiny around the origin and dissemination of malicious links. As the internet has scaled, the utility of URLs has grown in lockstep. We use URLs to share important content with our friends, colleagues, managers, clients, and customers all the time, quietly ensuring that URLs can continue to expand in their role as vehicles for social engineering scams, viruses, malware, and various other forms of cybersecurity threats. From this scrutiny, a culture of individual accountability has predominantly emerged: we, the targets of threatening URLs, are (justifiably) viewed as the most pivotal barrier between attack and breach. As a result, at an organizational level, the most important and common step taken to mitigate this issue involves training users on how to spot fraudulent links on their own. Employees of companies in diverse industries all over the world are increasingly taught to identify the obvious signs of malicious links (and social engineering/untrustworthy outreach), a practice which has, no doubt, proved highly beneficial in reducing instances of URL-driven breach. However, the vast criminal potential of URLs means user training isn’t quite enough to mitigate the issue entirely. To properly secure our invaluable data, we need to proactively implement security policies that can accurately identify and flag URL-based threats on their own. Like the tendencies of living viruses, the underlying strategies of URL threats (and all cybersecurity threats) inexorably evolve to defeat their victims, diminishing the utility of past security training until their relevance is dubious at best. 
For example, URLs are increasingly used as a lightweight method for sharing files across a network. When we receive a file link from someone we trust (someone we regularly receive files from), we have little reason to believe that link may be compromised, and – despite all our intense security training – we are still very much in danger of clicking on it. Unbeknownst to us, this link may contain a malicious forced-download file that seeks to capitalize on our brief error in judgment and compromise our system before we can react. While individual accountability means blunders such as this should (and will) be considered our fault in the short term, that blame has a limited ability to deter the issue as it continues to evolve. The person who sent this link to us may have received it from a source they usually trust, and that source may have received it from someone they also usually trust, and someone toward the beginning of that chain of communication may not have had any security training at their job whatsoever, blindly forwarding links from a source they believed to be valuable but had never actually investigated. Just as it's important for us to assume links such as this might be dangerous, it's equally important for our system's security policies to assume the same, and to act against those links as diligently as possible before they reach a human layer of discretion. To that end, URL security APIs can play a key role, offering an efficient, value-add service to our application architecture while removing some of the burden on our users to prevent malicious links from compromising our systems by themselves.

Demonstration

The purpose of this article is to demonstrate a powerful, free-to-use REST API that scans website URLs for various forms of threats. This API accepts a website link (beginning with "http://" or "https://") string as input and returns key information about the contents of that URL in short order. The response body includes the following information:

- "CleanResult" – A Boolean indicating whether or not the link is clean, so an unclean link can be diverted immediately from its intended destination
- "WebsiteThreatType" – A string value identifying whether the underlying threat within the link is of the Malware, ForcedDownload, or Phishing variety (clean links will return "None")
- "FoundViruses" – The names of any viruses ("VirusName") found within a given file URL ("FileName")
- "WebsiteHttpResponseCode" – The three-digit HTTP response code returned by the link

To complete a free API request, a free-tier API key is required, and that can be obtained by registering a free account on the Cloudmersive website (please note, this yields a limit of 800 API calls per month with no commitments). To take advantage of this API, follow the steps below to structure your API call in Java using the complementary, ready-to-run code examples.

To begin, your first step is to install the Java SDK. To install with Maven, add the below reference to the repository in pom.xml:

```xml
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>
```

To complete the installation with Maven, next add the following reference to the dependency in pom.xml:

```xml
<dependencies>
    <dependency>
        <groupId>com.github.Cloudmersive</groupId>
        <artifactId>Cloudmersive.APIClient.Java</artifactId>
        <version>v4.25</version>
    </dependency>
</dependencies>
```

To install with Gradle instead, add it to your root build.gradle at the end of repositories:

```groovy
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```

Following that, add the dependency in build.gradle, and you're all done with the installation step:

```groovy
dependencies {
    implementation 'com.github.Cloudmersive:Cloudmersive.APIClient.Java:v4.25'
}
```

With installation out of the way, our next step is to add the imports and call the Virus Scan API:

```java
// Import classes:
//import com.cloudmersive.client.invoker.ApiClient;
//import com.cloudmersive.client.invoker.ApiException;
//import com.cloudmersive.client.invoker.Configuration;
//import com.cloudmersive.client.invoker.auth.*;
//import com.cloudmersive.client.ScanApi;

ApiClient defaultClient = Configuration.getDefaultApiClient();

// Configure API key authorization: Apikey
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

ScanApi apiInstance = new ScanApi();
WebsiteScanRequest input = new WebsiteScanRequest(); // WebsiteScanRequest |
try {
    WebsiteScanResult result = apiInstance.scanWebsite(input);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling ScanApi#scanWebsite");
    e.printStackTrace();
}
```

After that, you're all done – no more code is required.
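One practical note: as written, the request object never receives the link to scan. Assuming the SDK's request and result types expose accessors matching the fields described above (setUrl, getCleanResult, and getWebsiteThreatType are inferred names here, not verified signatures), a fuller sketch of the call looks like this:

```java
// Hypothetical usage sketch: accessors inferred from the API's documented
// request/response shape, not verified against the SDK.
WebsiteScanRequest input = new WebsiteScanRequest();
input.setUrl("https://example.com/some/link"); // the link to scan (assumed setter)

try {
    WebsiteScanResult result = apiInstance.scanWebsite(input);
    if (Boolean.FALSE.equals(result.getCleanResult())) {
        // Divert the link before it reaches a user
        System.out.println("Threat detected: " + result.getWebsiteThreatType());
    }
} catch (ApiException e) {
    e.printStackTrace();
}
```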
With growing concern regarding data privacy and data safety today, Internet of Things (IoT) manufacturers have to up their game if they want to maintain consumer trust. This is the shared goal of the latest cybersecurity standard from the European Telecommunications Standards Institute (ETSI). Known as ETSI EN 303 645, the standard for consumer devices seeks to ensure data safety and achieve widespread manufacturer compliance. So, let's dive deeper into this standard as more devices enter the home and workplace.

The ETSI Standard and Its Protections

It carries a long name but heralds an important era of device protection. ETSI EN 303 645 is a standard and method by which a certifying authority can evaluate IoT device security. Developed as an internationally applicable standard, it offers manufacturers a security baseline rather than a comprehensive set of precise guidelines. The standard may also lay the groundwork for various future IoT cybersecurity certifications in different regions around the world. For example, look at what's happening in the European Union. Last September, the European Commission introduced a proposed Cyber Resilience Act, intended to protect consumers and businesses from products with inadequate security features. If passed, the legislation — a world first on connected devices — will bring mandatory cybersecurity requirements for products with digital elements throughout their whole lifecycle. The prohibition of default and weak passwords, guaranteed support for software updates, and mandatory testing for security vulnerabilities are just some of the proposals. Interestingly, these same rules are included in the ETSI standard.

IoT Needs a Cybersecurity Standard

Shockingly, a single home filled with smart devices could experience as many as 12,000 cyber attacks in a single week. While most of those attacks will fail, the sheer number means some inevitably get through. The ETSI standard strives to keep those attacks out with basic security measures, many of which should already be common sense but unfortunately aren't always in place today. For example, one of the basic requirements of the ETSI standard is no universal default passwords. In other words, your fitness tracker shouldn't have the same default password as every other fitness tracker of that brand on the market. Your smart security camera shouldn't have a default password that anyone who owns a similar camera could exploit. It seems like that would be common sense for IoT manufacturers, but there have been plenty of breaches that occurred simply because individuals didn't know to change the default passwords on their devices. Another basic requirement of ETSI is allowing individuals to delete their own data. In other words, the user has control over the data a company stores about them. Again, this is pretty standard stuff in the privacy world, particularly in light of regulations like Europe's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). However, this is not yet a universal requirement for IoT devices. Considering how much health- and fitness-related data many of these devices collect, consumer data privacy needs to be more of a priority. Several more rules in ETSI have to do with the software installed on such devices and how the provider manages security for that software. For example, there needs to be a system for reporting vulnerabilities, and the provider needs to keep the software up to date and ensure software integrity.
We would naturally expect these kinds of security measures for nearly any software we use, so the standard is basically just a minimum for data protection in IoT. Importantly, the ETSI standard covers pretty much everything that could be considered a smart device, including wearables, smart TVs and cameras, smart home assistants, smart appliances, and more. The standard also applies to connected gateways, hubs, and base stations; in other words, it covers the centralized access point for all of the various devices.

Why Device Creators Should Implement the Standard Today

Just how important is the security standard? Many companies are losing customers today due to a lack of consumer trust. There are many stories of big companies like Google and Amazon failing to adequately protect user data, and IoT in particular has been in the crosshairs multiple times due to privacy concerns. An IoT manufacturer that doesn't want to lose business, face fines and lawsuits, and damage the company's reputation should consider implementing the ETSI standard as a matter of course. After all, these days a given home might have as many as 16 connected devices, each an entry point into the home network. A company might have one laptop per employee but two, three, or more other smart devices per employee. And again, each smart device is a point of entry for malicious hackers. Without a comprehensive cybersecurity standard like ETSI EN 303 645, people who own unprotected IoT devices need to worry about identity theft, ransomware attacks, data loss, and much more.

How to Test and Certify Based on ETSI

Certification is fairly straightforward and occurs in five steps:

1. Manufacturers have to understand the 33 requirements and 35 recommendations of the ETSI standard and design devices accordingly.
2. Manufacturers also have to buy an IoT platform that has been built with the ETSI standard in mind, since the standard will fundamentally influence the way the devices are produced and how they operate within the platform.
3. Next, any IoT manufacturer trying to meet the ETSI standard has to fill out documents that provide information for device evaluation. The first document is the Implementation Conformance Statement, which shows which requirements and recommendations the IoT device does or doesn't meet. The second is the Implementation eXtra Information for Testing, which provides design details for testing.
4. A testing provider will next evaluate and test the product based on the two documents and give a report.
5. The testing provider will provide a seal or other indication that the product is ETSI EN 303 645-compliant.

With new regulations on the horizon, device manufacturers and developers should see it as best practice to get up to speed with this standard. Better cybersecurity is important not only for consumer protection but also for brand reputation. Moreover, this standard can provide a basis for stricter device security certifications and measures in the future. Prepare today for tomorrow.
Application Dependency Mapping is the process of creating a graphical representation of the relationships and dependencies between different components of a software application. This includes dependencies between modules, libraries, services, and databases. It helps to understand the impact of changes in one component on other parts of the application and aids in troubleshooting, testing, and deployment.

Software Dependency Risks

Dependencies are often necessary for building complex software applications. However, development teams should be mindful of dependencies and seek to minimize their number and complexity for several reasons:

- Security vulnerabilities: Dependencies can introduce security threats and vulnerabilities into an application, and keeping track of and updating dependencies can be time-consuming and difficult.
- Compatibility issues: Dependencies can cause compatibility problems if their versions are not managed properly.
- Maintenance overhead: Maintaining a large number of dependencies can be a significant overhead for the development team, especially if they need to be updated frequently.
- Performance impact: Dependencies can slow down the performance of an application, especially if they are not optimized.

Therefore, it's important for the development team to carefully map out applications and their dependencies, keep them up to date, and avoid using unnecessary dependencies. Application security testing can also help identify security vulnerabilities in dependencies and remediate them.

Types of Software Dependencies

Functional

Functional dependencies are software dependencies that are required for the proper functioning of a software application. These dependencies define the relationships between different components of the software and ensure that the components work together to deliver the desired functionality. For example, a software component may depend on a specific library to perform a specific task, such as connecting to a database, performing a calculation, or processing data. The library may provide a specific function or set of functions that the component needs to perform its task. If the library is unavailable or is the wrong version, the component may not be able to perform its task correctly. Functional dependencies are important to consider when developing and deploying software because they can impact the functionality and usability of the software. It's important to understand the dependencies between different components of the software and to manage these dependencies effectively in order to ensure that the software works as expected. This can involve tracking the dependencies, managing version compatibility, and updating dependencies when necessary.

Development and Testing

Development and testing dependencies are software dependencies that are required during the development and testing phases of software development but are not required in the final deployed version. For example, a developer may use a testing library, such as JUnit or TestNG, to write automated tests for the software. This testing library is only required during development and testing but is not needed when the software is deployed. Similarly, a developer may use a build tool, such as Gradle or Maven, to manage the dependencies and build the software. This build tool is only required during development and testing but is not needed when the software is deployed.
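To make the separation concrete, here is a minimal Gradle sketch (the artifacts and versions are arbitrary examples) showing how test-only dependencies are scoped so they never ship with the application:

```groovy
dependencies {
    // Packaged with the application at runtime
    implementation 'com.google.guava:guava:31.1-jre'

    // Available only on the test compile/runtime classpath;
    // excluded from the final deployed artifact
    testImplementation 'junit:junit:4.13.2'
}
```

Maven expresses the same intent by adding <scope>test</scope> to a dependency declaration.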
Development and testing dependencies are important to consider because they can impact the development and testing process and can add complexity to the software. It's important to understand and manage these dependencies effectively in order to ensure that the software can be developed, tested, and deployed effectively. This can involve tracking the dependencies, managing version compatibility, and updating dependencies when necessary. Additionally, it's important to ensure that development and testing dependencies are not included in the final deployed version of the software in order to minimize the size and complexity of the deployed software.

Operational and Non-Functional

Operational dependencies are dependencies that are required for the deployment and operation of the software. For example, an application may depend on a specific version of an operating system, a specific version of a web server, or a specific version of a database. These dependencies ensure that the software can be deployed and run in the desired environment. Non-functional dependencies, on the other hand, are dependencies that relate to the non-functional aspects of the software, such as performance, security, and scalability. For example, an application may depend on a specific version of a database in order to meet performance requirements, or may depend on a specific security library in order to ensure that the application is secure. It's important to understand and manage both operational and non-functional dependencies effectively in order to ensure that the software can be deployed and run as expected. This can involve tracking the dependencies, managing version compatibility, and updating dependencies when necessary. Additionally, it's important to ensure that non-functional dependencies are configured correctly in order to meet the desired performance, security, and scalability requirements.

5 Benefits of Application Mapping for Software Projects

Improved Understanding of the Project

One of the primary benefits of application mapping is that it helps team members better understand the system as a whole. The visual representation of the relationships and interactions between different components can provide a clear picture of how the system operates, making it easier to identify areas for improvement or optimization. This can be especially useful for new team members, who can quickly get up to speed on the system without having to spend a lot of time reading through documentation or trying to decipher complex code.

Facilitated Collaboration

Another benefit of application mapping is that it can be used as a tool for communication and collaboration among the different stakeholders involved in the software project. By providing a visual representation of the system, application mapping can help to foster a shared understanding among developers, business stakeholders, and others, improving collaboration and reducing misunderstandings.

Early Identification of Problems

Application mapping can also help to identify potential issues early in the project, before they become significant problems. By mapping out the relationships between different components, it is possible to identify areas where conflicts or dependencies could cause problems down the line. This allows teams to address these issues before they become major roadblocks, saving time and reducing the risk of delays in the project.
Increased Efficiency

Another benefit of application mapping is that it can help to optimize workflows and processes, reducing duplication and improving the efficiency of the overall system. By mapping out the flow of data and the interactions between different components, it is possible to identify areas where processes can be streamlined or made more efficient, reducing waste and improving performance.

Better Decision-Making

Application mapping can be used to make informed decisions about future development and changes to the system. By allowing teams to understand the potential impact of changes to one part of the system on other parts, application mapping can help to reduce the risk of unintended consequences and ensure that changes are made with a full understanding of their impact on the overall system. This can help to improve the quality of the final product and reduce the risk of costly mistakes.

Conclusion

In conclusion, application mapping provides a clear and visual representation of the software architecture and the relationships between different components. This information can be used to improve understanding, facilitate collaboration, identify problems early, increase efficiency, and support better decision-making.
Introduction

Microsoft 365 is a popular productivity suite used by organizations of all sizes. While it offers a wealth of features and benefits, it also poses security challenges, especially in terms of protecting user data. With cyber threats on the rise, it's more important than ever to ensure that your Microsoft 365 user accounts and data are secure. In this article, we'll provide a step-by-step guide to help you safeguard your Microsoft 365 environment against data loss. We'll cover the threat landscape, Microsoft 365 security features, best practices for securing user accounts, and data backup solutions for Microsoft 365. With the information and recommendations provided in this guide, you'll be well-equipped to protect your organization's valuable data and ensure business continuity.

Understanding the Threat Landscape

Data security is a critical issue for all organizations that use Microsoft 365. With the increasing sophistication of cyber threats, it's essential to be aware of the potential risks to your user accounts and data. The following are some of the common types of data loss that organizations face in a Microsoft 365 environment:

- Ransomware attacks: Ransomware is a type of malware that encrypts files and demands payment in exchange for the decryption key. This type of attack can be devastating, as it can lead to the permanent loss of data.
- Phishing attacks: Phishing attacks are designed to trick users into disclosing their login credentials or personal information. These attacks can be delivered through email, instant messaging, or malicious websites and can result in unauthorized access to user accounts and data.
- Insider threats: Insider threats can occur when a current or former employee with access to sensitive data deliberately or accidentally misuses that data.
- Data breaches: Data breaches can occur when unauthorized individuals gain access to sensitive data. This can be due to a lack of security measures or a security breach at a third-party provider.

It's important to be aware of these threats and take proactive measures to protect your Microsoft 365 environment against data loss. In the next section, we'll discuss the security features that are available in Microsoft 365 to help you protect your data.

Microsoft 365 Security Features

Microsoft 365 offers a variety of security features to help protect user accounts and data. These features include:

- Multi-Factor Authentication (MFA): MFA is a security process that requires users to provide two or more authentication factors when accessing their accounts. This can include a password and a security code sent to their phone, for example. Enabling MFA helps to prevent unauthorized access to user accounts.
- Data Encryption: Microsoft 365 uses encryption to protect data both in transit and at rest. Data in transit is encrypted as it travels between users and Microsoft 365, while data at rest is encrypted on Microsoft's servers.
- Threat Protection: Microsoft 365 includes threat protection features, such as Advanced Threat Protection (ATP), that help to prevent malware and other threats from entering your environment. ATP uses artificial intelligence and machine learning to identify and block threats before they can cause damage.
- Compliance and Auditing: Microsoft 365 provides compliance and auditing features that help organizations meet regulatory requirements and monitor user activity. These features include audit logs, retention policies, and eDiscovery capabilities.
By taking advantage of these security features, organizations can significantly reduce the risk of data loss in their Microsoft 365 environment. However, it's important to note that these features alone are not enough to fully protect user accounts and data. In the next section, we'll discuss best practices for securing user accounts in Microsoft 365.

Best Practices for Securing User Accounts

In addition to using the security features provided by Microsoft 365, there are several best practices that organizations can follow to help secure their user accounts and data:

- Use strong passwords: Encourage users to create strong, unique passwords and avoid using the same password for multiple accounts. Consider implementing password policies that enforce the use of strong passwords.
- Enable multi-factor authentication: Require all users to enable MFA on their accounts to help prevent unauthorized access.
- Restrict access to sensitive data: Use role-based access controls and other security measures to restrict access to sensitive data to only those users who need it.
- Keep software up to date: Regularly update all software, including Microsoft 365, to ensure that security vulnerabilities are patched.
- Educate users: Provide regular training to users on how to identify and avoid phishing attacks, as well as how to secure their accounts and devices.

By following these best practices, organizations can help to minimize the risk of data loss in their Microsoft 365 environment. However, it's also important to have a backup plan in place in case of an unexpected disaster. In the next section, we'll discuss data backup solutions for Microsoft 365.

Data Backup Solutions for Microsoft 365

Having a backup plan in place is an essential part of protecting against data loss in Microsoft 365. There are several data backup solutions available for Microsoft 365, including:

- Microsoft 365 Backup: Microsoft 365 Backup is a built-in backup solution for Microsoft 365 that provides backup and recovery for Exchange Online, SharePoint Online, and OneDrive for Business. This solution can be managed from the Microsoft 365 admin center and provides options for backing up data on a schedule, as well as for recovering data in the event of accidental deletion or data loss.
- Third-party backup solutions: There are also several third-party backup solutions available for Microsoft 365. These solutions offer advanced backup and recovery features, such as the ability to recover individual items, complete site collections, or entire SharePoint sites.

Regardless of the solution you choose, it's important to regularly test your backup and recovery processes to ensure that you can quickly recover data in the event of a disaster. In short, securing user accounts and data in Microsoft 365 requires a combination of security features, best practices, and backup solutions. By following the recommendations outlined in this article, organizations can significantly reduce the risk of data loss in their Microsoft 365 environment and ensure business continuity.

Conclusion

In today's digital world, securing user accounts and data is more important than ever. Microsoft 365 offers a range of security features, such as multi-factor authentication, data encryption, threat protection, and compliance and auditing, to help organizations protect their data. Additionally, following best practices such as using strong passwords, restricting access to sensitive data, and educating users can further enhance security.
However, even with the best security measures in place, disasters can still occur. That's why it's important to have a backup plan in place. Microsoft 365 Backup and third-party backup solutions can help organizations recover data in the event of a disaster and ensure business continuity. In conclusion, protecting user accounts and data in Microsoft 365 requires a multi-layered approach that includes security features, best practices, and a backup plan. By following these recommendations, organizations can help to minimize the risk of data loss and ensure the protection of their critical data and user accounts.
In the previous article, we looked at signed JSON Web Tokens and how to use them for cross-service authorization. But sometimes there are situations when you need to add sensitive information to a token that you would not want to share with other systems. Or such a token may be given to the user's device (browser, phone), in which case the user can decode the token and read all the information in the payload. One solution to this problem is JSON Web Encryption (JWE), the full specification of which can be found in RFC 7516.

JSON Web Encryption (JWE)

JWE is an encrypted version of a JWT. It consists of the following parts, separated by dots:

```
BASE64URL(UTF8(JWE Protected Header)) || '.' ||
BASE64URL(JWE Encrypted Key) || '.' ||
BASE64URL(JWE Initialization Vector) || '.' ||
BASE64URL(JWE Ciphertext) || '.' ||
BASE64URL(JWE Authentication Tag)
```

JWE Protected Header

For example:

```json
{
  "enc": "A256GCM",
  "alg": "RSA-OAEP-256"
}
```

where:

- alg – the Content Encryption Key is encrypted to the recipient using the RSAES-OAEP algorithm to produce the JWE Encrypted Key.
- enc – authenticated encryption is performed on the plaintext using the AES GCM algorithm with a 256-bit key to produce the ciphertext and the Authentication Tag.

JWE Encrypted Key

The encrypted Content Encryption Key value.

JWE Initialization Vector

A randomly generated value needed for the encryption process.

JWE Ciphertext

The encrypted payload.

JWE Authentication Tag

Computed during the encryption process and used to verify integrity.

Token Generation

There are many libraries, in many programming languages, for working with JWE tokens. Let's take the Nimbus library as an example. build.gradle:

```groovy
implementation 'com.nimbusds:nimbus-jose-jwt:9.25.6'
```

The payload can be represented as a set of claims:

```json
{
  "sub": "alice",
  "iss": "https://idp.example.org",
  "exp": 1669541629,
  "iat": 1669541029
}
```

Let's generate a header:

```java
JWEHeader header = new JWEHeader(
    JWEAlgorithm.RSA_OAEP_256,
    EncryptionMethod.A256GCM
);
```

which corresponds to the following JSON:

```json
{
  "enc": "A256GCM",
  "alg": "RSA-OAEP-256"
}
```

Let's generate an RSA key:

```java
RSAKey rsaJwk = new RSAKeyGenerator(2048)
    .generate();
```

Using the public part of the key, we can create an Encrypter object, with which we encrypt the JWT:

```java
RSAEncrypter encrypter = new RSAEncrypter(rsaJwk.toRSAPublicKey());
EncryptedJWT jwt = new EncryptedJWT(header, jwtClaims);
jwt.encrypt(encrypter);
String jweString = jwt.serialize();
```

Execution result:

```
eyJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiUlNBLU9BRVAtMjU2In0.O01BFr_XxGzKEUb_Z9vQOW3DX2cQFxojrRy2JyM5_nqKnrpAa0rvcPI_ViT2PdPRogBwjHGRDM2uNLd1BberKQlaZYuqPGXnpzDQjosF0tQlgdtY3uEZUMT-9WPP8jCxxQg0AGIm4abkp1cgzAWBQzm1QYL8fwaz16MS48ExRz41dLhA0aEWE4e7TYzjrfaK8M4wIUlQCFIl-wS1N3U8W2XeUc9MLYGmHft_Rd9KJs1c-9KKdUQf6tEzJ92TGEC7TRZX4hGdtszIq3GGGBQaW8P9jPozqaDdrikF18D0btRHNf3_57sR_CPEGYX0O4mY775CLWqB4Y1adNn-fZ0xoA.ln7IYZDF9TdBIK6i.ZhQ3Q5TY827KFQw8DdRRzQVJVFdIE03B6AxMNZ1sQIjlUB4QUxg-UYqjPJESPUmFsODeshGWLa5t4tUri5j6uC4mFDbkbemPmNKIQiY5m8yc.5KKhrggMRm7ydVRQKJaT0g
```

To decode a JWE token, you create a Decrypter object and pass it the private part of the key:

```java
EncryptedJWT jwt = EncryptedJWT.parse(jweString);
RSADecrypter decrypter = new RSADecrypter(rsaJwk.toPrivateKey());
jwt.decrypt(decrypter);
Payload payload = jwt.getPayload();
```

Here, we used an asymmetric encryption algorithm — the public part of the key is used for encryption, and the private part for decryption.
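Decryption alone only proves the token could be unwrapped; the claims still need to be validated. Since the token was built from a claims set, Nimbus can hand it back after decryption, so a minimal expiration check (clock-skew handling omitted) might look like this:

```java
import com.nimbusds.jwt.JWTClaimsSet;
import java.util.Date;

// After jwt.decrypt(decrypter), read and validate the claims.
// getJWTClaimsSet() throws ParseException if the payload is not a claims set.
JWTClaimsSet claims = jwt.getJWTClaimsSet();
Date exp = claims.getExpirationTime();
if (exp == null || exp.before(new Date())) {
    throw new IllegalStateException("Token is expired or missing the exp claim");
}
System.out.println("Token subject: " + claims.getSubject());
```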
This approach allows issuing JWE tokens for third-party services while being sure that the data is protected, even when there are intermediaries in the token transmission path. In this case, the final service publishes the public keys that we use to encrypt the content of the token, and it keeps the private part of the key secret for decryption.

But what if the same service is both the token Issuer and Consumer? For example, a backend that sets a cookie in the user's browser containing sensitive information. In that case, you don't need an asymmetric algorithm; a symmetric one will do. In JWE terms, this is direct encryption.

Java
JWEHeader header = new JWEHeader(JWEAlgorithm.DIR, EncryptionMethod.A128CBC_HS256);

which corresponds to the following JSON:

{
  "enc": "A128CBC-HS256",
  "alg": "dir"
}

Let's generate a 256-bit key:

Java
KeyGenerator keyGen = KeyGenerator.getInstance("AES");
keyGen.init(256);
SecretKey key = keyGen.generateKey();

and encrypt the JWT:

Java
JWEObject jweObject = new JWEObject(header, jwtClaims.toPayload());
jweObject.encrypt(new DirectEncrypter(key));
String jweString = jweObject.serialize();

Execution result:

eyJlbmMiOiJBMTI4Q0JDLUhTMjU2IiwiYWxnIjoiZGlyIn0..lyJ_pcHfp8cz13TVav8MZQ.LmeN4jHxYg-dEFZ98PlVfNXFI29L5NGanA6ncALWcI9uDqpoXaaBcKeOKuzRayfQ3X7yPTuiMRHAUHMR5K3Rucmb8fQw2dkP3EONUg0lbdbmfbNwDbjQcWCGUWXfBWFg.v63pTlB7B15ZLEwSBwBUAg

Note that with direct encryption, the JWE Encrypted Key part of the token is empty.

To decrypt the token, create a decrypter from the same key:

Java
EncryptedJWT jwt = EncryptedJWT.parse(jweString);
jwt.decrypt(new DirectDecrypter(key));
Payload payload = jwt.getPayload();

Performance

To compare the performance of symmetric and asymmetric encryption, a benchmark was conducted using the JMH library (a sketch of how such a benchmark can be written follows at the end of this article). RSA_OAEP_256 with A256GCM was chosen as the asymmetric algorithm and direct encryption with A128CBC_HS256 as the symmetric one. The tests were run on a MacBook Air M1.

The payload:

{
  "iss": "https://idp.example.org",
  "sub": "alice",
  "exp": 1669546229,
  "iat": 1669545629
}

Benchmark results:

Benchmark            Mode   Cnt  Score       Error       Units
Asymmetric Decrypt   thrpt  4    1062.387    ± 4.990     ops/s
Asymmetric Encrypt   thrpt  4    17551.393   ± 388.733   ops/s
Symmetric Decrypt    thrpt  4    152900.578  ± 1251.034  ops/s
Symmetric Encrypt    thrpt  4    122104.824  ± 5102.629  ops/s
Asymmetric Decrypt   avgt   4    0.001       ± 0.001     s/op
Asymmetric Encrypt   avgt   4    ≈ 10⁻⁴                  s/op
Symmetric Decrypt    avgt   4    ≈ 10⁻⁵                  s/op
Symmetric Encrypt    avgt   4    ≈ 10⁻⁵                  s/op

As expected, the asymmetric algorithm is slower: by these measurements, asymmetric encryption is roughly seven times slower than symmetric encryption, and asymmetric decryption is more than a hundred times slower than symmetric decryption. Thus, where possible, symmetric algorithms should be preferred for performance.

Conclusion

JWE tokens are a powerful tool for solving problems related to secure data transfer while keeping all the benefits of self-contained tokens. At the same time, pay attention to performance and choose the most appropriate algorithm and key length.
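For readers who want to reproduce the symmetric numbers, here is a minimal JMH sketch of the decryption case. This is an illustration only: the benchmark described above may have been structured differently, and harness options (forks, warmup, measurement iterations) are omitted here.

Java
import java.util.Date;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import com.nimbusds.jose.EncryptionMethod;
import com.nimbusds.jose.JWEAlgorithm;
import com.nimbusds.jose.JWEHeader;
import com.nimbusds.jose.crypto.DirectDecrypter;
import com.nimbusds.jose.crypto.DirectEncrypter;
import com.nimbusds.jwt.EncryptedJWT;
import com.nimbusds.jwt.JWTClaimsSet;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class SymmetricJweBenchmark {

    SecretKey key;
    String token;

    @Setup
    public void setup() throws Exception {
        // Same 256-bit AES key generation as in the article.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        key = keyGen.generateKey();

        // Claims matching the benchmark payload above (timestamps generated fresh).
        JWTClaimsSet claims = new JWTClaimsSet.Builder()
                .issuer("https://idp.example.org")
                .subject("alice")
                .issueTime(new Date())
                .expirationTime(new Date(System.currentTimeMillis() + 600_000))
                .build();

        JWEHeader header = new JWEHeader(JWEAlgorithm.DIR, EncryptionMethod.A128CBC_HS256);
        EncryptedJWT jwt = new EncryptedJWT(header, claims);
        jwt.encrypt(new DirectEncrypter(key));
        token = jwt.serialize();
    }

    @Benchmark
    public EncryptedJWT symmetricDecrypt() throws Exception {
        // Parse and decrypt the pre-built token on every invocation.
        EncryptedJWT jwt = EncryptedJWT.parse(token);
        jwt.decrypt(new DirectDecrypter(key));
        return jwt;
    }
}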
This post is part of a series dealing with Compliance Management. The previous post analyzed three approaches to Compliance and Policy Administration Centers (CPACs). Two were tailored CPAC topologies that support specialized forms of policy; the third was a CPAC topology for cloud environments that attempts to accommodate the generic case of PVPs/PEPs with diverse native formats across heterogeneous cloud services and products. It is easy to see how these approaches can be used for configuration checks, but some controls require implementations that rely on higher-level concepts. In this article, we share our experience authoring compliance policies that go deeper than configuration management.

There are numerous tools for checking the compliance of cloud-native solutions, yet the problem is far from solved. We know how to write rules that ensure cloud infrastructure and services are configured correctly, but compliance goes deeper than configuration management. Building a correct network setup is arguably the most difficult aspect of building cloud solutions, and proving it compliant is even more challenging. One of the main difficulties is that network compliance cannot be deduced by reasoning about the configuration of each element separately. Instead, we need to understand the relationships between the various network and compute resources.

In this post, we share a solution we developed to overcome these challenges, because we believe it can be useful to anyone tackling the implementation of controls over network architectures. We are specifically interested in protecting boundaries for Kubernetes-based workloads running in a VPC, and we focus on the SC-7 (Boundary Protection) control from NIST SP 800-53.

Boundaries are typically implemented using demilitarized zones (DMZs) that separate application workloads, and the network they are deployed in, from the outside (typically the Internet) using a perimeter network with very limited connectivity to both other networks. Because no connection can pass from either side without an active compute element (e.g., a proxy) in the perimeter network forwarding traffic, this construct inherently applies the deny-by-default principle and is guaranteed to fail secure if the proxy is unavailable.

Modern cloud-native platforms offer a broad range of software-defined networking constructs, like VPCs, security groups, network ACLs, and subnets, that could be used to build a DMZ. However, the guideline for compliance programs like FedRAMP is that only subnets are valid constructs for creating a boundary. Encapsulating the general idea of boundary protection is too difficult: there are numerous ways to implement it and no way to automate enforcement and monitoring for violations. Instead, we have architectural guidelines that describe a particular method for building a compliant architecture using DMZs. This post shows how to automate compliance checking against one specific architectural design of DMZs; the core ideas should transfer easily to other implementations of boundary protection.

Our goal is to control access to public networks. The challenge is determining which parts of the architecture should have access to the external network. The solution is to have humans label network artifacts according to their intended use. Is this for sensitive workloads? Is this an edge device?
Given the right set of labels, we can write rules that automatically govern the placement of Internet-facing elements like gateways and verify that the labeling is correct. As established earlier, DMZs provide the right set of attributes for creating a boundary. If we can label networks such that we can infer the existence and correctness of a DMZ within the network design, we have essentially validated that there is a network boundary between two networks that fulfills the deny-by-default and fail-secure principles, thereby proving compliance with SC-7(b).

A DMZ fundamentally divides the application architecture into three trust-zones into which applications can be deployed. If we can reason about the relationships between trust-zones and the placement of applications in trust-zones, we should be able to infer the correctness of a boundary. Our boundary design consists of three trust-zones:

- The private trust-zone is a set of subnets where compute elements run the application.
- The edge trust-zone is a set of subnets that provide external connectivity, typically to the Internet.
- The public trust-zone is everything else in the world.

While the ideas and concepts are generic and work for virtual machines, Kubernetes, and other compute runtimes, we focus on Kubernetes for the remainder of this article. Using the Kubernetes mechanism of taints and tolerations, we can control the trust-zones in which workloads can be deployed. Since the edge trust-zone is critical for our boundary, we use an allow list that defines which images can be placed in it.

The following set of rules (some would call them "controls") encapsulates the approach described above:

R1 [Tagging]: Each subnet must be labeled with exactly one of the following: 'trust-zone:edge', 'trust-zone:private'.
R2 [PublicGateway]: A public gateway may only be attached to subnets labeled 'trust-zone:edge'.
R3 [Taint]: Cluster nodes running in a subnet labeled 'trust-zone:edge' must have the taint 'trust-zone=edge:NoSchedule'.
R4 [Toleration]: Only images that appear on the "edge-approved" allow list may tolerate 'trust-zone=edge:NoSchedule'.

If all rules pass, then we are guaranteed that application workloads will not be deployed to a subnet that has Internet access. It is important to note that all rules must pass to achieve our goal: while in many compliance settings passing all checks except one is acceptable, here boundary protection is guaranteed only if all four rules pass.

Whenever we show people this scheme, the first question we get asked is: what happens if subnets are labeled incorrectly? Does it all crumble to the ground? The answer is no. If you label subnets incorrectly, at least one rule will fail. Moreover, if all four rules pass, then we have also proven that the subnets were labeled correctly. Let's break down the logic and see why. Assuming that all rules have passed:

Rule R1 passed, so each subnet has exactly one label. You could not have labeled a subnet with both "edge" and "private" or anything similar.
Rule R2 passed, so all subnets with Internet access are labeled "edge."
Rule R3 passed, so all nodes in subnets with Internet access are tainted properly.
Rule R4 passed, so private workloads cannot be deployed to subnets with Internet access.
It is still possible that a subnet without Internet access was labeled "edge," which means it cannot be used for private workloads. This may be wasteful, but it does not break the DMZ architecture.

The above four rules set up a clearly defined boundary for our architecture. We can now add rules that enforce the protection of this boundary, requiring the placement of a proxy or firewall in the edge subnet and ensuring it is configured correctly. In addition, we can use tools like NP-Guard to verify that the network is configured to disallow flows that bypass the proxy or open more ports than strictly necessary.

The edge trust-zone, however, needs broad access to the public trust-zone. This is due to constructs like content delivery networks that use anycast and advertise thousands of hostnames under a single set of IPs. Controlling access to the public trust-zone based on IPs is thus impractical, and we need to employ techniques like TLS Server Name Indication (SNI) inspection on a proxy to scope down access to the public trust-zone from the other trust-zones.

Of course, different organizations may implement their boundaries differently. For example, in many use cases it is beneficial to define a separate VPC for each trust-zone. By modifying the rules above to label VPCs instead of subnets and checking the placement of gateways inside VPCs, we can create a set of rules for that architecture.

To validate that our approach achieves the intended outcome, we applied it to a set of VPC-based services. We added our rules to a policy-as-code framework that verifies compliance: we implemented the rules in Rego, the policy language used by the Open Policy Agent engine, and applied them to Terraform plan files of the infrastructure. We were able to recommend enhancements to the network layout that further improve the boundary isolation of the services. Going forward, these checks will run on a regular basis as part of the CI/CD process to detect when infrastructure changes break the trust-zone boundaries.
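To make the rules concrete, here is a minimal sketch of how the first two rules (R1 and R2) could be evaluated over a simplified resource model. The model classes are hypothetical; as noted above, the actual implementation was written in Rego against Terraform plan files, so this is purely illustrative:

Java
import java.util.List;
import java.util.Set;

// Hypothetical minimal model of the resources under evaluation.
record Subnet(String id, Set<String> labels, boolean hasPublicGateway) {}

class BoundaryRules {

    private static final Set<String> TRUST_ZONE_LABELS =
            Set.of("trust-zone:edge", "trust-zone:private");

    // R1 [Tagging]: each subnet carries exactly one trust-zone label.
    static boolean r1Tagging(List<Subnet> subnets) {
        return subnets.stream()
                .allMatch(s -> s.labels().stream()
                        .filter(TRUST_ZONE_LABELS::contains)
                        .count() == 1);
    }

    // R2 [PublicGateway]: a public gateway may only be attached to edge subnets.
    static boolean r2PublicGateway(List<Subnet> subnets) {
        return subnets.stream()
                .filter(Subnet::hasPublicGateway)
                .allMatch(s -> s.labels().contains("trust-zone:edge"));
    }
}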
Today's digital businesses are expected to innovate, execute, and release products at a lightning-fast pace. The widespread adoption of automation tools, coupled with DevOps and DevSecOps practices, is instrumental in achieving increased developer velocity and faster feedback loops, which in turn shortens release cycles and improves product quality iteratively. Though the shift to microservices and containerized applications and the adoption of open source help developers ship faster, they also pose challenges related to compliance and security. According to the Hidden in Plain Sight report from 1Password, DevOps and IT teams in enterprises continually face challenges posed by the leakage of secrets, insecure sharing of secrets, and manual secrets management, among others. There are significant complexities involved in managing secrets like API keys, passwords, and encryption keys for large-scale projects. Let's take a deep dive into the integral aspects of secrets management in this article.

What Is Secrets Management?

In simple terms, secrets are non-human privileged credentials that give developers access to resources in applications, containers, and so on. Akin to password management, secrets management is the practice of storing secrets (e.g., access tokens, passwords, API keys) in a secure environment with tight access controls. Managing secrets can become chaotic as the complexity and scale of an application grow over time. Additionally, secrets may end up shared across different components of the technology stack, which poses severe security threats because it opens back doors for malicious actors into your application. Secrets management ensures that sensitive information is never hard-coded and is available only in encrypted form. Secure access to sensitive data in conjunction with RBAC (role-based access control) is the secret sauce of secrets management.

Challenges of Secrets Management

There may be numerous cases where developers have accidentally used hard-coded, plain-text credentials in their code or configuration files. The repercussions for the business can be huge if the files housing those secrets are pushed to a public repository on GitHub (or any other popular code-hosting platform). The benefits offered by multi-cloud infrastructures, containerized applications, IoT/IIoT, CI/CD, and similar advancements can only be leveraged to the fullest extent by also focusing on efficient secrets management. Educating development and DevOps teams about application security is the first step toward building a security-first culture within the team. Here are the major challenges DevOps and DevSecOps teams face when managing secrets:

Secrets Sprawl

This scenario normally arises when a team's (and/or organization's) secrets are distributed across the organization. Digital-first organizations are increasingly using containers and cloud-based tools to increase developer velocity, save costs, and expedite releases. The same applies to the development and testing of IoT-based applications.
Depending on the scale and complexity of the applications, there is a high probability that secrets are spread across:

- Containerized, microservices-based platforms (e.g., Kubernetes, OpenShift, Nomad)
- Monitoring and observability platforms (e.g., Prometheus, Graphite)
- Internally developed tools and processes
- Application servers and databases
- The DevOps toolchain

The items in this list vary depending on the scale, size, and complexity of the application. Providing RBAC, using strong rotating passwords, and avoiding password sharing are some of the simple practices that must be followed at every level of the team and organization.

Proliferation of Cloud Developer and Testing Tools

Irrespective of the size and scale of the project, development teams look to maximize their use of cloud development tools like GCP (Google Cloud Platform), Microsoft Azure, AWS (Amazon Web Services), Kubernetes, and more. Cloud tools definitely expedite development and testing, but they must be used with security practices at the forefront. Any compromise of the keys used to access the respective cloud platform (e.g., AWS keys) can lead to financial losses.

[Figure: AWS credentials publicly exposed in a repository]

With so much at stake, DevOps and development teams must ensure that keys are never available in human-readable form in public domains (e.g., GitHub repositories). Organizations focusing on community-led growth (CLG) to evangelize their product or developer tool need to ensure that their users do not leave any keys out in the open; publicly accessible keys let hackers exploit your platform for malicious purposes. Manual processes for managing secrets, data security when using third-party resources (e.g., APIs), and end-to-end visibility from a security lens are other challenges that organizations face with secrets management.

Best Practices of Secrets Management

There is no one-size-fits-all approach to securely managing secrets, since much depends on the infrastructure, product requirements, and other varying factors. Those variables aside, here are some best practices for efficient and scalable management of secrets:

Use RBAC (Role-Based Access Control)

Every project and organization has sensitive data and resources that must be accessible only to trusted users and applications. Any new user in the system must be assigned the default, minimal privileges. Elevated privileges must be available to only a few members of the project or organization, and the admin (or super-admin) must have the right to add or revoke privileges of other members on a need basis. Escalation of privileges must also be done on a need basis, and only for a limited time. Proper notes must be added when granting or revoking privileges so that all relevant project stakeholders have complete visibility.

Use Secure Vaults

In simple terms, a vault is a tool primarily used for securing sensitive information (e.g., passwords, API keys, certificates). Local storage of secrets in human-readable form is one of the worst ways to manage secrets. This is where secure vaults can be extremely useful: they provide a unified interface to any secret, along with a detailed audit log. Secure vaults can also be used to implement role-based access control (RBAC) by specifying access privileges (authorization).
The HashiCorp Vault Helm chart and Vault for Docker are two popular ways to run vault services, access and store secrets, and more. Since most applications leverage the cloud, it is important to focus on data security both in transit and at rest. This is where EaaS (Encryption as a Service) can be used to offload the encryption needs of applications to the vault before data is stored at rest.

Rotate Keys Regularly

It is good security practice to reset keys every few weeks or months. One practice is to regenerate keys manually, since applications using the secrets might leave traces in log files or centralized logging systems. Attackers can gain back-door access to the logs and use them to exfiltrate secrets. Additionally, co-workers might unintentionally leak secrets outside the organization. To avoid such situations, it is recommended to enable rotation of secrets in the respective secrets management tool. For instance, secret rotation in AWS Secrets Manager uses an AWS Lambda function to update both the secret and the database. Above all, teams should have practices in place to detect unauthorized access to the system, so that appropriate action can be taken before significant damage is done to the business.

Why Implement Secrets Management in a DevSecOps Pipeline?

Accelerated release cycles and faster developer feedback can only be achieved if the code is subjected to automated tests in a CI/CD pipeline. The tests run in the CI pipeline might require access to critical protected resources like databases and HTTP servers. Running unit tests inside Docker containers is also common practice, but developers and QA engineers need to ensure that secrets are not stored inside a Dockerfile. Secrets management tools can be used in conjunction with popular CI/CD tools (e.g., Jenkins) so that keys and other secrets are managed in a centralized location and stored with encryption and tokenization.
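As an illustration of keeping secrets out of code and pipeline configuration, here is a minimal sketch that fetches a database password from AWS Secrets Manager at runtime using the AWS SDK for Java v2. The secret name and region are hypothetical examples; credentials are resolved from the environment or an instance role, never from source code:

Java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

public class SecretsExample {

    public static void main(String[] args) {
        // AWS credentials come from the default provider chain (env vars, instance role, etc.).
        try (SecretsManagerClient client = SecretsManagerClient.builder()
                .region(Region.US_EAST_1) // example region
                .build()) {

            GetSecretValueRequest request = GetSecretValueRequest.builder()
                    .secretId("prod/app/db-password") // hypothetical secret name
                    .build();

            // Keep the secret in memory only; never log or persist it.
            String dbPassword = client.getSecretValue(request).secretString();

            // ... use dbPassword to open the database connection ...
        }
    }
}

Because the secret is resolved at runtime, rotating it in Secrets Manager requires no code or pipeline change.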
Apostolos Giannakidis, Product Security, Microsoft
Samir Behara, Senior Cloud Infrastructure Architect, AWS
Boris Zaikin, Senior Software Cloud Architect, Nordcloud GmbH
Anca Sailer, Distinguished Engineer, IBM