
Security

The topic of security covers many different facets within the SDLC. From focusing on secure application design to designing systems to protect computers, data, and networks against potential attacks, it is clear that security should be top of mind for all developers. This Zone provides the latest information on application vulnerabilities, how to incorporate security earlier in your SDLC practices, data governance, and more.

Latest Refcards and Trend Reports

Trend Report: Enterprise Application Security
Refcard #384: Advanced Cloud Security
Refcard #387: Getting Started With CI/CD Pipeline Security

DZone's Featured Security Resources

Keep Your Application Secrets Secret

By Istvan Zoltan Nagy
There is a common problem most backend developers face at least once in their careers: where should we store our secrets? It appears simple enough: there are plenty of services focusing on this very issue, so we just need to pick one and move on to the next task. It sounds easy, but how can we pick the right solution for our needs? We should evaluate our options to see more clearly.

The Test

For the demonstration, we can take a simple Spring Boot application as an example. This is a good fit because it is one of the most popular technology choices on the backend today. In our example, we will assume we need to use a MySQL database over JDBC; therefore, our secrets will be the connection URL, driver class name, username, and password. This is only a proof of concept; any dependency would do as long as it uses secrets. We can easily generate such a project using Spring Initializr. We will get the DataSource auto-configured and then create a bean that will do the connection test. The test can look like this:

Java

@Component
public class MySqlConnectionCheck {

    private final DataSource dataSource;

    @Autowired
    public MySqlConnectionCheck(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void verifyConnectivity() throws SQLException {
        try (final Connection connection = dataSource.getConnection()) {
            query(connection);
        }
    }

    private void query(Connection connection) throws SQLException {
        final String sql = "SELECT CONCAT(@@version_comment, ' - ', VERSION()) FROM DUAL";
        try (final ResultSet resultSet = connection.prepareStatement(sql).executeQuery()) {
            resultSet.next();
            final String value = resultSet.getString(1);
            //write something that will be visible on the Gradle output
            System.err.println(value);
        }
    }
}

This class will establish a connection to MySQL and make sure we are, in fact, using MySQL, as it prints the MySQL version comment and version. This way we would notice our mistake even if an auto-configured H2 instance was used by the application. Furthermore, if we generate a random password for our MySQL Docker container, we can make sure we are using the instance we wanted, validating that the whole configuration works properly. Back to the problem, shall we?

Storing Secrets

The Easy Way

The most trivial option is to store the secrets together with the code, either hard-coded or as a configuration property, using profiles to separate environments (dev/test/staging/prod). As simple as it is, this is a horrible idea, as many popular sites had to learn the hard way over the years. These "secrets" are anything but secret. As soon as someone gets access to the repository, they will have the credentials to the production database. Adding insult to injury, we won't even know about it! This is the most common cause of data breaches. A good indicator of the seriousness of the situation is how common secret scanning offerings have become on GitHub, GitLab, Bitbucket, and other git hosting services.

The Right Way

Now that we see what the problem is, we can start to look for better options. There is one common thing we will notice in all the solutions we can use: they want us to store our secrets in an external service that will keep them secure. This comes with a lot of benefits these services can provide, such as:

- Solid access control.
- Encrypted secrets (and sometimes more, like certificates and keys).
- Auditable access logs.
- A way to revoke access/rotate secrets in case of a suspected breach.
- Natural separation of environments, as they are part of the stack (one secrets manager per env).

Sounds great, did we solve everything? Well, it is not that simple. We have some new questions we need to answer first:

- Who will host and maintain these?
- Where should we put the secrets we need for authentication when we want to access the secrets manager?
- How will we run our code locally on the developer laptops?
- How will we run our tests on CI?
- Will it cost anything?

These are not trivial, and their answers depend very much on the solution we want to use. Let us review them one by one in the next section.

Examples of Secrets Managers

In all cases below, we will introduce the secrets manager as a new component of our stack, so if we had an application and a database, it would look like the following diagram.

HashiCorp Vault

If we go for the popular open-source option, HashiCorp Vault, we can either self-host or use their managed service, HCP Vault. Depending on the variant we select, we may or may not have some maintenance effort already, but it answers the first question. Answering the rest should be easy as well. Regarding authentication, we can use, for example, the AppRole auth method, providing the necessary credentials to our application instances in each environment through environment variables. Regarding local and CI execution, we can simply configure and run a Vault instance in dev server mode on the machine where the app should run and pass the necessary credentials using environment variables, similarly to the live app instances. As these are local vaults providing access to throw-away dev databases, we should not worry too much about their security, as we should avoid storing meaningful data in them. To avoid spending a lot of effort on maintaining these local/CI Vault instances, it can be a clever idea to store their contents in a central location and let each developer update their vault using a single command every now and then. Regarding the cost, it depends on a few things. If you can go with the self-hosted open-source option, you should worry only about the VM cost (and the time spent on maintenance); otherwise, you might need to figure out how to optimize the license/support cost.

Cloud-Based Solutions

If we are hosting our services with one of the three big cloud providers, we have even more options. AWS, Azure, and Google Cloud all offer a managed secrets manager service. Probably because of the nature of the problem, AWS Secrets Manager, Azure Key Vault, and Google Cloud Secret Manager share many similarities. For example, each of them:

- Stores versioned secrets.
- Logs access to the service and its contents.
- Uses solid authentication and authorization features.
- Is well integrated with other managed services of the same provider.
- Provides an SDK for developers in some popular languages.

At the same time, we should keep in mind that these are still hugely different services. Some of the obvious differences are the APIs they use for communication and the additional features they provide. For example, Azure Key Vault can store secrets, keys, and certificates, while AWS and GCP provide separate managed services for these additional features. Thinking about the questions we wanted to answer, they can answer the first two the same way: all of them are managed services, and the managed identity solution of the cloud provider they belong to is the most convenient, secure way to access them.
Thanks to this, we do not need to bother storing secrets/tokens in our application configuration, only the URL of the secrets manager, which is not considered a secret. Regarding the cost, AWS and GCP charge by the number of secrets and the number of API calls, while Azure only charges for the latter. In general, they are very reasonably priced, and we can sleep better at night knowing our security posture is a bit better.

Trouble starts when we try to answer the remaining two questions dealing with the local and CI use cases. All three solutions can be accessed from the outside world (given the proper network configuration), but simply punching holes in a firewall and sharing the same secrets manager credentials is not an ideal solution. There are situations when doing so is simply not practical, such as the following cases:

- Our team is scattered around the globe working from home, so we would not be able to use strong IP restrictions, or we would need a constant VPN connection just to build/test the code. Needing an internet connection for tests is bad enough, but using a VPN constantly while at work can put additional stress on the infrastructure and the team at the same time.
- Our CI instances are spawning with random IPs from an unknown range, so we cannot set proper IP restrictions. A similar case to the previous one.
- We cannot trust the whole team with the secrets of the shared secrets manager. For example, in the case of open-source projects, we cannot run around and share a secrets manager instance with the rest of the world.
- We need to change the contents of the secrets manager during the tests. When this happens, we are risking isolation problems between each developer and CI instance. We cannot launch a different secrets manager instance for each person and process (or test case), as that would not be very scalable.
- We do not want to pay extra for the additional secrets managers used in these cases.

Can We Fake It Locally?

Usually, this would be the moment when I start to search for a suitable test double and formulate plans about using that instead of the real service locally and on CI. What do we expect from such a test double?

- Behave like the real service would, including in exceptional situations.
- Be actively maintained to reduce the risk of lagging behind in case of API version changes in the real service.
- Have a way to initialize the content of the secrets manager double on start-up, so no additional code is needed in the application.
- Allow us to synchronize the secret values between the team and CI instances to reduce maintenance cost.
- Let us start and throw away the test double simply, locally and on CI.
- Not use a lot of resources.
- Not introduce additional dependencies to our application if possible.

I know about third-party solutions ticking all the boxes in the case of AWS and Azure, while I have failed to locate one for GCP.

Solving the Local Use Case for Each Secrets Manager in Practice

It is finally time for us to roll up our sleeves and get our hands dirty. How should we modify our test project to be able to use our secrets manager integrations locally? Let us see for each of them.

HashiCorp Vault

Since we can run the real thing locally, getting a test double is pointless.
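For instance, outside of Docker, a throw-away dev-mode Vault can be started and seeded straight from the CLI. The following is only a sketch with placeholder token and connection values; the Docker Compose setup shown below achieves the same thing:

Shell

# start an in-memory dev Vault with a KV v1 mount and a fixed (placeholder) root token
vault server -dev -dev-kv-v1 -dev-root-token-id="00000000-0000-0000-0000-000000000000" &

# point the CLI at the dev server and authenticate with that token
export VAULT_ADDR="http://127.0.0.1:8200"
export VAULT_TOKEN="00000000-0000-0000-0000-000000000000"

# seed the datasource secrets the application reads from secret/datasource
vault kv put secret/datasource \
    url="jdbc:mysql://localhost:15306/" \
    driverClassName="com.mysql.cj.jdbc.Driver" \
    username="root" \
    password="16276ec1-a682-4022-b859-38797969abc6"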
We can simply integrate Vault using the Spring Vault module by adding a property source:

Java

@Component("SecretPropertySource")
@VaultPropertySource(value = "secret/datasource", propertyNamePrefix = "spring.datasource.")
public class SecretPropertySource {

    private String url;
    private String username;
    private String password;
    private String driverClassName;

    // ... getters and setters ...
}

As well as a configuration for the "dev" profile:

Java

@Configuration
@Profile("dev")
public class DevClientConfig extends AbstractVaultConfiguration {

    @Override
    public VaultEndpoint vaultEndpoint() {
        final String uri = getEnvironment().getRequiredProperty("app.secrets.url");
        return VaultEndpoint.from(URI.create(uri));
    }

    @Override
    public ClientAuthentication clientAuthentication() {
        final String token = getEnvironment().getRequiredProperty("app.secrets.token");
        return new TokenAuthentication(token);
    }

    @Override
    public VaultTemplate vaultTemplate() {
        final VaultTemplate vaultTemplate = super.vaultTemplate();
        final SecretPropertySource datasourceProperties = new SecretPropertySource();
        datasourceProperties.setUrl("jdbc:mysql://localhost:15306/");
        datasourceProperties.setDriverClassName("com.mysql.cj.jdbc.Driver");
        datasourceProperties.setUsername("root");
        datasourceProperties.setPassword("16276ec1-a682-4022-b859-38797969abc6");
        vaultTemplate.write("secret/datasource", datasourceProperties);
        return vaultTemplate;
    }
}

We need to be careful: each bean that depends on the fetched secret values (or on the DataSource) must be marked with @DependsOn("SecretPropertySource") to make sure it is not populated earlier during start-up, while the Vault-backed PropertySource is not yet registered. As for the reason we used a "dev"-specific profile, it was necessary because of two things:

- The additional initialization of the vault contents on start-up.
- The simplified authentication, as we are using a simple token instead of the aforementioned AppRole.

Performing the initialization here solves the worries about the maintenance of the vault contents, as the code takes care of it, and we did not need any additional dependencies either. Of course, it would have been even better if we used some Docker magic to add those values without ever needing to touch Java. This might be an improvement for later. Speaking of Docker, the Docker Compose file is simple, as seen below:

YAML

version: "3"
services:
  vault:
    container_name: self-hosted-vault-example
    image: vault
    ports:
      - '18201:18201'
    restart: always
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -dev-kv-v1 -config=/vault/config/vault.hcl
    volumes:
      - config-import:/vault/config:ro
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: 00000000-0000-0000-0000-000000000000
      VAULT_TOKEN: 00000000-0000-0000-0000-000000000000
  # ... MySQL config ...
volumes:
  config-import:
    driver: local
    driver_opts:
      type: "none"
      o: "bind"
      device: "vault"

The key points to remember are the dev mode in the entrypoint, the volume config that allows us to add the configuration file, and the environment variables baking in the dummy credentials we will use in the application. As for the configuration, we need to set in-memory mode and configure an HTTP endpoint without TLS:

disable_mlock = true
storage "inmem" {}
listener "tcp" {
  address     = "0.0.0.0:18201"
  tls_disable = 1
}
ui = true
max_lease_ttl = "7200h"
default_lease_ttl = "7200h"
api_addr = "http://127.0.0.1:18201"

The complexity of the application might require some changes in the Vault configuration or the Docker Compose content.
However, for this simple example, we should be fine. Running the project should produce the expected output:

MySQL Community Server - GPL - 8.0.32

We are done with configuring Vault for local use. Setting it up for tests should be even simpler using the things we have learned here. Also, we can simplify some of the steps there if we decide to use the relevant Testcontainers module.

Google Cloud Secret Manager

As there is no readily available test double for Google Cloud Secret Manager, we need to make a trade-off. We can choose from the following three options:

- We can fall back to the easy option for the local/CI case, disabling the logic that fetches the secrets whenever we are not in a real environment. In this case, we will not know whether the integration works until we deploy the application somewhere.
- We can decide to use some shared Secret Manager instances, or even let every developer create one for themselves. This can solve the problem locally, but it is inconvenient compared to the solution we wanted, and we would need to avoid running our CI tests in parallel and clean up perfectly in case the content of the Secret Manager must change on CI.
- We can try mocking/stubbing the necessary endpoints of the Secret Manager ourselves. WireMock can be a good start for the HTTP API, or we can even start from nothing. It is a worthy endeavor for sure, but it will take a lot of time to do well. Also, if we do this, we must consider the ongoing maintenance effort.

As the decision will require quite different solutions for each, there is not much we can solve in general.

AWS Secrets Manager

Things are better in the case of AWS, where LocalStack is a tried-and-true test double with many features. Chances are that if you are using other AWS managed services in your application, you are using LocalStack already, making this even more appealing. Let us make some changes to our demo application to demonstrate how simple it is to implement the AWS Secrets Manager integration as well as use LocalStack locally.

Fetching the Secrets

First, we need a class that will know the names of the secrets in the Secrets Manager:

Java

@Configuration
@ConfigurationProperties(prefix = "app.secrets.key.db")
public class SecretAccessProperties {

    private String url;
    private String username;
    private String password;
    private String driver;

    // ... getters and setters ...
}

This will read the configuration and let us conveniently access the name of each secret with a simple method call.
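For example, the binding could look like this in a dev properties file (a sketch: the exact keys are an assumption based on the prefix and field names above, and the values match the secret names created by the LocalStack init script shown later):

Properties files

app.secrets.key.db.url=database-connection-url
app.secrets.key.db.driver=database-driver
app.secrets.key.db.username=database-username
app.secrets.key.db.password=database-password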
Next, we need to implement a class that will handle communication with the Secrets Manager:

Java

@Component("SecretPropertySource")
public class SecretPropertySource extends EnumerablePropertySource<Map<String, String>> {

    private final AWSSecretsManager client;
    private final Map<String, String> mapping;
    private final Map<String, String> cache;

    @Autowired
    public SecretPropertySource(SecretAccessProperties properties,
                                final AWSSecretsManager client,
                                final ConfigurableEnvironment environment) {
        super("aws-secrets");
        this.client = client;
        mapping = Map.of(
                "spring.datasource.driver-class-name", properties.getDriver(),
                "spring.datasource.url", properties.getUrl(),
                "spring.datasource.username", properties.getUsername(),
                "spring.datasource.password", properties.getPassword()
        );
        environment.getPropertySources().addFirst(this);
        cache = new ConcurrentHashMap<>();
    }

    @Override
    public String[] getPropertyNames() {
        return mapping.keySet()
                .toArray(new String[0]);
    }

    @Override
    public String getProperty(String property) {
        if (!Arrays.asList(getPropertyNames()).contains(property)) {
            return null;
        }
        final String key = mapping.get(property);
        //not using computeIfAbsent to avoid locking map while the value is resolved
        if (!cache.containsKey(key)) {
            cache.put(key, client
                    .getSecretValue(new GetSecretValueRequest().withSecretId(key))
                    .getSecretString());
        }
        return cache.get(key);
    }
}

This PropertySource implementation knows how each secret name translates to the Spring Boot configuration properties used for the DataSource configuration, self-registers as the first property source, and caches the result whenever a known property is fetched. We need to use the @DependsOn annotation, the same as in the Vault example, to make sure the properties are fetched in time. As we need to use basic authentication with LocalStack, we need to implement one more class, which will only run in the "dev" profile:

Java

@Configuration
@Profile("dev")
public class DevClientConfig {

    @Value("${app.secrets.url}")
    private String managerUrl;
    @Value("${app.secrets.accessKey}")
    private String managerAccessKey;
    @Value("${app.secrets.secretKey}")
    private String managerSecretKey;

    @Bean
    public AWSSecretsManager secretClient() {
        final EndpointConfiguration endpointConfiguration =
                new EndpointConfiguration(managerUrl, Regions.DEFAULT_REGION.getName());
        final BasicAWSCredentials credentials =
                new BasicAWSCredentials(managerAccessKey, managerSecretKey);
        return AWSSecretsManagerClientBuilder.standard()
                .withEndpointConfiguration(endpointConfiguration)
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
    }
}

Our only goal with this configuration is to set up a suitable AWSSecretsManager bean just for local use.

Setting Up the Test Double

With the coding done, we need to make sure LocalStack is started using Docker Compose whenever we start our Spring Boot app locally and stopped when we are done. Starting with the Docker Compose part, we need it to start LocalStack and use the built-in mechanism for running an initialization script when the container starts, using the approach shared here.
To do so, we need a script that can add the secrets:

Shell

#!/bin/bash
echo "########### Creating profile ###########"
aws configure set aws_access_key_id default_access_key --profile=localstack
aws configure set aws_secret_access_key default_secret_key --profile=localstack
aws configure set region us-west-2 --profile=localstack

echo "########### Listing profile ###########"
aws configure list --profile=localstack

echo "########### Creating secrets ###########"
aws secretsmanager create-secret --endpoint-url=http://localhost:4566 \
  --name database-connection-url --secret-string "jdbc:mysql://localhost:13306/" --profile=localstack || echo "ERROR"
aws secretsmanager create-secret --endpoint-url=http://localhost:4566 \
  --name database-driver --secret-string "com.mysql.cj.jdbc.Driver" --profile=localstack || echo "ERROR"
aws secretsmanager create-secret --endpoint-url=http://localhost:4566 \
  --name database-username --secret-string "root" --profile=localstack || echo "ERROR"
aws secretsmanager create-secret --endpoint-url=http://localhost:4566 \
  --name database-password --secret-string "e8ce8764-dad6-41de-a2fc-ef905bda44fb" --profile=localstack || echo "ERROR"

echo "########### Secrets created ###########"

This will configure the bundled AWS CLI inside the container and perform the necessary HTTP calls to port 4566, where the container listens. To let LocalStack use our script, we need to start our container with a volume attached. We can do so using the following Docker Compose configuration:

YAML

version: "3"
services:
  localstack:
    container_name: aws-example-localstack
    image: localstack/localstack:latest
    ports:
      - "14566:4566"
    environment:
      LAMBDA_DOCKER_NETWORK: 'my-local-aws-network'
      LAMBDA_REMOTE_DOCKER: 0
      SERVICES: 'secretsmanager'
      DEFAULT_REGION: 'us-west-2'
    volumes:
      - secrets-import:/docker-entrypoint-initaws.d:ro
  # ... MySQL config ...
volumes:
  secrets-import:
    driver: local
    driver_opts:
      type: "none"
      o: "bind"
      device: "localstack"

This will set up the volume, start LocalStack with the "secretsmanager" feature active, and map port 4566 from the container to port 14566 on the host so that our AWSSecretsManager can access it using the following configuration:

Properties files

app.secrets.url=http://localhost:14566
app.secrets.accessKey=none
app.secrets.secretKey=none

If we run the project, we will see the expected output:

MySQL Community Server - GPL - 8.0.32

Well done, we have successfully configured our local environment. We can easily replicate these steps for the tests as well. We can even create multiple throw-away containers from our tests, for example, using Testcontainers.

Azure Key Vault

Implementing the Azure Key Vault solution will look like a cheap copy-paste job after the AWS Secrets Manager example we have just implemented above.

Fetching the Secrets

We have the same SecretAccessProperties class for the same reason. The only meaningful difference in SecretPropertySource is the fact that we are using the Azure SDK.
The changed method will be this:

Java

@Override
public String getProperty(String property) {
    if (!Arrays.asList(getPropertyNames()).contains(property)) {
        return null;
    }
    final String key = mapping.get(property);
    //not using computeIfAbsent to avoid locking map while the value is resolved
    if (!cache.containsKey(key)) {
        cache.put(key, client.getSecret(key).getValue());
    }
    return cache.get(key);
}

The only missing piece is the "dev"-specific client configuration that will create a dummy token and an Azure Key Vault SecretClient for us:

Java

@Configuration
@Profile("dev")
public class DevClientConfig {

    @Value("${app.secrets.url}")
    private String vaultUrl;
    @Value("${app.secrets.user}")
    private String vaultUser;
    @Value("${app.secrets.pass}")
    private String vaultPass;

    @Bean
    public SecretClient secretClient() {
        return new SecretClientBuilder()
                .credential(new BasicAuthenticationCredential(vaultUser, vaultPass))
                .vaultUrl(vaultUrl)
                .disableChallengeResourceVerification()
                .buildClient();
    }
}

With this, the Java-side changes are complete; we can add the missing configuration, and the application is ready:

Properties files

app.secrets.url=https://localhost:10443
app.secrets.user=dummy
app.secrets.pass=dummy

The file contents are self-explanatory: we have some dummy credentials for the simulated authentication and a URL for accessing the vault.

Setting Up the Test Double

Although setting up the test double will be similar to the LocalStack solution we implemented above, it will not be the same. We will use Lowkey Vault, a fake that implements the API endpoints we need, and more. As Lowkey Vault provides a way for us to import the vault contents using an attached volume, we can start by creating an import file containing the properties we will need:

{
  "vaults": [
    {
      "attributes": {
        "baseUri": "https://{{host}}:{{port}}",
        "recoveryLevel": "Recoverable+Purgeable",
        "recoverableDays": 90,
        "created": {{now 0}},
        "deleted": null
      },
      "keys": {},
      "secrets": {
        "database-connection-url": {
          "versions": [
            {
              "vaultBaseUri": "https://{{host}}:{{port}}",
              "entityId": "database-connection-url",
              "entityVersion": "00000000000000000000000000000001",
              "attributes": {"enabled": true, "created": {{now 0}}, "updated": {{now 0}}, "recoveryLevel": "Recoverable+Purgeable", "recoverableDays": 90},
              "tags": {},
              "managed": false,
              "value": "jdbc:mysql://localhost:23306/",
              "contentType": "text/plain"
            }
          ]
        },
        "database-username": {
          "versions": [
            {
              "vaultBaseUri": "https://{{host}}:{{port}}",
              "entityId": "database-username",
              "entityVersion": "00000000000000000000000000000001",
              "attributes": {"enabled": true, "created": {{now 0}}, "updated": {{now 0}}, "recoveryLevel": "Recoverable+Purgeable", "recoverableDays": 90},
              "tags": {},
              "managed": false,
              "value": "root",
              "contentType": "text/plain"
            }
          ]
        },
        "database-password": {
          "versions": [
            {
              "vaultBaseUri": "https://{{host}}:{{port}}",
              "entityId": "database-password",
              "entityVersion": "00000000000000000000000000000001",
              "attributes": {"enabled": true, "created": {{now 0}}, "updated": {{now 0}}, "recoveryLevel": "Recoverable+Purgeable", "recoverableDays": 90},
              "tags": {},
              "managed": false,
              "value": "5b8538b6-2bf1-4d38-94f0-308d4fbb757b",
              "contentType": "text/plain"
            }
          ]
        },
        "database-driver": {
          "versions": [
            {
              "vaultBaseUri": "https://{{host}}:{{port}}",
              "entityId": "database-driver",
              "entityVersion": "00000000000000000000000000000001",
              "attributes": {"enabled": true, "created": {{now 0}}, "updated": {{now 0}}, "recoveryLevel": "Recoverable+Purgeable", "recoverableDays": 90},
              "tags": {},
              "managed": false,
              "value": "com.mysql.cj.jdbc.Driver",
              "contentType": "text/plain"
            }
          ]
        }
      }
    }
  ]
}

This is a Handlebars template that allows us to use placeholders for the host name, port, and the created/updated/etc. timestamp fields. We must use the {{port}} placeholder, as we want to make sure we can use any port when we start our container, but the rest of the placeholders are optional; we could have just written a literal there. See the quick start documentation for more information. Starting the container has a similar complexity as in the case of the AWS example:

YAML

version: "3"
services:
  lowkey-vault:
    container_name: akv-example-lowkey-vault
    image: nagyesta/lowkey-vault:1.18.0
    ports:
      - "10443:10443"
    volumes:
      - vault-import:/import/:ro
    environment:
      LOWKEY_ARGS: >
        --server.port=10443
        --LOWKEY_VAULT_NAMES=-
        --LOWKEY_IMPORT_LOCATION=/import/keyvault.json.hbs
  # ... MySQL config ...
volumes:
  vault-import:
    driver: local
    driver_opts:
      type: "none"
      o: "bind"
      device: "lowkey-vault/import"

We need to notice almost the same things as before: the port number is set, and the Handlebars template will use the server.port parameter and localhost by default, so the import should work once we have attached the volume using the same approach as before. The only remaining step is configuring our application to trust the self-signed certificate of the test double, which is used for providing an HTTPS connection. This can be done by using the PKCS#12 store from the Lowkey Vault repository and telling Java that it should be trusted:

Groovy

bootRun {
    systemProperty("javax.net.ssl.trustStore", file("${projectDir}/local/local-certs.p12"))
    systemProperty("javax.net.ssl.trustStorePassword", "changeit")
    systemProperty("spring.profiles.active", "dev")
    dependsOn tasks.composeUp
    finalizedBy tasks.composeDown
}

Running the project will log the expected string as before:

MySQL Community Server - GPL - 8.0.32

Congratulations, we can run our app without the real Azure Key Vault. Same as before, we can use Testcontainers for our tests, but in this case the Lowkey Vault module is a third-party module from the Lowkey Vault project, so it is not in the list provided by the Testcontainers project.

Summary

We have established that keeping secrets in the repository defeats their purpose. Then, we saw multiple solution options for the problem identified at the beginning, so we can select the best secrets manager depending on our context. Also, we can tackle the local and CI use cases using the examples shown above. The full example projects can be found on GitHub here.
How To Build an SBOM

By Gunter Rotsaert CORE
A Software Bill of Materials (SBOM) is getting more and more important in the software supply chain. In this blog, you will learn what an SBOM is and how to build one in an automated way. Enjoy!

1. Introduction

An SBOM is a list of the software components that make up a software product. This way, it becomes transparent which software libraries, components, etc., and which versions of them, are used in the software product. As a consequence, you will be able to react more adequately when a security vulnerability is reported. You only need to check the SBOMs for the vulnerable library, and you will know immediately which applications are impacted by the vulnerability. The SBOM of a library or application you want to use can also help you in your decision-making. It will become more common for software suppliers to be required to deliver an up-to-date SBOM with each software delivery. Based on this information, you can make a risk assessment of whether you want to use the library or application. When you are a software supplier, you need to ensure that you deliver an SBOM with each software release. This actually means that you need to create the SBOM in an automated way, preferably in your build pipeline.

As written before, the SBOM can help you check whether a library used in your application contains security vulnerabilities. With the proper tooling, this check can be done in an automated way in your build pipeline. When security vulnerabilities are found, you can fail the build pipeline. One of these tools is grype, which can take an SBOM as input and check whether any components are used with known security vulnerabilities. In a previous post, it is explained how grype can be used. In that post, a Docker image is the input to grype, but it is even better to create an SBOM first and then provide it to grype. How to create the SBOM is explained in this post.

When you start reading about SBOMs, you will notice that two standards are commonly used:

- CycloneDX: an open-source project that originated within the OWASP community;
- SPDX (The Software Package Data Exchange): an international open standard format, also open source and hosted by the Linux Foundation.

So, which standard should you use? This is a difficult question to answer. Generally, it is stated that both standards will continue to exist next to each other, and tools are advised to support both. SPDX was initially set up for license management, whereas CycloneDX has its primary focus on security. Reading several resources, the preferred format is CycloneDX when your focus is on security. Interesting reads are SBOM formats SPDX and CycloneDX compared and the publication Using the Software Bill of Materials for Enhancing Cybersecurity from the National Cyber Security Centre of the Ministry of Justice and Security of the Netherlands. The latter is a must-read.

In the remainder of this blog, you will learn how to use syft for building an SBOM. Syft is also a product of Anchore, just like grype, and therefore integrates well with grype, the vulnerability scanning tool. Syft supports many ecosystems and has several export formats. It definitely supports CycloneDX and SPDX. CycloneDX also has tools for building SBOMs, but syft is one tool that supports many ecosystems, which is an advantage compared to using multiple tools. The sources used in this post are available on GitHub.
2. Prerequisites

The prerequisites needed for this blog are:

- Basic Linux knowledge;
- Basic Java knowledge;
- Basic JavaScript knowledge;
- Basic Spring Boot knowledge.

3. Application Under Test

Before continuing, you need an application for which to build the SBOM. This is a basic application that consists of a Spring Boot backend and a Vue.js frontend. The application can be built with Maven and contains two Maven modules, one for the backend and one for the frontend. More information about the setup can be read in a previous post. It is important, however, to build the application first. This can be done with the following command:

Shell

$ mvn clean verify

4. Installation

Installation of syft can be done by executing the following script:

Shell

$ curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sudo sh -s -- -b /usr/local/bin

Verify the installation by executing the following command:

Shell

$ syft --version
syft 0.64.0

5. Build Backend SBOM

Navigate to the backend directory and execute the following command:

Shell

$ syft dir:. --exclude ./**/sbom.*.json --output cyclonedx-json=sbom.cyclonedx.build-complete-dir.json

The parameters do the following:

- dir:. : Scan the entire directory in order to find dependencies;
- --exclude: Exclude already present SBOM files, because you want to generate the SBOM file anew every time based on the current state of the repository;
- --output: Define the output format to use and the file name of the SBOM file.

The SBOM file sbom.cyclonedx.build-complete-dir.json is created in the backend directory. Take a closer look at the SBOM format:

JSON

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "serialNumber": "urn:uuid:afbe7b48-b376-40fb-a0d4-6a16fda38a0f",
  "version": 1,
  "metadata": {
    "timestamp": "2023-01-14T16:35:35+01:00",
    "tools": [
      {
        "vendor": "anchore",
        "name": "syft",
        "version": "0.64.0"
      }
    ],
    "component": {
      "bom-ref": "af63bd4c8601b7f1",
      "type": "file",
      "name": "."
    }
  },
  "components": [
    ...
  ]
}

The top part consists of metadata: the format of the SBOM, versions used, which tool was used, etc. The components part consists of a list of all the components syft has found. The complete specification of CycloneDX can be found here. The component list is the following and corresponds to the list of libraries that can be found in the target/backend-0.0.1-SNAPSHOT.jar file. The libraries are located in the directory /BOOT-INF/lib/ in the jar file (the jar file is just a zip file and can be opened with any archive tool).

Plain Text

backend
jackson-annotations
jackson-core
jackson-databind
jackson-datatype-jdk8
jackson-datatype-jsr310
jackson-module-parameter-names
jakarta.annotation-api
jul-to-slf4j
log4j-api
log4j-to-slf4j
logback-classic
logback-core
micrometer-commons
micrometer-observation
slf4j-api
snakeyaml
spring-aop
spring-beans
spring-boot
spring-boot-autoconfigure
spring-boot-jarmode-layertools
spring-boot-starter-test
spring-boot-starter-web
spring-context
spring-core
spring-expression
spring-jcl
spring-web
spring-webmvc
tomcat-embed-core
tomcat-embed-el
tomcat-embed-websocket

Now take a closer look at the jackson-annotations component in the SBOM file. In the properties section, you can see that this component has a property syft:package:foundBy with the value java-cataloger. This means that this component was found in the jar file.
JSON

{
  "bom-ref": "pkg:maven/com.fasterxml.jackson.core/jackson-annotations@2.14.1?package-id=9cdc3a1e17ebbb68",
  "type": "library",
  "group": "com.fasterxml.jackson.core",
  "name": "jackson-annotations",
  "version": "2.14.1",
  "cpe": "cpe:2.3:a:jackson-annotations:jackson-annotations:2.14.1:*:*:*:*:*:*:*",
  "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-annotations@2.14.1",
  "externalReferences": [
    {
      "url": "",
      "hashes": [
        {
          "alg": "SHA-1",
          "content": "2a6ad504d591a7903ffdec76b5b7252819a2d162"
        }
      ],
      "type": "build-meta"
    }
  ],
  "properties": [
    {
      "name": "syft:package:foundBy",
      "value": "java-cataloger"
    },
    {
      "name": "syft:package:language",
      "value": "java"
    },
    {
      "name": "syft:package:metadataType",
      "value": "JavaMetadata"
    },
    {
      "name": "syft:package:type",
      "value": "java-archive"
    },
    ...
  ]
}

When you take a look at the component spring-boot-starter-web, it mentions that this component was found by the java-pom-cataloger. This means that this component was found in the pom file. This is quite interesting, because it would mean that syft cannot find transitive dependencies based on the sources only. Execute the following command, where the target directory is excluded from the analysis:

Shell

$ syft dir:. --exclude ./**/sbom.*.json --exclude ./**/target --output cyclonedx-json=sbom.cyclonedx.build-sources.json

The result can be found in the file sbom.cyclonedx.build-sources.json, and the previously made assumption seems to be right: only the spring-boot-starter-web and spring-boot-starter-test dependencies are found. This is, after all, not a big issue, but you have to be aware of it.

6. Build Frontend SBOM

Navigate to the frontend directory and execute the following command:

Shell

$ syft dir:. --exclude ./**/sbom.*.json --output cyclonedx-json=sbom.cyclonedx.build-complete-dir.json

This analysis takes a bit longer than the backend analysis, but after a few seconds, the sbom.cyclonedx.build-complete-dir.json file is created. Again, similar information can be found in the SBOM. The information now comes from the javascript-lock-cataloger. This means that it originates from the package-lock.json file. Another difference is that the components also contain license information:

JSON

"components": [
  {
    "bom-ref": "pkg:npm/%40babel/parser@7.20.7?package-id=ca6a526d8a318088",
    "type": "library",
    "name": "@babel/parser",
    "version": "7.20.7",
    "licenses": [
      {
        "license": {
          "id": "MIT"
        }
      }
    ],
    ...

License information is included and can be used to check compliance with company policies. This information is, however, not yet available for Java packages.

7. Conclusion

SBOMs will become more and more important in the software development lifecycle. More and more customers will demand an SBOM, and therefore it is important to generate the SBOM automatically. Syft can help you with that. Besides that, the SBOM can be fed to grype in order to perform a security vulnerability analysis.
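As a rough illustration of that last step (a sketch; the severity threshold is just an example value, and the flags are worth double-checking against the grype documentation):

Shell

# scan the generated SBOM for known vulnerabilities
$ grype sbom:./sbom.cyclonedx.build-complete-dir.json

# or fail a build pipeline when vulnerabilities of high (or higher) severity are found
$ grype sbom:./sbom.cyclonedx.build-complete-dir.json --fail-on high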
How To Test IoT Security
By Anna Smith
IAM Best Practices
By Dwayne McDaniel
Security Architecture Review on a SASE Solution
By Akanksha Pathak
Top 10 Resources for Learning Solidity

If you want to become a smart contract developer on Ethereum, then you need to learn Solidity. Whether your goal is DeFi, blockchain gaming, digital collectibles (NFTs), or just web3 in general, Solidity is the foundational language behind the innovative projects on Ethereum. But where should you start? In this article, we'll look at 10 great ways you can learn Solidity. Whether you're a beginner or an experienced web2 developer, this guide will help you not only get started with Solidity, but master it. We'll look at all the best online courses, tutorials, documentation, and communities that can help you on your journey. First, however, if you're new to web3, let's provide some background on Solidity.

Why Learn Solidity?

Solidity is the language for writing smart contracts on the world's most popular smart contract blockchain, Ethereum. And it's not just for Ethereum. Multiple other blockchains, such as Avalanche and Binance Smart Chain, and L2s, such as Polygon and Optimism, are powered by Solidity. Learning Solidity not only opens up opportunities for you in Ethereum but also in the growing field of blockchain development overall. It's a perfect skill to have for web3! So let's look at 10 great ways to learn Solidity.

#1 - ConsenSys 10-Minute Ethereum Orientation

For a quick and thorough introduction (especially if you're new to blockchain and Ethereum), check out the 10-Minute Ethereum Orientation by ConsenSys. This is the perfect starting point to orient yourself with the key terms of web3, the web3 tech stack, and how Solidity fits into it all. ConsenSys is the company behind the most popular technologies of the web3 stack: MetaMask (the leading wallet), Infura (the leading web3 API), Truffle (dev and testing tools for Ethereum smart contracts), Diligence (a blockchain security company), and more. Solidity can be confusing, but these are the tools that make it easy to use. ConsenSys is a well-known source for learning, and we'll get into that more in point six.

#2 - CryptoZombies

Once you complete your intro, check out CryptoZombies. This is the OG resource for learning Solidity. It's a fun, interactive game that teaches you Solidity through building your own crypto-collectibles game. It's an excellent starting point for beginners who are interested in developing decentralized applications and smart contracts on Ethereum. The course provides an easy-to-follow, step-by-step tutorial that guides you through the process of writing Solidity smart contracts. It even has gamification elements to keep you motivated. And it's updated regularly: as Solidity adds features, they also add new learning materials. Modules on Oracles and Zero-Knowledge Proofs (zk technology) are some recent additions to the curriculum.

#3 - Speedrun Ethereum

Next is Speedrun Ethereum, a series of gamified quests for learning Solidity. These quests cover topics such as NFTs, DeFi, tokens, and more, and it's a lot of fun! It even covers more advanced Solidity concepts, including higher-order functions and inheritance. This course is great for intermediate-level Solidity learners who are familiar with the basics of Solidity.

#4 - Solidity by Example

Solidity by Example is a free resource that focuses on well-written code samples to teach Solidity. It provides a wide range of these examples, with each example explained in detail. This is less of a course and more of a resource for learning clean Solidity syntax. Highly recommended. Code samples range from simple to very advanced concepts. A good example, and one you can learn a lot from, is the full UniswapV2 contract.

#5 - Dapp University

If video is more your style, Dapp University is a YouTube channel with over 10 hours of hands-on tutorials. The tutorials are designed for both beginners and experienced Solidity developers. They cover topics such as setting up the development environment, writing Solidity smart contracts, and deploying them to the Ethereum blockchain. The content is well structured and provides easy-to-follow instructions that guide you through the process of building your own decentralized applications.

#6 - ConsenSys Academy

By the same company mentioned in #1, ConsenSys Academy offers several online courses, such as Blockchain Essentials, created to kick-start your Solidity developer journey. They also offer the Blockchain Developer Program On-Demand course, where you'll learn about the underpinnings of blockchain technology and how it all comes together to allow us to build the next generation of web applications. You'll learn about the types of smart contract code, get introduced to key development tools, and see the best practices for smart contract development, all to prepare you for the final project towards the end of the course. Learners get hands-on experience with tools like Infura and Truffle Ganache, which are some of the most popular and widely used development tools in the Ethereum ecosystem, focused on making Solidity easy to use. And as a product of ConsenSys, the Developer Program provides a direct link to the ConsenSys ecosystem, with access to some of the best resources and tools in the industry.

#7 - Udemy Ethereum Blockchain Developer Bootcamp With Solidity

This is another bootcamp, but in the form of an extensive Udemy course. It provides learners with up-to-date blockchain development tools, resources, and complete, usable projects to work on. The course is taught by an instructor who is a co-creator of the industry-standard Ethereum certification. This course is also updated frequently to reflect the latest changes in the ecosystem.

#8 - Certified Solidity Developer

Of course, there is always the certification path. Certified Solidity Developer is a certification offered by the Blockchain Council. It's expensive, but it provides learners with a solid foundation in Solidity and smart contract development, plus that piece of paper. The certification is well recognized and is one of the most highly rated blockchain developer accreditations. The course also provides learners with a deep understanding of smart contracts, their design patterns, and the various security implications of writing and deploying them on the Ethereum network.

#9 - Official Solidity Documentation

The Solidity documentation should not be underestimated. It's an essential resource for those learning Solidity. It has the added value of always being up to date with the latest version of Solidity, and it contains detailed information about the Solidity programming language. The documentation is available in nine different languages, including Chinese, French, Indonesian, Japanese, and others. Undoubtedly, you'll come back again and again to the Solidity documentation as you learn, so bookmark it now.

#10 - Solidity Communities and Forums

Finally, there are several Solidity communities and forums that are excellent resources, such as CryptoDevHub and the Solidity Forum. These communities are composed of Solidity experts, developers, and learners at all different levels. Ask questions, share knowledge, and collaborate on Solidity projects. By participating in these communities, you can keep up to date with the latest developments, gain insights into how other developers are approaching Solidity development, and make a few friends!

Learning Solidity: Just Get Started!

That's a great start to your path. Learning Solidity is a valuable investment in your career. With these resources, you should be well on your way to learning Solidity, joining web3, and writing and deploying your first smart contracts. Of course, the fastest way to learn is to jump right in, so go for it! Have a really great day!

By John Vester CORE
What Is Policy-as-Code? An Introduction to Open Policy Agent

In the cloud-native era, we often hear that "security is job zero," which means it's even more important than any number one priority. Modern infrastructure and methodologies bring us enormous benefits, but, at the same time, since there are more moving parts, there are more things to worry about: How do you control access to your infrastructure? Between services? Who can access what? There are many questions to be answered, including policies: a bunch of security rules, criteria, and conditions. Examples:

- Who can access this resource?
- Which subnets is egress traffic allowed from?
- Which clusters must a workload be deployed to?
- Which protocols are not allowed for servers reachable from the Internet?
- Which registries can binaries be downloaded from?
- Which OS capabilities can a container execute with?
- Which times of day can the system be accessed?

All organizations have policies, since they encode important knowledge about compliance with legal requirements, working within technical constraints, avoiding repeated mistakes, etc. Since policies are so important today, let's dive deeper into how to best handle them in the cloud-native era.

Why Policy-as-Code?

Policies are based on written or unwritten rules that permeate an organization's culture. For example, there might be a written rule in our organization explicitly saying: for servers accessible from the Internet on a public subnet, it's not a good practice to expose a port using the non-secure HTTP protocol. How do we enforce it? If we create infrastructure manually, a four-eyes principle may help: always have a second person involved when doing something critical. If we do Infrastructure as Code and create our infrastructure automatically with tools like Terraform, a code review could help. However, the traditional policy enforcement process has a few significant drawbacks:

- You can't guarantee this policy will never be broken. People can't be aware of all the policies at all times, and it's not practical to manually check against a list of policies.
- For code reviews, even senior engineers will not likely catch all potential issues every single time.
- Even if we had the best teams in the world enforcing policies with no exceptions, it would be difficult, if not impossible, to scale. Modern organizations are more likely to be agile, which means the numbers of employees, services, and teams keep growing. There is no way to physically staff a security team to protect all of those assets using traditional techniques.
- Policies could be (and will be) breached sooner or later because of human error. It's not a question of "if" but "when."

And that's precisely why most organizations (if not all) do regular security checks and compliance reviews before a major release, for example. We violate policies first and then create ex post facto fixes. I know, this doesn't sound right. What's the proper way of managing and enforcing policies, then? You've probably already guessed the answer, and you are right. Read on.

What Is Policy-as-Code (PaC)?

As business, teams, and maturity progress, we'll want to shift from manual policy definition to something more manageable and repeatable at the enterprise scale. How do we do that? First, we can learn from successful experiments in managing systems at scale:

- Infrastructure-as-Code (IaC): treat the content that defines your environments and infrastructure as source code.
- DevOps: the combination of people, process, and automation to achieve "continuous everything," continuously delivering value to end users.

Policy-as-Code (PaC) is born from these ideas. Policy as code uses code to define and manage policies, which are rules and conditions. Policies are defined, updated, shared, and enforced using code, leveraging Source Code Management (SCM) tools. By keeping policy definitions in source code control, whenever a change is made, it can be tested, validated, and then executed. The goal of PaC is not to detect policy violations but to prevent them. This leverages DevOps automation capabilities instead of relying on manual processes, allowing teams to move more quickly and reducing the potential for mistakes due to human error.

Policy-as-Code vs. Infrastructure-as-Code

The "as code" movement isn't new anymore; it aims at "continuous everything." The concept of PaC may sound similar to Infrastructure as Code (IaC), but while IaC focuses on infrastructure and provisioning, PaC improves security operations, compliance management, data management, and beyond. PaC can be integrated with IaC to automatically enforce infrastructural policies. Now that we've got the PaC vs. IaC question sorted out, let's look at the tools for implementing PaC.

Introduction to Open Policy Agent (OPA)

The Open Policy Agent (OPA, pronounced "oh-pa") is a Cloud Native Computing Foundation incubating project. It is an open-source, general-purpose policy engine that aims to provide a common framework for applying policy-as-code to any domain. OPA provides a high-level declarative language (Rego, pronounced "ray-go," purpose-built for policies) that lets you specify policy as code. As a result, you can define, implement, and enforce policies in microservices, Kubernetes, CI/CD pipelines, API gateways, and more. In short, OPA decouples decision-making from policy enforcement. When a policy decision needs to be made, you query OPA with structured data (e.g., JSON) as input, and OPA returns the decision:

Policy Decoupling

OK, less talk, more work: show me the code.

Simple Demo: Open Policy Agent Example

Prerequisite

To get started, download an OPA binary for your platform from GitHub releases. On macOS (64-bit):

curl -L -o opa https://openpolicyagent.org/downloads/v0.46.1/opa_darwin_amd64
chmod 755 ./opa

Tested on an M1 Mac; it works as well.

Spec

Let's start with a simple example to achieve attribute-based access control (ABAC) for a fictional Payroll microservice. The rule is simple: you can only access your own salary information or your subordinates', not anyone else's. So, if you are bob, and john is your subordinate, then you can access the following:

/getSalary/bob
/getSalary/john

But accessing /getSalary/alice as user bob would not be possible.

Input Data and Rego File

Let's say we have the structured input data (input.json file):

{
  "user": "bob",
  "method": "GET",
  "path": ["getSalary", "bob"],
  "managers": {
    "bob": ["john"]
  }
}

And let's create a Rego file.
Here we won't bother too much with the syntax of Rego; the comments should give you a good understanding of what this piece of code does.

File example.rego:

package example

default allow = false                        # default: not allow

allow = true {                               # allow if:
    input.method == "GET"                    # method is GET
    input.path = ["getSalary", person]
    input.user == person                     # input user is the person
}

allow = true {                               # allow if:
    input.method == "GET"                    # method is GET
    input.path = ["getSalary", person]
    managers := input.managers[input.user][_]
    contains(managers, person)               # input user is the person's manager
}

Run

The following should evaluate to true:

./opa eval -i input.json -d example.rego "data.example"

If we change the path in the input.json file to "path": ["getSalary", "john"], it still evaluates to true, since the second rule allows a manager to check their subordinates' salary. However, if we change the path to "path": ["getSalary", "alice"], it evaluates to false. Here we go: now we have a simple working ABAC solution for microservices!

Policy as Code Integrations

The example above is very simple and only useful for grasping the basics of how OPA works. But OPA is much more powerful and can be integrated with many of today's mainstream tools and platforms, like:

- Kubernetes
- Envoy
- AWS CloudFormation
- Docker
- Terraform
- Kafka
- Ceph
- And more.

To quickly demonstrate OPA's capabilities, consider Terraform code defining an auto-scaling group and a server on AWS. With a Rego policy, we can calculate a score based on the Terraform plan and return a decision according to the policy. It's super easy to automate the process:

- terraform plan -out tfplan to create the Terraform plan;
- terraform show -json tfplan | jq > tfplan.json to convert the plan into JSON format;
- opa exec --decision terraform/analysis/authz --bundle policy/ tfplan.json to get the result.
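As a rough sketch of what such a scoring policy could look like (not the exact policy from this article; the weights and the threshold below are arbitrary, illustrative values), a file like the hypothetical blast_radius.rego could be placed in the policy/ bundle:

package terraform.analysis

import input as tfplan

# illustrative blast-radius weights per resource type (assumed values)
weights = {
    "aws_autoscaling_group": 10,
    "aws_instance": 5
}

default authz = false

# allow the change only when the total score stays below the budget
authz = true {
    score < 30
}

# score = sum over resource types of (weight * number of changed resources of that type)
score = s {
    s := sum([x |
        some resource_type
        w := weights[resource_type]
        changed := [r |
            r := tfplan.resource_changes[_]
            r.type == resource_type
            r.change.actions[_] != "no-op"
        ]
        x := w * count(changed)
    ])
}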

By Tiexin Guo
GKE Security: Top 10 Strategies for Securing Your Cluster

Security is one of the key challenges in Kubernetes because of its configuration complexity and vulnerability. Managed container services like Google Kubernetes Engine (GKE) provide many protection features but don’t take all related responsibilities off your plate. Read on to learn more about GKE security and best practices to secure your cluster. Basic Overview of GKE Security GKE protects your workload in many layers, which include your container image, its runtime, the cluster network, and access to the cluster API server. That’s why Google recommends a layered approach to protecting your clusters and workloads. Enabling the right level of flexibility and security for your organization to deploy and maintain workloads may require different tradeoffs, as some settings may be too constraining. The most critical aspects of GKE security involve the following: Authentication and authorization; Control plane security, including components and configuration; Node security; Network security. These elements are also reflected in CIS Benchmarks, which help to structure work around security configurations for Kubernetes. Why Are CIS Benchmarks Crucial for GKE Security? Handling K8s security configuration isn’t exactly a walk in the park. The Red Hat 2022 State of Kubernetes and Container Security report found that almost one in four serious issues were vulnerabilities that could be remediated. Nearly 70% of incidents happened due to misconfigurations. Since their release by the Center for Internet Security (CIS), the CIS Benchmarks have become globally recognized best practices for implementing and managing cybersecurity mechanisms. The CIS Kubernetes Benchmark contains recommendations for K8s configuration that support a strong security posture. Written for the open-source Kubernetes distribution, it intends to be as universally applicable as possible. CIS GKE Benchmarking in Practice With a managed service like GKE, not all items on the CIS Benchmark are under your control. That’s why there are recommendations that you cannot audit or modify directly on your own. These involve: The control plane; The Kubernetes distribution; The nodes’ operating system. However, you still have to take care of upgrading the nodes that run your workloads and, of course, the workloads themselves. You need to audit and remediate any recommendations for these components. You could do it manually or use a tool that handles CIS benchmarking. With CAST AI’s container security module, for example, you can get an overview of benchmark discrepancies within minutes of connecting your cluster. The platform also prioritizes the issues it identifies, so you know which items require remediation first. When scanning your cluster, you also check it against the industry’s best practices, so you can better assess your overall security posture and plan further GKE hardening. Top 10 Strategies to Ensure GKE Security 1. Apply the Principle of Least Privilege This basic security tenet refers to granting a user account only the privileges that are essential to perform the intended function. It is codified in CIS GKE Benchmark 6.2.1: Prefer not running GKE clusters using the Compute Engine default service account. By default, your nodes get access to the Compute Engine service account. Its broad access makes it useful to multiple applications, but it also has more permissions than necessary to run your GKE cluster. That’s why you must create and use a minimally privileged service account instead of the default one – and follow suit in other contexts, too. 2.
Use RBAC to Strengthen Authentication and Authorization GKE supports multiple options for managing access to your clusters with role-based access control (RBAC). RBAC enables more granular access to Kubernetes resources at cluster and namespace levels, but it also lets you create detailed permission policies. CIS GKE Benchmark 6.8.4 underscores the need to give preference to RBAC over the legacy Attribute Based Access Control (ABAC). Another CIS GKE Benchmark (6.8.3) recommends using groups to manage users as it simplifies controlling identities and permissions. It also removes the need to update the RBAC configuration whenever users are added or removed from the group. 3. Enhance Your Control Plane’s Security Under the Shared Responsibility Model, Google manages the GKE control plane components for you. However, you remain responsible for securing your nodes, containers, and pods. By default, the Kubernetes API server uses a public IP address. You can protect it by using authorized networks and private clusters, which enable you to assign a private IP address. You can also enhance your control plane’s security by doing a regular credential rotation. When you initiate the process, the TLS certificates and cluster certificate authority are rotated automatically. 4. Upgrade Your GKE Infrastructure Regularly Kubernetes frequently releases new security features and patches, so keeping your K8s up-to-date is one of the simplest ways to improve your security posture. GKE patches and upgrades the control planes for you automatically. Node auto-upgrade also automatically upgrades nodes in your cluster. CIS GKE Benchmark 6.5.3 recommends keeping that setting on. If for any reason, you need to disable the auto-upgrade, Google advises performing upgrades monthly and following the GKE security bulletins for critical patches. 5. Protect Node Metadata CIS GKE Benchmarks 6.4.1 and 6.4.2 refer to two critical factors compromising your node security, which is still your responsibility. The v0.1 and v1beta1 Compute Engine metadata server endpoints were deprecated and shut down in 2020 as they didn’t enforce metadata query headers. Some attacks against Kubernetes rely on access to the VM’s metadata server to extract credentials. You can prevent those attacks with Workload identity or Metadata Concealment. 6. Disable the Kubernetes Dashboard Some years back, the world was electrified by the news of attackers gaining access to Tesla’s cloud resources and using them to mine cryptocurrency. The vector of attack, in that case, was a Kubernetes dashboard, which was exposed to the public with no authentication or elevated privileges. Complying with CIS GKE Benchmark 6.10.1 is recommended if you want to avoid following Tesla’s plight. This standard clearly outlines that you should disable Kubernetes web UI when running on GKE. By default, GKE 1.10 and later disable the K8s dashboard. You can also use the following code: gcloud container clusters update CLUSTER_NAME \ --update-addons=KubernetesDashboard=DISABLED 7. Follow the NSA-CISA Framework CIS Kubernetes Benchmark gives you a strong foundation for building a secure operating environment. But if you want to go further, make space for NSA-CISA Kubernetes Hardening Guidance in your security procedures. The NSA-CISA report outlines vulnerabilities within a Kubernetes ecosystem and recommends best practices for configuring your cluster for security. 
It presents recommendations on vulnerability scanning, identifying misconfigurations, log auditing, and authentication, helping you ensure that you appropriately address common security challenges. 8. Improve Your Network Security Most workloads running in GKE need to communicate with other services running inside and outside the cluster. However, you can control the traffic allowed to flow through your clusters. First, you can use network policies to limit pod-to-pod communication. By default, all cluster pods can be reached over the network via their pod IP address. You can lock down traffic in a namespace by defining which flows are allowed between your pods and stopping any traffic that doesn’t match the configured labels. Second, you can balance your Kubernetes pods with a network load balancer. To do so, you create a LoadBalancer service matching your pod’s labels. You will have an external-facing IP mapping to ports on your Kubernetes Pods, and you’ll be able to filter authorized traffic at the node level with kube-proxy. 9. Secure Pod Access to Google Cloud Resources Your containers and pods might need access to other resources in Google Cloud. There are three ways to do this: with Workload Identity, Node Service Account, and Service Account JSON Key. The simplest and most secure option to access Google Cloud resources is by using Workload Identity. This method allows your pods running on GKE to assume the permissions of a Google Cloud service account. You should use application-specific Google Cloud service accounts to provide credentials so that applications have the minimal necessary permissions, which you can revoke in case of a compromise. 10. Get a GKE-Configured Secret Manager CIS GKE Benchmark 6.3.1 recommends encrypting Kubernetes Secrets using keys managed in Cloud KMS. Google Kubernetes Engine gives you several options for secret management. You can use Kubernetes secrets natively in GKE, but you can also protect these at an application layer with a key you manage and application-layer secret encryption. There are also secrets managers like HashiCorp Vault, which provide a consistent, production-ready way to manage secrets in GKE. Make sure you check your options and pick an optimal solution. Assess GKE Security Within Minutes The Kubernetes ecosystem keeps growing, and so do its security configuration challenges. If you want to stay on top of GKE container security, you need to be able to identify potential threats and track them efficiently. Kubernetes security reports let you scan your GKE cluster against the CIS Benchmark, the NSA-CISA framework, and other container security best practices to identify vulnerabilities, spot misconfigurations, and prioritize them. It only takes a few minutes to get a complete overview of your cluster’s security posture.
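To make strategy 9 concrete: with Workload Identity, application code never handles a key file; it simply relies on Application Default Credentials, which GKE resolves to the Google service account bound to the pod's Kubernetes service account. Below is a hedged sketch using the Cloud Storage client library; the bucket name is a placeholder, and the bound service account is assumed to hold only a narrowly scoped role (for example, objectViewer on that bucket):
Java
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class WorkloadIdentityExample {
    public static void main(String[] args) throws Exception {
        // Application Default Credentials: on GKE with Workload Identity these are issued for
        // the Google service account bound to the pod, so no JSON key is baked into the image.
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();

        Storage storage = StorageOptions.newBuilder()
                .setCredentials(credentials)
                .build()
                .getService();

        // "my-app-bucket" is a placeholder bucket name.
        for (Blob blob : storage.list("my-app-bucket").iterateAll()) {
            System.out.println(blob.getName());
        }
    }
}
The same code runs unchanged with a mounted service account JSON key, which is what makes switching to Workload Identity a low-friction, lower-risk change.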

By Olesia Pozdniakova
How To Scan a URL for Malicious Content and Threats in Java

At this point, we’ve all heard the horror stories about clicking on malicious links, and if we’re unlucky enough, perhaps we’ve been the subject of one of those stories. Here’s one we’ll probably all recognize: an unsuspecting employee receives an email from a seemingly trustworthy source, and this email claims there’s been an attempt to breach one of their most important online accounts. The employee, feeling an immediate sense of dread, clicks on this link instinctively, hoping to salvage the situation before management becomes aware. When they follow this link, they’re confronted with a login interface they’re accustomed to seeing – or so they believe. Entering their email and password is second nature: they input this information rapidly and click “enter” without much thought. In their rush, this employee didn’t notice that the login interface looks very different than normal. Further, they’ve overlooked that the email address alerting them to this account “breach” contained 10 more characters than it would have if it had come from the account provider. On top of all that, they’ve failed to see that the link itself – a mix of tightly packed letters, symbols, and words which, in truth, they’ve hardly glanced at in the best of circumstances – contains improper spellings and characters all over the place. In about 30 seconds, this employee has unwittingly compromised an account with access to some of their employer’s most sensitive data, handing their login details to a cybercriminal far away who will, no doubt, waste little time in exploiting the situation for monetary gain. A boilerplate email phishing scenario such as this – the most basic example of a tried-and-true social engineering tactic, dating back to the early days of the internet – is just one of many threats involving URLs that continues to drive immense scrutiny around the origin and dissemination of malicious links. As the internet has scaled, the utility of URLs has grown in lockstep. We use URLs to share important content with our friends, colleagues, managers, clients, and customers all the time, quietly ensuring that URLs can continue to expand in their role as vehicles for social engineering scams, viruses, malware, and various other forms of cybersecurity threats. From this scrutiny, a culture of individual accountability has predominantly emerged: we, the targets of threatening URLs, are (justifiably) viewed as the most pivotal barrier between attack and breach. As a result, at an organizational level, the most important and common step taken to mitigate this issue involves training users on how to spot fraudulent links on their own. Employees of companies in diverse industries all over the world are increasingly taught to identify the obvious signs of malicious links (and social engineering/untrustworthy outreach), a practice which has, no doubt, proved highly beneficial in reducing instances of URL-driven breach. However, the vast criminal potential of URLs means user training isn’t quite enough to mitigate the issue entirely. To properly secure our invaluable data, we need to proactively implement security policies that can accurately identify and flag URL-based threats on their own. Like the tendencies of living viruses, the underlying strategies of URL threats (and all cybersecurity threats) inexorably evolve to defeat their victims, diminishing the utility of past security training until their relevance is dubious at best. 
For example, URLs are increasingly used as a lightweight method for sharing files across a network. When we receive a file link from someone we trust (someone we regularly receive files from), we have little reason to believe that link may be compromised, and – despite all our intense security training – we are still very much in danger of clicking on it. Unbeknownst to us, this link may contain a malicious ForcedDownload file that seeks to capitalize on our brief error in judgment and compromise our system before we can react. While individual accountability means blunders such as this should (and will) be considered our fault in the short term, that blame has a limited ability to deter the issue as it continues to evolve. The person who sent this link to us may have received it from a source they usually trust, and that source may have received it from someone they also usually trust, and someone towards the beginning of that chain of communication may not have had any security training at their job whatsoever, blindly forwarding links from a source they believed to be valuable but had never actually investigated before. Just as it’s important for us to assume links such as this might be dangerous, it’s equally important for our system’s security policies to assume the same, and to act against those links as diligently as possible before they reach a human layer of discretion. To that end, URL security APIs can play a key role, offering an efficient, value-add service to our application architecture while removing some of the burdens on our users to prevent malicious links from compromising our systems by themselves. Demonstration The purpose of this article is to provide a powerful, free-to-use REST API that scans website URLs for various forms of threats. This API accepts a website link (beginning with "http://" or "https://") string as input and returns key information about the contents of that URL in short order. The response body includes the following information: “CleanResult” – A Boolean indicating whether or not the link is clean, ensuring this link can be diverted immediately from its intended destination; “WebsiteThreatType” – A string value identifying if the underlying threat within the link is of the Malware, ForcedDownload, or Phishing variety (clean links will return “none”); “FoundViruses” – A subsection of viruses (“VirusName”) found within a given file URL (“FileName”), and the name of those viruses; “WebsiteHttpResponseCode” – The three-digit HTTP response code returned by the link. To complete a free API request, a free-tier API key is required, and that can be obtained by registering a free account on the Cloudmersive website (please note, this yields a limit of 800 API calls per month with no commitments). To take advantage of this API, follow the steps below to structure your API call in Java using complementary, ready-to-run code examples. To begin, your first step is to install the Java SDK. To install with Maven, add the below reference to the repository in pom.xml: <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> To complete the installation with Maven, next add the following reference to the dependency in pom.xml: <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> To install with Gradle instead, add it to your root build.gradle at the end of repositories: allprojects { repositories { ...
maven { url 'https://jitpack.io' } } } Following that, next, add the dependency in build.gradle, and you’re all done with the installation step: dependencies { implementation 'com.github.Cloudmersive:Cloudmersive.APIClient.Java:v4.25' } With installation out of the way, our next step is to add the imports and call the Virus Scan API: // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.ScanApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); ScanApi apiInstance = new ScanApi(); WebsiteScanRequest input = new WebsiteScanRequest(); // WebsiteScanRequest | try { WebsiteScanResult result = apiInstance.scanWebsite(input); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling ScanApi#scanWebsite"); e.printStackTrace(); } After that, you’re all done – no more code is required.
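One detail to double-check before running the snippet above: the WebsiteScanRequest object is created empty, so the link to be scanned still has to be set on it. Here is a hedged fragment continuing that example; it assumes the generated client exposes a setUrl setter for the request's Url field and getters matching the response fields described earlier (getCleanResult, getWebsiteThreatType):
Java
// Continues the example above (reuses apiInstance and input from the previous snippet).
// setUrl/getCleanResult/getWebsiteThreatType are assumed accessors for the fields
// described in the response body section.
input.setUrl("https://www.example.com"); // the link to scan, starting with "http://" or "https://"

try {
    WebsiteScanResult result = apiInstance.scanWebsite(input);
    if (Boolean.TRUE.equals(result.getCleanResult())) {
        System.out.println("Link is clean");
    } else {
        System.out.println("Threat type: " + result.getWebsiteThreatType());
    }
} catch (ApiException e) {
    System.err.println("Exception when calling ScanApi#scanWebsite");
    e.printStackTrace();
}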

By Brian O'Neill
Get Up to Speed With the Latest Cybersecurity Standard for Consumer IoT

With growing concern regarding data privacy and data safety today, Internet of Things (IoT) manufacturers have to up their game if they want to maintain consumer trust. This is the shared goal of the latest cybersecurity standard from the European Telecommunications Standards Institute (ETSI). Known as ETSI EN 303 645, the standard for consumer devices seeks to ensure data safety and achieve widespread manufacturer compliance. So, let’s dive deeper into this standard as more devices enter the home and workplace. The ETSI Standard and Its Protections It counts a long name but heralds an important era of device protection. ETSI EN 303 645 is a standard and method by which a certifying authority can evaluate IoT device security. Developed as an internationally applicable standard, ETSI offers manufacturers a baseline for security rather than a comprehensive set of precise guidelines. The standard may also lay the groundwork for various future IoT cybersecurity certifications in different regions around the world. For example, look at what’s happening in the European Union. Last September, the European Commission introduced a proposed Cyber Resilience Act, intended to protect consumers and businesses from products with inadequate security features. If passed, the legislation — a world-first on connected devices — will bring mandatory cybersecurity requirements for products with digital elements throughout their whole lifecycle. The prohibition of default and weak passwords, guaranteed support of software updates and mandatory testing for security vulnerabilities are just some of the proposals. Interestingly, these same rules are included in the ETSI standard. IoT Needs a Cybersecurity Standard Shockingly, a single home filled with smart devices could experience as many as 12,000 cyber attacks in a single week. While most of those cyber attacks will fail, the sheer number means some inevitably get through. The ETSI standard strives to keep those attacks out with basic security measures, many of which should already be common sense, but unfortunately aren’t always in place today. For example, one of the basic requirements of the ETSI standard is no universal default passwords. In other words, your fitness tracker shouldn’t have the same default password as every other fitness tracker of that brand on the market. Your smart security camera shouldn’t have a default password that anyone who owns a similar camera could exploit. It seems like that would be common sense for IoT manufacturers, but there have been plenty of breaches that occurred simply because individuals didn’t know to change the default passwords on their devices. Another basic requirement of ETSI is allowing individuals to delete their own data. In other words, the user has control over the data a company stores about them. Again, this is pretty standard stuff in the privacy world, particularly in light of regulations like Europe’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CCPA). However, this is not yet a universal requirement for IoT devices. Considering how much health- and fitness-related data many of these devices collect, consumer data privacy needs to be more of a priority. Several more rules in ETSI have to do with the software installed on such devices and how the provider manages security for the software. For example, there needs to be a system for reporting vulnerabilities. The provider needs to keep the software up to date and ensure software integrity. 
We would naturally expect these kinds of security measures for nearly any software we use, so the standard is basically just a minimum for data protection in IoT. Importantly, the ETSI standard covers pretty much everything that could be considered a smart device, including wearables, smart TVs and cameras, smart home assistants, smart appliances, and more. The standard also applies to connected gateways, hubs, and base stations. In other words, it covers the centralized access point for all of the various devices. Why Device Creators Should Implement the Standard Today Just how important is the security standard? Many companies are losing customers today due to a lack of consumer trust. There are so many stories of big companies like Google and Amazon failing to adequately protect user data, and IoT in particular has been in the crosshairs multiple times due to privacy concerns. An IoT manufacturer that doesn’t want to lose business, face fines and lawsuits, and damage the company's reputation should consider implementing the ETSI standard as a matter of course. After all, these days a given home might have as many as 16 connected devices, each an entry point into the home network. A company might have one laptop per employee but two, three, or more other smart devices per employee. And again, each smart device is a point of entry for malicious hackers. Without a comprehensive cybersecurity standard like ETSI EN 303 645, people who own unprotected IoT devices need to worry about identity theft, ransomware attacks, data loss and much more. How to Test and Certify Based on ETSI Certification is fairly basic and occurs in five steps: Manufacturers have to understand the 33 requirements and 35 recommendations of the ETSI standard and design devices accordingly. Manufacturers also have to buy an IoT platform that has been built with the ETSI standard in mind, since the standard will fundamentally influence the way the devices are produced and how they operate within the platform. Next, any IoT manufacturer trying to meet the ETSI standard has to fill out documents that provide information for device evaluation. The first document is the Implementation Conformance Statement, which shows which requirements and recommendations the IoT device does or doesn’t meet. The second is the Implementation eXtra Information for Testing, which provides design details for testing. A testing provider will next evaluate and test the product based on the two documents and give a report. The testing provider will provide a seal or other indication that the product is ETSI EN 303 645-compliant. With new regulations on the horizon, device manufacturers and developers should see it as best practice to get up to speed with this standard. Better cybersecurity is not only important for consumer protection but brand reputation. Moreover, this standard can provide a basis for stricter device security certifications and measures in the future. Prepare today for tomorrow.

By Carsten Rhod Gregersen
Application Mapping: 5 Key Benefits for Software Projects

Application Dependency Mapping is the process of creating a graphical representation of the relationships and dependencies between different components of a software application. This includes dependencies between modules, libraries, services, and databases. It helps to understand the impact of changes in one component on other parts of the application and aids in troubleshooting, testing, and deployment. Software Dependency Risks Dependencies are often necessary for building complex software applications. However, development teams should be mindful of dependencies and seek to minimize their number and complexity for several reasons: Security vulnerabilities: Dependencies can introduce security threats and vulnerabilities into an application. Keeping track of and updating dependencies can be time-consuming and difficult. Compatibility issues: Dependencies can cause compatibility problems if their versions are not managed properly. Maintenance overhead: Maintaining a large number of dependencies can be a significant overhead for the development team, especially if they need to be updated frequently. Performance impact: Dependencies can slow down the performance of an application, especially if they are not optimized. Therefore, it's important for the development team to carefully map out applications and their dependencies, keep them up-to-date, and avoid using unnecessary dependencies. Application security testing can also help identify security vulnerabilities in dependencies and remediate them. Types of Software Dependencies Functional Functional dependencies are a type of software dependencies that are required for the proper functioning of a software application. These dependencies define the relationships between different components of the software and ensure that the components work together to deliver the desired functionality. For example, a software component may depend on a specific library to perform a specific task, such as connecting to a database, performing a calculation, or processing data. The library may provide a specific function or set of functions that the component needs to perform its task. If the library is unavailable or the wrong version, the component may not be able to perform its task correctly. Functional dependencies are important to consider when developing and deploying software because they can impact the functionality and usability of the software. It's important to understand the dependencies between different components of the software and to manage these dependencies effectively in order to ensure that the software works as expected. This can involve tracking the dependencies, managing version compatibility, and updating dependencies when necessary. Development and Testing Development and testing dependencies are software dependencies that are required during the development and testing phases of software development but are not required in the final deployed version. For example, a developer may use a testing library, such as JUnit or TestNG, to write automated tests for the software. This testing library is only required during development and testing but is not needed when the software is deployed. Similarly, a developer may use a build tool, such as Gradle or Maven, to manage the dependencies and build the software. This build tool is only required during development and testing but is not needed when the software is deployed. 
Development and testing dependencies are important to consider because they can impact the development and testing process and can add complexity to the software. It's important to understand and manage these dependencies effectively in order to ensure that the software can be developed, tested, and deployed effectively. This can involve tracking the dependencies, managing version compatibility, and updating dependencies when necessary. Additionally, it's important to ensure that development and testing dependencies are not included in the final deployed version of the software in order to minimize the size and complexity of the deployed software. Operational and Non-Functional Operational dependencies are dependencies that are required for the deployment and operation of the software. For example, an application may depend on a specific version of an operating system, a specific version of a web server, or a specific version of a database. These dependencies ensure that the software can be deployed and run in the desired environment. Non-functional dependencies, on the other hand, are dependencies that relate to the non-functional aspects of the software, such as performance, security, and scalability. For example, an application may depend on a specific version of a database in order to meet performance requirements or may depend on a specific security library in order to ensure that the application is secure. It's important to understand and manage both operational and non-functional dependencies effectively in order to ensure that the software can be deployed and run as expected. This can involve tracking the dependencies, managing version compatibility, and updating dependencies when necessary. Additionally, it's important to ensure that non-functional dependencies are configured correctly in order to meet the desired performance, security, and scalability requirements. 5 Benefits of Application Mapping for Software Projects Improved Understanding of the Project One of the primary benefits of application mapping is that it helps team members better understand the system as a whole. The visual representation of the relationships and interactions between different components can provide a clear picture of how the system operates, making it easier to identify areas for improvement or optimization. This can be especially useful for new team members, who can quickly get up to speed on the system without having to spend a lot of time reading through documentation or trying to decipher complex code. Facilitated Collaboration Another benefit of application mapping is that it can be used as a tool for communication and collaboration between different stakeholders involved in the software project. By providing a visual representation of the system, application mapping can help to foster a shared understanding between developers, business stakeholders, and other stakeholders, improving collaboration and reducing misunderstandings. Early Identification of Problems Application mapping can also help to identify potential issues early in the project before they become significant problems. By mapping out the relationships between different components, it is possible to identify areas where conflicts or dependencies could cause problems down the line. This allows teams to address these issues before they become major roadblocks, saving time and reducing the risk of delays in the project. 
Increased Efficiency Another benefit of application mapping is that it can help to optimize workflows and processes, reducing duplication and improving the efficiency of the overall system. By mapping out the flow of data and interactions between different components, it is possible to identify areas where processes can be streamlined or made more efficient, reducing waste and improving performance. Better Decision-Making Application mapping can be used to make informed decisions about future development and changes to the system. By allowing teams to understand the potential impact of changes to one part of the system on other parts, application mapping can help to reduce the risk of unintended consequences and ensure that changes are made with a full understanding of their impact on the overall system. This can help to improve the quality of the final product and reduce the risk of costly mistakes. Conclusion In conclusion, application mapping provides a clear and visual representation of the software architecture and the relationships between different components. This information can be used to improve understanding, facilitate collaboration, identify problems early, increase efficiency, and support better decision-making.

By Gilad David Maayan
Protecting User Data in Microsoft 365: A Step-by-Step Guide

Introduction Microsoft 365 is a popular productivity suite used by organizations of all sizes. While it offers a wealth of features and benefits, it also poses security challenges, especially in terms of protecting user data. With cyber threats on the rise, it's more important than ever to ensure that your Microsoft 365 user accounts and data are secure. In this article, we'll provide a step-by-step guide to help you safeguard your Microsoft 365 environment against data loss. We'll cover the threat landscape, Microsoft 365 security features, best practices for securing user accounts, and data backup solutions for Microsoft 365. With the information and recommendations provided in this guide, you'll be well-equipped to protect your organization's valuable data and ensure business continuity. Understanding the Threat Landscape Data security is a critical issue for all organizations that use Microsoft 365. With the increasing sophistication of cyber threats, it's essential to be aware of the potential risks to your user accounts and data. The following are some of the common types of data loss that organizations face in a Microsoft 365 environment: Ransomware attacks: Ransomware is a type of malware that encrypts files and demands payment in exchange for the decryption key. This type of attack can be devastating, as it can lead to the permanent loss of data. Phishing attacks: Phishing attacks are designed to trick users into disclosing their login credentials or personal information. These attacks can be delivered through email, instant messaging, or malicious websites and can result in unauthorized access to user accounts and data. Insider threats: Insider threats can occur when a current or former employee with access to sensitive data deliberately or accidentally misuses that data. Data breaches: Data breaches can occur when unauthorized individuals gain access to sensitive data. This can be due to a lack of security measures or a security breach at a third-party provider. It's important to be aware of these threats and take proactive measures to protect your Microsoft 365 environment against data loss. In the next section, we'll discuss the security features that are available in Microsoft 365 to help you protect your data. Microsoft 365 Security Features Microsoft 365 offers a variety of security features to help protect user accounts and data. These features include: Multi-Factor Authentication (MFA): MFA is a security process that requires users to provide two or more authentication factors when accessing their accounts. This can include a password and a security code sent to their phone, for example. Enabling MFA helps to prevent unauthorized access to user accounts. Data Encryption: Microsoft 365 uses encryption to protect data both in transit and at rest. Data in transit is encrypted as it travels between users and Microsoft 365, while data at rest is encrypted on Microsoft's servers. Threat Protection: Microsoft 365 includes threat protection features, such as Advanced Threat Protection (ATP), that help to prevent malware and other threats from entering your environment. ATP uses artificial intelligence and machine learning to identify and block threats before they can cause damage. Compliance and Auditing: Microsoft 365 provides compliance and auditing features that help organizations meet regulatory requirements and monitor user activity. These features include audit logs, retention policies, and eDiscovery capabilities. 
By taking advantage of these security features, organizations can significantly reduce the risk of data loss in their Microsoft 365 environment. However, it's important to note that these features alone are not enough to fully protect user accounts and data. In the next section, we'll discuss best practices for securing user accounts in Microsoft 365. Best Practices for Securing User Accounts In addition to using the security features provided by Microsoft 365, there are several best practices that organizations can follow to help secure their user accounts and data: Use strong passwords: Encourage users to create strong, unique passwords and avoid using the same password for multiple accounts. Consider implementing password policies that enforce the use of strong passwords. Enable multi-factor authentication: Require all users to enable MFA on their accounts to help prevent unauthorized access. Restrict access to sensitive data: Use role-based access controls and other security measures to restrict access to sensitive data to only those users who need it. Keep software up to date: Regularly update all software, including Microsoft 365, to ensure that security vulnerabilities are patched. Educate users: Provide regular training to users on how to identify and avoid phishing attacks, as well as how to secure their accounts and devices. By following these best practices, organizations can help to minimize the risk of data loss in their Microsoft 365 environment. However, it's also important to have a backup plan in place in case of an unexpected disaster. In the next section, we'll discuss data backup solutions for Microsoft 365. Data Backup Solutions for Microsoft 365 Having a backup plan in place is an essential part of protecting against data loss in Microsoft 365. There are several data backup solutions available for Microsoft 365, including: Microsoft 365 Backup: Microsoft 365 Backup is a built-in backup solution for Microsoft 365 that provides backup and recovery for Exchange Online, SharePoint Online, and OneDrive for Business. This solution can be managed from the Microsoft 365 admin center and provides options for backing up data on a schedule, as well as for recovering data in the event of accidental deletion or data loss. Third-party backup solutions: There are also several third-party backup solutions available for Microsoft 365. These solutions offer advanced backup and recovery features, such as the ability to recover individual items, complete site collections, or entire SharePoint sites. Regardless of the solution you choose, it's important to regularly test your backup and recovery processes to ensure that you can quickly recover data in the event of a disaster. In conclusion, securing user accounts and data in Microsoft 365 requires a combination of security features, best practices, and backup solutions. By following the recommendations outlined in this article, organizations can significantly reduce the risk of data loss in their Microsoft 365 environment and ensure business continuity. Conclusion In today's digital world, securing user accounts and data is more important than ever. Microsoft 365 offers a range of security features, such as multi-factor authentication, data encryption, threat protection, and compliance and auditing, to help organizations protect their data. Additionally, following best practices such as using strong passwords, restricting access to sensitive data, and educating users can further enhance security. 
However, even with the best security measures in place, disasters can still occur. That's why it's important to have a backup plan in place. Microsoft 365 Backup and third-party backup solutions can help organizations recover data in the event of a disaster and ensure business continuity. In conclusion, protecting user accounts and data in Microsoft 365 requires a multi-layered approach that includes security features, best practices, and a backup plan. By following these recommendations, organizations can help to minimize the risk of data loss and ensure the protection of their critical data and user accounts.

By Alex Tray
Using JSON Web Encryption (JWE)

In the previous article, we looked at signed JSON Web Tokens and how to use them for cross-service authorization. But sometimes, there are situations when you need to add sensitive information to a token that you would not want to share with other systems. Or such a token can be given to the user's device (browser, phone). In this case, the user can decode the token and get all the information from the payload. One solution to such a problem could be the use of JSON Web Encryption (JWE), the full specification of which can be found in RFC7516. JSON Web Encryption (JWE) JWE is an encrypted version of JWT. It consists of the following parts separated by a dot: BASE64URL(UTF8(JWE Protected Header)) || '.' || BASE64URL(JWE Encrypted Key) || '.' || BASE64URL(JWE Initialization Vector) || '.' || BASE64URL(JWE Ciphertext) || '.' || BASE64URL(JWE Authentication Tag) JWE Protected Header For example: { "enc": "A256GCM", "alg": "RSA-OAEP-256" } where alg – The Content Encryption Key is encrypted to the recipient using the RSAES-OAEP algorithm to produce the JWE Encrypted Key. enc – Authenticated encryption is performed on the plaintext using the AES GCM algorithm with a 256-bit key to produce the ciphertext and the Authentication Tag. JWE Encrypted Key Encrypted Content Encryption Key value. JWE Initialization Vector Randomly generated value needed for the encryption process. JWE Ciphertext Encrypted payload. JWE Authentication Tag Computed during the encryption process and used to verify integrity. Token Generation There are many libraries for many programming languages to work with JWE tokens. Let's take the Nimbus JOSE+JWT library as an example. build.gradle: implementation 'com.nimbusds:nimbus-jose-jwt:9.25.6' The payload can be represented as a set of claims: { "sub": "alice", "iss": "https://idp.example.org", "exp": 1669541629, "iat": 1669541029 } Let's generate a header: Java JWEHeader header = new JWEHeader( JWEAlgorithm.RSA_OAEP_256, EncryptionMethod.A256GCM ); which corresponds to the following JSON: { "enc": "A256GCM", "alg": "RSA-OAEP-256" } Let's generate an RSA key: Java RSAKey rsaJwk = new RSAKeyGenerator(2048) .generate(); Using the public part of the key, we can create an Encrypter object, with which we encrypt the JWT (note that encrypt() returns void, so the compact form is produced by serialize()): Java RSAEncrypter encrypter = new RSAEncrypter(rsaJwk.toRSAPublicKey()); EncryptedJWT jwt = new EncryptedJWT(header, jwtClaims); jwt.encrypt(encrypter); String jweString = jwt.serialize(); Execution result: eyJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiUlNBLU9BRVAtMjU2In0.O01BFr_XxGzKEUb_Z9vQOW3DX2cQFxojrRy2JyM5_nqKnrpAa0rvcPI_ViT2PdPRogBwjHGRDM2uNLd1BberKQlaZYuqPGXnpzDQjosF0tQlgdtY3uEZUMT-9WPP8jCxxQg0AGIm4abkp1cgzAWBQzm1QYL8fwaz16MS48ExRz41dLhA0aEWE4e7TYzjrfaK8M4wIUlQCFIl-wS1N3U8W2XeUc9MLYGmHft_Rd9KJs1c-9KKdUQf6tEzJ92TGEC7TRZX4hGdtszIq3GGGBQaW8P9jPozqaDdrikF18D0btRHNf3_57sR_CPEGYX0O4mY775CLWqB4Y1adNn-fZ0xoA.ln7IYZDF9TdBIK6i.ZhQ3Q5TY827KFQw8DdRRzQVJVFdIE03B6AxMNZ1sQIjlUB4QUxg-UYqjPJESPUmFsODeshGWLa5t4tUri5j6uC4mFDbkbemPmNKIQiY5m8yc.5KKhrggMRm7ydVRQKJaT0g To decode a JWE token, you need to create a decrypter and pass it the private part of the key: Java EncryptedJWT jwt = EncryptedJWT.parse(jweString); RSADecrypter decrypter = new RSADecrypter(rsaJwk.toPrivateKey()); jwt.decrypt(decrypter); Payload payload = jwt.getPayload(); Here, we used an asymmetric encryption algorithm — the public part of the key is used for encryption, and the private part for decryption.
This approach allows issuing JWE tokens for third-party services and being sure that the data will be protected (when there are intermediaries in the token transmission path). In this case, the final service needs to publish the public keys, which we will use to encrypt the content of the token. To decrypt the token the service will use the private part of the key, which it will keep secret. But what if we have the same Issuer and Consumer token service? It could be a backend that sets a cookie in the user's browser with sensitive information. In that case, you don't need to use an asymmetric algorithm — you can use a symmetric algorithm. In JWE terms, this is direct encryption. Java JWEHeader header = new JWEHeader(JWEAlgorithm.DIR, EncryptionMethod.A128CBC_HS256); which corresponds to the following JSON: { "enc": "A128CBC-HS256", "alg": "dir" } Let's generate a 256-bit key: Java KeyGenerator keyGen = KeyGenerator.getInstance("AES"); keyGen.init(256); SecretKey key = keyGen.generateKey(); and encrypt the JWT: Java JWEObject jweObject = new JWEObject(header, jwtClaims.toPayload()); jweObject.encrypt(new DirectEncrypter(key)); String jweString = jweObject.serialize(); Execution Result: eyJlbmMiOiJBMTI4Q0JDLUhTMjU2IiwiYWxnIjoiZGlyIn0..lyJ_pcHfp8cz13TVav8MZQ.LmeN4jHxYg-dEFZ98PlVfNXFI29L5NGanA6ncALWcI9uDqpoXaaBcKeOKuzRayfQ3X7yPTuiMRHAUHMR5K3Rucmb8fQw2dkP3EONUg0lbdbmfbNwDbjQcWCGUWXfBWFg.v63pTlB7B15ZLEwSBwBUAg Note that direct encryption does not include the JWE Encrypted Key part of the token. To decrypt the token, you need to create a decrypter from the same key: Java EncryptedJWT jwt = EncryptedJWT.parse(jweString); jwt.decrypt(new DirectDecrypter(key)); Payload payload = jwt.getPayload(); Performance To evaluate the performance of symmetric and asymmetric encryption algorithms, a benchmark was conducted using the JMH library. RSA_OAEP_256/A256GCM was chosen as the asymmetric algorithm, A128CBC_HS256 as the symmetric algorithm. The tests were run on a MacBook Air M1. The payload: { "iss": "https://idp.example.org", "sub": "alice", "exp": 1669546229, "iat": 1669545629 } Benchmark results:
Benchmark            Mode   Cnt  Score        Error       Units
Asymmetric Decrypt   thrpt  4    1062,387     ± 4,990     ops/s
Asymmetric Encrypt   thrpt  4    17551,393    ± 388,733   ops/s
Symmetric Decrypt    thrpt  4    152900,578   ± 1251,034  ops/s
Symmetric Encrypt    thrpt  4    122104,824   ± 5102,629  ops/s
Asymmetric Decrypt   avgt   4    0,001        ± 0,001     s/op
Asymmetric Encrypt   avgt   4    ≈ 10⁻⁴                   s/op
Symmetric Decrypt    avgt   4    ≈ 10⁻⁵                   s/op
Symmetric Encrypt    avgt   4    ≈ 10⁻⁵                   s/op
As expected, asymmetric algorithms are slower. According to the test results, more than ten times slower. Thus, if possible, symmetric algorithms should be preferred to increase performance. Conclusion JWE tokens are quite a powerful tool and allow you to solve problems related to secure data transfer while taking all the benefits of self-contained tokens. At the same time, it is necessary to pay attention to performance issues and choose the most appropriate algorithm and key length.
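One step the examples above show only as JSON is the construction of the jwtClaims object passed to EncryptedJWT and JWEObject. With Nimbus it can be built via JWTClaimsSet.Builder; here is a short sketch (the ten-minute expiry window is illustrative):
Java
import java.util.Date;
import com.nimbusds.jwt.JWTClaimsSet;

// Builds the claims used above: iss, sub, iat, and an exp ten minutes in the future.
Date now = new Date();
JWTClaimsSet jwtClaims = new JWTClaimsSet.Builder()
        .issuer("https://idp.example.org")
        .subject("alice")
        .issueTime(now)
        .expirationTime(new Date(now.getTime() + 10 * 60 * 1000))
        .build();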

By Viacheslav Shago
Compliance Automated Standard Solution (COMPASS), Part 5: A Lack of Network Boundaries Invites a Lack of Compliance

This post is part of a series dealing with Compliance Management. The previous post analyzed three approaches to Compliance and Policy Administration Centers. Two were tailored CPAC topologies that support specialized forms of policy. The third CPAC topology was for cloud environments and the attempt to accommodate the generic case of PVPs/PEPs with diverse native formats across heterogeneous cloud services and products. It is easy to see how these approaches can be used for configuration checks, but some controls require implementation that relies on higher-level concepts. In this article, we share our experience in authoring compliance policies that go deeper than configuration management. There are numerous tools for checking the compliance of cloud-native solutions, yet the problem is far from being solved. We know how to write rules to ensure that cloud infrastructure and services are configured correctly, but compliance goes deeper than configuration management. Building a correct network setup is arguably the most difficult aspect of building cloud solutions, and proving it to be compliant is even more challenging. One of the main challenges is that network compliance cannot be deduced by reasoning about the configuration of each element separately. Instead, to deduce compliance, we need to understand the relationships between various network resources and compute resources. In this blog post, we wanted to share a solution we developed to overcome these challenges because we believe this can be useful for anyone tackling the implementation of controls over network architectures. We are specifically interested in the problem of protecting boundaries for Kubernetes-based workloads running in a VPC, and we focus on the SC-7 control from the famous NIST 800-53. Boundaries are typically implemented using demilitarized zones (DMZs) that separate application workloads and the network they’re deployed in from the outside (typically the Internet) using a perimeter network that has very limited connectivity to both other networks. Because no connection can pass from either side without an active compute element (e.g., a proxy) in the perimeter network forwarding traffic, this construct inherently applies the deny by default principle and is guaranteed to fail secure in case the proxy is not available. Modern cloud-native platforms offer a broad range of software-defined networking constructs, like VPCs, Security Groups, Network ACLs, and subnets that could be used to build a DMZ. However, the guideline for compliance programs like FedRAMP is that only subnets are valid constructs for creating a boundary. Encapsulating the general idea of boundary protection is too difficult, as there are numerous ways to implement it and no way to automate enforcement and monitoring for violations. Instead, we have architectural guidelines that describe a particular method for building a compliant architecture using DMZs. This post shows how to automate compliance checking against a specific architectural design of DMZs. The core ideas should be easily transferable to other implementations of boundary protection. Our goal is to control access to public networks. The challenge is determining which parts of the architecture should have access to the external network. The solution is to have humans label network artifacts according to their intended use. Is this for sensitive workloads? Is this an edge device? 
Given the right set of labels, we can write rules that automatically govern the placement of Internet-facing elements like gateways and verify that the labeling is correct. As we’ve established earlier, DMZs provide us with the right set of attributes for creating a boundary. If we can label networks such that we can infer the existence and correctness of a DMZ within the network design, we’ve essentially validated that there is a network boundary between two networks that fulfills the deny-by-default and fail-secure principles, thereby proving compliance with SC7(b). A DMZ fundamentally divides the application architecture into three different trust-zones into which applications could be deployed. Suppose we can reason about the relationship between trust-zones and the placement of applications in trust-zones. In that case, we should be able to infer the correctness of a boundary. Our boundary design consists of three trust-zones. A private trust-zone is a set of subnets where compute elements are running the application. The edge trust-zone is a set of subnets that provide external connectivity, typically the Internet. The public trust-zone is everything else in the world. While the ideas and concepts are generic and will work for virtual machines and Kubernetes or other compute runtimes, we’ll focus on Kubernetes for the remainder of this article. Using the Kubernetes approach for taints and tolerations, we can control in which trust-zone workloads can be deployed. Since the edge trust-zone is critical for our boundary, we use an allow-list that defines what images can be placed in the edge trust-zone. The following set of rules (some would call them “controls”) encapsulate the approach we described above. R1[Tagging]: Each subnet must be labeled with exactly one of the following: ‘trust-zone:edge,’ ‘trust-zone:private’ R2[PublicGateway]: A public gateway may only be attached to subnets labeled ‘trust-zone:edge’ R3[Taint]: Cluster nodes running in a subnet labeled ‘trust-zone:edge’ must have a taint ‘trust-zone=edge:NoSchedule’ R4[Tolerance]: Only images that appear on the “edge-approved” allow list may tolerate ‘trust-zone=edge:NoSchedule’ If all rules pass, then we are guaranteed that application workloads will not be deployed to a subnet that has Internet access. It is important to note that to achieve our goal, all rules must pass. While in many compliance settings, passing all checks except for one is fine, in this situation, boundary protection will be guaranteed only if all four rules pass. Whenever we show people this scheme, the first question we get asked is: what happens if subnets are labeled incorrectly? Does it all crumble to the ground? The answer is: no! If you label subnets incorrectly, at least one rule will fail. Moreover, if all four rules pass then, we have also proven that subnets were labeled correctly. So let’s break down the logic and see how this works. Assuming that all rules have passed, let’s see why subnets are necessarily labeled correctly: Rule R1 passed, so we know that each subnet has only one label. You couldn’t have labeled a subnet with both “edge” and “private” or anything similar. Rule R2 passed, so we know that all subnets with Internet access were labeled “edge.” Rule R3 passed, so we know that all nodes in subnets with Internet access are tainted properly. Rule R4 passed, so we can conclude that private workloads cannot be deployed to subnets with Internet access. 
It is still possible that a subnet without Internet access was labeled “edge”, so it cannot be used for private workloads. This may be a performance issue but does not break the DMZ architecture. The above four rules set up a clearly defined boundary for our architecture. We can now add rules that enforce the protection of this boundary, requiring the placement of a proxy or firewall in the edge subnet and ensuring it is configured correctly. In addition, we can use tools like NP-Guard to ensure that the network is configured not to allow flows that bypass the proxy or open up more ports than what is strictly necessary. The edge trust-zone, however, needs broad access to the public trust-zone. This is due to constructs like content delivery networks that use anycast and advertise thousands of hostnames under a single set of IPs. Controlling access to the public trust-zone based on IPs is thus impractical, and we need to employ techniques like TLS Server Name Indicator (SNI) on a proxy to scope down access to the public trust-zone from other trust-zones. Of course, different organizations may implement their boundaries differently. For example, in many use cases, it is beneficial to define separate VPCs for each trust-zone. By modifying the rules above to label VPCs instead of subnets and checking the placement of gateways inside VPCs, we can create a set of rules for this architecture. To validate that our approach achieves the intended outcome, we’ve applied it to a set of VPC-based services. We added our rules to a policy-as-code framework that verifies compliance. We implemented the rules in Rego, the policy language used by the Open Policy Agent engine, and applied them to Terraform plan files of the infrastructure. We were able to recommend enhancements to the network layout that further improve the boundary isolation of the services. Going forward, these checks will be run on a regular basis as part of the CI/CD process to detect when changes in the infrastructure break the trust-zone boundaries.
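The authors implement these rules in Rego over Terraform plan JSON; purely as an illustration of how mechanical a check like R2 becomes once the labels exist, here is a toy sketch in Java over a hand-built inventory (the subnet names, label format, and attachment list are hypothetical stand-ins, not the authors' data model):
Java
import java.util.List;
import java.util.Map;

public class PublicGatewayRuleCheck {
    public static void main(String[] args) {
        // Hypothetical inventory: each subnet carries exactly one trust-zone label (rule R1).
        Map<String, String> subnetLabels = Map.of(
                "subnet-a", "trust-zone:edge",
                "subnet-b", "trust-zone:private",
                "subnet-c", "trust-zone:private");

        // Hypothetical list of subnets that have a public gateway attached.
        List<String> subnetsWithPublicGateway = List.of("subnet-a", "subnet-c");

        // Rule R2: a public gateway may only be attached to subnets labeled trust-zone:edge.
        boolean r2Passed = true;
        for (String subnet : subnetsWithPublicGateway) {
            String label = subnetLabels.get(subnet);
            if (!"trust-zone:edge".equals(label)) {
                r2Passed = false;
                System.out.println("R2 violation: public gateway attached to " + subnet
                        + " labeled " + label);
            }
        }
        // The boundary is only guaranteed when R1-R4 all pass, not when R2 passes alone.
        System.out.println("R2 " + (r2Passed ? "passed" : "failed"));
    }
}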

By Daniel Pittner
Secrets Management

Today's digital businesses are expected to innovate, execute, and release products at a lightning-fast pace. The widespread adoption of automation, coupled with DevOps and DevSecOps practices, is instrumental to achieving increased developer velocity and faster feedback loops, which in turn shortens release cycles and improves product quality iteratively. Though the shift to microservices and containerized applications and the adoption of open source help developers ship faster, they also pose challenges related to compliance and security. As per the Hidden In Plain Sight report from 1Password, DevOps and IT teams in enterprises continually face challenges posed by leakage of secrets, insecure sharing of secrets, and manual secrets management, among others. There are significant complexities involved in managing secrets like API keys, passwords, and encryption keys for large-scale projects. Let’s take a deep dive into the integral aspects of secrets management in this article.

What Is Secrets Management?

In simple terms, secrets are non-human privileged credentials that give developers access to resources in applications, containers, and so on. Akin to password management, secrets management is the practice of storing secrets (e.g., access tokens, passwords, API keys) in a secure environment with tight access controls. Managing secrets can become mayhem as the complexity and scale of an application grow over time. Additionally, secrets may end up shared across different parts of the technology stack, which poses severe security threats because it opens back doors for malicious actors to access your application. Secrets management ensures that sensitive information is never hard-coded and is available only in an encrypted format. Secure access to sensitive data in conjunction with RBAC (role-based access control) is the secret sauce of secrets management.

Challenges of Secrets Management

There are numerous cases where developers have accidentally committed hard-coded, plain-text credentials in their code or configuration files. The repercussions for the business can be huge if the files housing those secrets are pushed to a public repository on GitHub (or any other popular code hosting platform). The benefits offered by multi-cloud infrastructures, containerized applications, IoT/IIoT, and CI/CD can only be fully leveraged if efficient management of secrets is also in place. Educating development and DevOps teams about application security is the first step toward building a security-first culture within the team. Here are the major challenges DevOps and DevSecOps teams face when managing secrets:

Secrets Sprawl

This scenario normally arises when the team’s (and/or organization’s) secrets are distributed across the organization. Digital-first organizations are increasingly using containers and cloud-based tools to increase developer velocity, save costs, and expedite releases. The same applies to the development and testing of IoT-based applications.
Depending on the scale and complexity of the applications, there is a high probability that secrets are spread across:
Containerized microservices-based applications (e.g., Kubernetes, OpenShift, Nomad)
Automated E2E testing/tracing platforms (e.g., Prometheus, Graphite)
Internally developed tools and processes
Application servers and databases
The DevOps toolchain

The items in this list vary depending on the scale, size, and complexity of the application. Providing RBAC, using strong rotating passwords, and avoiding password sharing are some of the simple practices that must be followed at every level within the team and organization.

Proliferation of Cloud Developer and Testing Tools

Irrespective of the size and scale of the project, development teams look to maximize their use of cloud platforms and tools like GCP (Google Cloud Platform), Microsoft Azure, AWS (Amazon Web Services), Kubernetes, and more. Cloud tools definitely expedite development and testing, but they must be used with security practices kept at the forefront. Any compromise of the keys used for accessing the respective cloud platform (e.g., AWS keys) could lead to financial losses.

AWS credentials publicly exposed in a repository

With so much at stake, DevOps and development teams must ensure that keys are never available in human-readable form in public domains (e.g., GitHub repositories). Organizations focusing on community-led growth (CLG) to evangelize their product or developer tool also need to ensure that their users do not leave keys out in the open: if keys are publicly accessible, attackers can exploit the platform for malicious purposes. Manual processes for managing secrets, data security when using third-party resources (e.g., APIs), and end-to-end visibility from a security lens are other challenges organizations face with secrets management.

Best Practices of Secrets Management

There is no one-size-fits-all approach to securely managing secrets, since a lot depends on the infrastructure, product requirements, and other varying factors. Those variables aside, here are some of the best practices for efficient and scalable management of secrets:

Use RBAC (Role-Based Access Control)

Every project and organization has sensitive data and resources that must be accessible only to trusted users and applications. Any new user in the system must be assigned the default (i.e., minimum) privilege. Elevated privileges must be available only to a few members of the project or organization, and the admin (or super-admin) must have the rights to grant or revoke the privileges of other members on a need basis. Escalation of privileges must also be done on a need basis and only for a limited time, and proper notes must be added when granting or revoking privileges so that all relevant project stakeholders have complete visibility.

Use Secure Vaults

In simple terms, a vault is a tool primarily used for securing sensitive information (e.g., passwords, API keys, certificates, and so on). Local storage of secrets in a human-readable form is one of the worst ways to manage secrets. This is where secure vaults are extremely useful, as they provide a unified interface to any secret along with a detailed audit log. Secure vaults can also be used to implement role-based access control by specifying access privileges (authorization).
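As an illustration of that unified interface, here is a minimal sketch that reads a database password from a HashiCorp Vault KV store using the hvac Python client. The Vault address and token are taken from the environment, the path myapp/db is a placeholder, and a production setup would typically authenticate through a method such as AppRole or Kubernetes auth rather than a raw token.

Python

import os
import hvac

# Connect to Vault; the address and token come from the environment, never from source code.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Read a KV v2 secret; "myapp/db" is a placeholder path under the default "secret" mount.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")
db_password = secret["data"]["data"]["password"]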
The HashiCorp Vault Helm chart and Vault for Docker are two popular ways to run Vault services and to access and store secrets. Since most applications leverage the cloud, it is important to focus on data security both in transit and at rest. This is where EaaS (Encryption as a Service) can be used to offload the encryption needs of applications to the vault before the data is stored at rest.

Rotate Keys Regularly

It is a good security practice to reset keys every few weeks or months. One approach is to manually regenerate the keys, since applications using the secrets might leave traces in log files or centralized logging systems; attackers can gain back-door access to the logs and use them to exfiltrate secrets, and co-workers might unintentionally leak secrets outside the organization. To avoid such situations, it is recommended to enable rotation of secrets in the respective secrets management tool. For instance, secret rotation in AWS Secrets Manager uses an AWS Lambda function to update both the secret and the database. Above all, teams should have practices in place to detect unauthorized access to the system so that appropriate action can be taken before significant damage is done to the business.

Why Implement Secrets Management in a DevSecOps Pipeline?

Accelerated release cycles and faster developer feedback can only be achieved if the code is subjected to automated tests in a CI/CD pipeline. The tests run in the CI pipeline might require access to critical protected resources like databases and HTTP servers. Running unit tests inside Docker containers is also a common practice, but developers and QA engineers need to ensure that secrets are not stored inside a Dockerfile. Secrets management tools can be used in conjunction with popular CI/CD tools (e.g., Jenkins) so that keys and other secrets are managed in a centralized location and stored with encryption and tokenization.
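For example, a pipeline job can fetch database credentials at runtime from AWS Secrets Manager rather than baking them into a Dockerfile or a checked-in configuration file. A minimal boto3 sketch, where the secret name prod/myapp/db and its JSON keys are placeholders:

Python

import json
import boto3

# Fetch credentials at runtime instead of hard-coding them in the image or repository.
client = boto3.client("secretsmanager")
response = client.get_secret_value(SecretId="prod/myapp/db")  # placeholder secret name
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]

Because rotation updates the stored value in place, every pipeline run picks up the current credentials without any change to the job definition.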

By Niranjan Limbachiya CORE

Top Security Experts


Apostolos Giannakidis

Product Security,
Microsoft


Samir Behara

Senior Cloud Infrastructure Architect,
AWS

Samir Behara builds software solutions using cutting-edge technologies. He is a Microsoft Data Platform MVP with over 15 years of IT experience. Samir is a frequent speaker at technical conferences and is the Co-Chapter Lead of the Steel City SQL Server UserGroup. He writes at www.samirbehara.com

Boris Zaikin

Senior Software Cloud Architect,
Nordcloud GmbH

Certified Software and Cloud Architect Expert who is passionate about building solutions and architectures that solve complex problems and bring value to the business. He has solid experience designing and developing complex solutions on the Azure, Google, and AWS clouds. Boris has expertise in building distributed systems and frameworks based on Kubernetes, Azure Service Fabric, etc. His solutions successfully operate in the Green Energy, Fintech, Aerospace, and Mixed Reality domains. His areas of interest include Enterprise Cloud Solutions, Edge Computing, high-load Web APIs and Applications, Multitenant Distributed Systems, and Internet-of-Things Solutions.

Anca Sailer

Distinguished Engineer,
IBM

Dr. Anca Sailer is an IBM Distinguished Engineer at the T. J. Watson Research Center, where she partners with clients, product providers, and open communities to help transform their compliance processes into an engineering practice for automated continuous compliance and risk awareness. Dr. Sailer received her Ph.D. in Computer Science from Sorbonne Universités, France, and applied her Ph.D. work at Bell Labs before joining IBM Research in 2003. She holds over five dozen patents, has co-authored numerous publications in IEEE and ACM refereed journals and conferences, and has co-edited three books on network and IT management topics. Her interests include hybrid cloud business and DevOps management, compliance digitization, and multiomics. She is a Senior Member of IEEE and an aspiring 46er.

The Latest Security Topics

mTLS Everywhere
Security in one's information system has always been among the most critical non-functional requirements. Here, learn more about Transport Layer Security.
March 24, 2023
by Nicolas Fränkel CORE
· 1,065 Views · 1 Like
Solving the Kubernetes Security Puzzle
Cloud security can be daunting, but here are four practices you can implement today that will make your Kubernetes and cloud-native infrastructure more secure.
March 23, 2023
by Upkar Lidder
· 2,799 Views · 2 Likes
Why Continuous Monitoring of AWS Logs Is Critical To Secure Customer and Business-Specific Data
In this article, we will discuss the current state of AWS log management, what changes are shaping their security value, and how teams can prepare for the future.
March 23, 2023
by Jack Naglieri
· 927 Views · 1 Like
What Are the Benefits of Java Modules? (With Examples)
Discover the advantages of using Java modules and how they are implemented with examples. Get a clear view of this key component in modern Java development.
March 23, 2023
by Janki Mehta
· 940 Views · 2 Likes
What Are the Different Types of API Testing?
In this article, readers will learn about different types of API testing and why they are important to the software testing process. Read to learn more.
March 23, 2023
by Anna Smith
· 1,001 Views · 1 Like
What Is Pen Testing?
Penetration testing is the process of testing a computer system, network, or web application to find vulnerabilities and weaknesses that hackers can exploit.
March 23, 2023
by Real Ahtisham
· 982 Views · 2 Likes
The Role of Identity Detection and Response (IDR) in Safeguarding Government Networks
Let’s understand how IDR is swiftly changing the cybersecurity landscape for the public sector and why government agencies must gear up for its adoption.
March 23, 2023
by Deepak Gupta
· 1,305 Views · 1 Like
Cachet 2.4: Code Execution via Laravel Configuration Injection
The Sonar R&D team analyzes vulnerabilities in Cachet and demonstrates how to take over instances with basic user permissions using Laravel config files.
March 22, 2023
by Thomas Chauchefoin
· 1,139 Views · 1 Like
OpenVPN With Radius and Multi-Factor Authentication
This tutorial provides a step-by-step guide to install an OpenVPN server with Radius and multi-factor authentication for additional security.
March 21, 2023
by Yves Debeer
· 2,612 Views · 2 Likes
19 Most Common OpenSSL Commands for 2023
Leverage the power of OpenSSL through our comprehensive list of the most common commands. Easily understand what each command does and why it is important.
March 21, 2023
by Janki Mehta
· 2,406 Views · 2 Likes
Public Key and Private Key Pairs: Know the Technical Difference
Read this comprehensive guide on public and private key pair cryptography to understand what key pairs are and how they work.
March 20, 2023
by Eden Allen
· 1,512 Views · 4 Likes
A Guide to Understanding XDR Security Systems
XDR is the evolution of both endpoint detection and response (EDR) and network traffic analysis (NTA) solutions.
March 20, 2023
by Rahul Han
· 1,529 Views · 1 Like
How Data Scientists Can Follow Quality Assurance Best Practices
Data scientists must follow quality assurance best practices in order to determine accurate findings and influence informed decisions.
March 19, 2023
by Devin Partida
· 2,665 Views · 1 Like
Getting a Private SSL Certificate Free of Cost
This article will guide you on how to create wildcard certificates for your internal applications without paying an additional amount.
March 18, 2023
by sagar pawar
· 3,343 Views · 1 Like
DeveloperWeek 2023: The Enterprise Community Sharing Security Best Practices
Here are some highlights from some of the DeveloperWeek 2023 sessions where industry leaders shared their security know-how.
March 17, 2023
by Dwayne McDaniel
· 2,706 Views · 1 Like
AWS IP Address Management
Readers will learn about AWS IPAM, its advanced benefits, granular access controls, and how it can help improve security and optimize IP address utilization.
March 16, 2023
by Rahul Nagpure
· 2,625 Views · 1 Like
Use After Free: An IoT Security Issue Modern Workplaces Encounter Unwittingly
Use After Free is one of the two major memory allocation-related threats affecting C code. It is preventable with the right solutions and security strategies.
March 16, 2023
by Joydeep Bhattacharya CORE
· 2,233 Views · 1 Like
5 Common Firewall Misconfigurations and How to Address Them
Properly setting up your firewall can reduce the likelihood of data breaches. A few common firewall misconfigurations to watch for are mismatched authentication standards, open policy configurations, non-compliant services, unmonitored log output, and incorrect test systems data.
March 16, 2023
by Zac Amos
· 1,866 Views · 1 Like
Container Security: Don't Let Your Guard Down
To comprehend the security implications of a containerized environment, it is crucial to understand the fundamental elements of a container deployment network.
March 16, 2023
by Akanksha Pathak
· 5,009 Views · 4 Likes
How To Use Artificial Intelligence to Ensure Better Security
In this article, readers will learn how companies can leverage Artificial Intelligence (AI) to improve data security (cybersecurity) in these five ways.
March 14, 2023
by Chandra Shekhar
· 2,052 Views · 1 Like

ABOUT US

  • About DZone
  • Send feedback
  • Careers
  • Sitemap

ADVERTISE

  • Advertise with DZone

CONTRIBUTE ON DZONE

  • Article Submission Guidelines
  • Become a Contributor
  • Visit the Writers' Zone

LEGAL

  • Terms of Service
  • Privacy Policy

CONTACT US

  • 600 Park Offices Drive
  • Suite 300
  • Durham, NC 27709
  • support@dzone.com
  • +1 (919) 678-0300
