The topic of security covers many facets of the SDLC. From secure application design to building systems that protect computers, data, and networks against potential attacks, security should be top of mind for all developers. This Zone provides the latest information on application vulnerabilities, how to incorporate security earlier in your SDLC practices, data governance, and more.
## Good Old History: Sessions

Back in the old days, we used to secure web applications with sessions. The concept was straightforward: upon user authentication, the application would issue a session identifier, which the user would then present in each subsequent call. On the backend, the common approach was to keep authorization data in application memory: a simple mapping between session ID and user privileges.

Unfortunately, this simple solution had scaling limitations. If we needed to scale the application server, we either applied session stickiness on the exposed load balancer or moved session data to shared storage such as a database. That created other challenges to tackle: how to evenly distribute traffic for long-living sessions, and how to reduce the request-processing time spent communicating with shared session storage.

### Distributed Nature of Authorization

The stateful nature of sessions becomes even more troublesome when we consider distributed applications. Handling proper session stickiness and connection draining at the scale of multiple microservices offers no easily manageable solution.

## Stateless Authorization: JWT

Luckily, we can use a stateless solution: JWT, a compact, self-contained, encoded JSON object that replaces the session ID in client/server communication. The idea is to encode user privileges or roles into a token and have a trusted issuer sign the data to prove the token's integrity. In this scenario, the user, once authenticated, receives an access token with all the data required for authorization - no more server-side session storage needed. During authorization, the server decodes the token and reads the user's privileges from the token itself.

## Exposing an Unprotected API in Kong

To see how things can work, let's use Kong acting as an API gateway in front of an upstream service. For this demo, we will use the Kong Enterprise edition together with the OpenID Connect plugin handling JWT validation. But let's first expose some REST resources with Kong. To keep the demo simple, we can expose a single /mock endpoint in Kong that proxies requests to the httpbin.org service. Endpoint deployment can be done with a declarative approach: we define the configuration for the Kong service that will call the upstream, and the decK tool then creates the respective resources in Kong Gateway. The configuration file is as follows:

```yaml
_format_version: "3.0"
_transform: true
services:
  - host: httpbin.org
    name: example_service
    routes:
      - name: example_route
        paths:
          - /mock
```

Once deployed, we can verify the endpoint details in the Kong Manager UI. For now, the endpoint is not protected, and we can call it without any authorization details. Kong Gateway is exposed on the local machine on port 8000, so we can call it like this:

```shell
➜ ~ curl http://localhost:8000/mock/anything
{
  "args": {},
  "data": "",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/8.4.0",
    "X-Amzn-Trace-Id": "Root=1-65e2f62e-2ea7165246c573e24a3efeaf",
    "X-Forwarded-Host": "localhost",
    "X-Forwarded-Path": "/mock/anything",
    "X-Forwarded-Prefix": "/mock",
    "X-Kong-Request-Id": "3cef792ded0dfb53575cd866c20aba42"
  },
  "json": null,
  "method": "GET",
  "url": "http://localhost/anything"
}
```

## Securing the API With the OpenID Connect Plugin

To secure our API, we need two things:

- An IdP server, which will issue JWT tokens
- A Kong endpoint configuration that will validate JWT tokens

Setting up an IdP server is out of scope for this blog post, but for the demo, we can use Keycloak.
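Jumping ahead a little: once Keycloak is up, obtaining a token is a single call to its token endpoint. The sketch below shows roughly what that call could look like from a TypeScript client; the client ID, username, and password are placeholders for whatever your own realm defines, and the endpoint path assumes the same Keycloak base URL used for the issuer later in this post.

```typescript
// Hypothetical values - adjust the realm, client, and credentials to your own Keycloak setup.
const tokenEndpoint =
  "http://keycloak:8080/auth/realms/master/protocol/openid-connect/token";

const params = new URLSearchParams({
  grant_type: "password",     // resource owner password grant is enough for a local demo
  client_id: "demo-client",   // assumed public client registered in Keycloak
  username: "test",           // the demo user
  password: "test-password",  // placeholder credential
  scope: "custom-api-get",    // the scope Kong will later require
});

const response = await fetch(tokenEndpoint, {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: params,
});

const { access_token } = await response.json();
console.log(access_token);    // use this as the Bearer token when calling Kong
```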
In my test setup, I created a "test" user who is granted the "custom-api-get" scope - we will use this scope name later on for authorization with Kong. To get a JWT token, we need to call the Keycloak token endpoint. It returns an encoded token, which we can decode on the jwt.io website.

On the Kong side, we will define endpoint authorization with the OpenID Connect plugin. For this, again, we will use the decK tool to update the endpoint definition:

```yaml
_format_version: "3.0"
_transform: true
services:
  - host: httpbin.org
    name: example_service
    routes:
      - name: example_route
        paths:
          - /mock
    plugins:
      - name: openid-connect
        enabled: true
        config:
          display_errors: true
          scopes_claim:
            - scope
          bearer_token_param_type:
            - header
          issuer: http://keycloak:8080/auth/realms/master/.well-known/openid-configuration
          scopes_required:
            - custom-api-get
          auth_methods:
            - bearer
```

In the setup above, we stated that a user is allowed to call the endpoint only if the JWT token contains the "custom-api-get" scope. We also specified how we want to pass the token (as a header value). To enable JWT signature verification, we also had to define the issuer. Kong uses this endpoint internally to get the list of public keys that can be used to check token integrity/signature (the content of that response is cached in Kong to avoid future requests).

With this configuration, calling the endpoint without a token is not allowed. The plugin returns error details as follows:

```shell
➜ ~ curl http://localhost:8000/mock/anything
{"message":"Unauthorized (no suitable authorization credentials were provided)"}
```

To make it work, we need to pass a JWT token (for the sake of space, the token value is not shown):

```shell
➜ ~ curl http://localhost:8000/mock/anything --header "Authorization: Bearer $TOKEN"
{
  "args": {},
  "data": "",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Authorization": "Bearer $TOKEN",
    "Host": "httpbin.org",
    "User-Agent": "curl/8.4.0",
    "X-Amzn-Trace-Id": "Root=1-65e30053-4f1b17b771c240463a878c41",
    "X-Forwarded-Host": "localhost",
    "X-Forwarded-Path": "/mock/anything",
    "X-Forwarded-Prefix": "/mock",
    "X-Kong-Request-Id": "c1cf555ab43d951f73f72a30d5546516"
  },
  "json": null,
  "method": "GET",
  "url": "http://localhost/anything"
}
```

We should remember that tokens have a limited lifetime (in our demo, it was one minute), and the plugin verifies that as well. Calling the endpoint with an expired token returns an error:

```shell
curl http://localhost:8000/mock/anything --header "Authorization: Bearer $TOKEN"
{"message":"Unauthorized (invalid exp claim (1709375597) was specified for access token)"}
```

## Summary

In this short post, we walked through the issues of session-based authorization and the benefits of stateless tokens, namely JWT. In a microservices solution, we can move authorization out of the individual microservice implementations and into a centralized layer such as the gateway. We have only scratched the surface of JWT-based authorization; more advanced scenarios can be implemented by validating additional claims. If you're interested in JWT details, I recommend you familiarize yourself with the specifications. Practice will make you an expert!
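As a small follow-up to the point about validating additional claims: the sketch below (illustrative only, not the plugin's implementation) decodes a JWT payload and checks the two claims Kong enforced for us - expiry and the required scope from the demo above. A real consumer would also verify the signature with a proper JWT library.

```typescript
// Sketch only: decode a JWT payload (no signature verification) and inspect its claims.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const [, payload] = token.split(".");
  const base64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  const padded = base64 + "=".repeat((4 - (base64.length % 4)) % 4);
  return JSON.parse(atob(padded));
}

function isAuthorized(token: string): boolean {
  const claims = decodeJwtPayload(token);
  const now = Math.floor(Date.now() / 1000);
  const notExpired = typeof claims.exp === "number" && claims.exp > now;
  const scopes = typeof claims.scope === "string" ? claims.scope.split(" ") : [];
  return notExpired && scopes.includes("custom-api-get"); // the scope required by Kong
}
```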
In the world of modern web development, security is paramount. With the rise of sophisticated cyber threats, developers need robust tools and frameworks to build secure applications. Deno, a secure runtime for JavaScript and TypeScript, has emerged as a promising solution for developers looking to enhance the security of their applications. Deno was created by Ryan Dahl, the original creator of Node.js, with a focus on addressing some of the security issues present in Node.js. Deno comes with several built-in security features that make it a compelling choice for developers concerned about application security. This guide will explore some of the key security features of Deno and how they can help you build trustworthy applications.

## Deno's "Secure by Default" Features

Deno achieves "Secure by Default" through several key design choices and built-in features:

- No file, network, or environment access by default: Unlike Node.js, which grants access to the file system, network, and environment variables by default, Deno restricts these permissions unless explicitly granted. This reduces the attack surface of applications running in Deno.
- Explicit permissions: Deno requires explicit permissions for accessing files, networks, and other resources, which are granted through command-line flags or configuration files. This helps developers understand and control the permissions their applications have.
- Built-in security features: Deno includes several built-in security features, such as a secure runtime environment (using V8 and Rust), automatic updates, and a dependency inspector to identify potentially unsafe dependencies.
- Secure standard library: Deno provides a secure standard library for common tasks, such as file I/O, networking, and cryptography, which is designed with security best practices in mind.
- Sandboxed execution: Deno uses V8's built-in sandboxing features to isolate the execution of JavaScript and TypeScript code, preventing it from accessing sensitive resources or interfering with other applications.
- No access to critical system resources: Deno does not have access to critical system resources, such as the registry (Windows) or keychain (macOS), further reducing the risk of security vulnerabilities.

Overall, Deno's "Secure by Default" approach aims to provide developers with a safer environment for building applications, helping to mitigate common security risks associated with JavaScript and TypeScript development.

## Comparison of "Secure by Default" With Node.js

Deno takes a more proactive approach to security by restricting access to resources by default and requiring explicit permissions for access. It also includes built-in security features and a secure standard library, making it more secure by default compared to Node.js.
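Before the feature-by-feature comparison, here is a minimal sketch of what "denied by default" looks like in practice; the file name and the error wording are illustrative (exact messages vary by Deno version).

```typescript
// read_config.ts - a tiny script that needs file-system access.
const text = await Deno.readTextFile("./config.json");
console.log(text);

// $ deno run read_config.ts
//   -> fails with a PermissionDenied error asking for --allow-read
// $ deno run --allow-read=./config.json read_config.ts
//   -> prints the file; no network or environment access has been granted
```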
| Feature | Deno | Node.js |
| --- | --- | --- |
| File access | Denied by default, requires explicit permission | Allowed by default |
| Network access | Denied by default, requires explicit permission | Allowed by default |
| Environment access | Denied by default, requires explicit permission | Allowed by default |
| Permissions system | Uses command-line flags or configuration files | Requires setting environment variables |
| Built-in security | Includes built-in security features | Lacks comprehensive built-in security |
| Standard library | Secure standard library | Standard library with potential vulnerabilities |
| Sandboxed execution | Uses V8's sandboxing features | Lacks built-in sandboxing features |
| Access to resources | Restricted access to critical system resources | May have access to critical system resources |

## Permission Model

Deno's permission model is central to its "Secure by Default" approach. Here's how it works:

- No implicit permissions: In Deno, access to resources like the file system, network, and environment variables is denied by default. This means that even if a script tries to access these resources, it will be blocked unless the user explicitly grants permission.
- Explicit permission requests: When a Deno script attempts to access a resource that requires permission, such as reading a file or making a network request, Deno will throw an error indicating that permission is required. The script must then be run again with the appropriate command-line flag (--allow-read, --allow-net, etc.) to grant the necessary permission.
- Fine-grained permissions: Deno's permission system is designed to be fine-grained, allowing developers to grant specific permissions for different operations. For example, a script might be granted permission to read files but not write them, or to access a specific network address but not others.
- Scoped permissions: Permissions in Deno are scoped to the script's URL. This means that if a script is granted permission to access a resource, it can only access that specific resource and not others. This helps prevent scripts from accessing resources they shouldn't have access to.
- Permissions prompt: When a script requests permission for the first time, Deno will prompt the user to grant or deny permission. This helps ensure that the user is aware of the permissions being requested and can make an informed decision about whether to grant them.

Overall, Deno's permission model is designed to give developers fine-grained control over the resources their scripts can access, while also ensuring that access is only granted when explicitly requested and authorized by the user. This helps prevent unauthorized access to sensitive resources and contributes to Deno's "Secure by Default" approach.

## Sandboxing

Sandboxing in Deno helps achieve "secure by default" by isolating the execution of JavaScript and TypeScript code within a restricted environment. This isolation prevents code from accessing sensitive resources or interfering with other applications, enhancing the security of the runtime. Here's how sandboxing helps in Deno:

- Isolation: Sandboxing in Deno uses V8's built-in sandboxing features to create a secure environment for executing code. This isolation ensures that code running in Deno cannot access resources outside of its sandbox, such as the file system or network, without explicit permission.
- Prevention of malicious behavior: By isolating code in a sandbox, Deno can prevent malicious code from causing harm to the system or other applications. Even if a piece of code is compromised, it is limited in its ability to access sensitive resources or perform malicious actions.
- Enhanced security: Sandboxing helps enhance the overall security of Deno by reducing the attack surface available to potential attackers. It adds an additional layer of protection against common security vulnerabilities, such as arbitrary code execution or privilege escalation.
- Controlled access to resources: Sandboxing allows Deno to control access to resources by requiring explicit permissions for certain actions. This helps ensure that applications only access resources they are authorized to access, reducing the risk of unauthorized access.

Overall, sandboxing plays a crucial role in Deno's "secure by default" approach by providing a secure environment for executing code and preventing malicious behavior. It helps enhance the security of applications running in Deno by limiting their access to resources and reducing the impact of potential security vulnerabilities.

## Secure Runtime APIs

Deno's secure runtime APIs provide a robust foundation for building secure applications by default. With features such as sandboxed execution, explicit permission requests, and controlled access to critical resources, Deno ensures that applications run in a secure environment. Sandboxed execution isolates code, preventing it from accessing sensitive resources or interfering with other applications. Deno's permission model requires explicit permission requests for accessing resources like the file system, network, and environment variables, reducing the risk of unintended or malicious access. Additionally, Deno's secure runtime APIs do not have access to critical system resources, further enhancing security. Overall, Deno's secure runtime APIs help developers build secure applications from the ground up, making security a core part of the development process.

## Implement Secure Runtime APIs

Implementing secure runtime APIs in Deno involves using Deno's built-in features and following best practices for secure coding. Here's how you can implement secure-by-default behavior in Deno, with examples:

- Explicitly request permissions: Use Deno's permission model to explicitly request access to resources. For example, to read from a file, you would run the script with the --allow-read flag:

```typescript
const file = await Deno.open("example.txt");
// Read from the file...
file.close();
```

- Avoid insecure features: Instead of using Node.js-style child_process for executing shell commands, use Deno's Deno.run API, which is designed to be more secure:

```typescript
const process = Deno.run({
  cmd: ["echo", "Hello, Deno!"],
});
await process.status();
```

- Enable the secure flag for import maps: When using import maps, ensure the secure flag is enabled to restrict imports to HTTPS URLs only:

```json
{
  "imports": {
    "example": "https://example.com/module.ts"
  },
  "secure": true
}
```

- Use HTTPS for network requests: Always use HTTPS for network requests. Deno's fetch API supports HTTPS by default:

```typescript
const response = await fetch("https://example.com/data.json");
const data = await response.json();
```

- Update dependencies regularly: Use Deno's built-in security audits to identify and update dependencies with known vulnerabilities:

```shell
deno audit
```

- Enable secure runtime features: Take advantage of Deno's secure runtime features, such as automatic updates and dependency inspection, to enhance the security of your application.
- Implement secure coding practices: Follow secure coding practices, such as input validation and proper error handling, to minimize security risks in your code.

## Managing Dependencies To Reduce Security Risks

To reduce security risks associated with dependencies, consider the following recommendations:

- Regularly update dependencies: Regularly update your dependencies to the latest versions, as newer versions often include security patches and bug fixes. Use tools like deno audit to identify and update dependencies with known vulnerabilities.
- Use semantic versioning: Follow semantic versioning (SemVer) for your dependencies and specify version ranges carefully in your deps.ts file to ensure that you receive bug fixes and security patches without breaking changes.
- Limit dependency scope: Only include dependencies that are necessary for your project's functionality. Avoid including unnecessary or unused dependencies, as they can introduce additional security risks.
- Use import maps: Use import maps to explicitly specify the mapping between module specifiers and URLs. This helps prevent the use of malicious or insecure dependencies by controlling which dependencies are used in your application.
- Check dependency health: Regularly check the health of your dependencies using tools like `deno doctor` or third-party services. Look for dependencies with known vulnerabilities or that are no longer actively maintained.
- Use dependency analysis tools: Use dependency analysis tools to identify and remove unused dependencies, as well as to detect and fix vulnerabilities in your dependencies.
- Review third-party code: When using third-party dependencies, review the source code and documentation to ensure that they meet your security standards. Consider using dependencies from reputable sources or well-known developers.
- Monitor for security vulnerabilities: Monitor security advisories and mailing lists for your dependencies to stay informed about potential security vulnerabilities. Consider using automated tools to monitor for vulnerabilities in your dependencies.
- Consider security frameworks: Consider using security frameworks and libraries that provide additional security features, such as input validation, authentication, and encryption, to enhance the security of your application.
- Implement secure coding practices: Follow secure coding practices to minimize security risks in your code, such as input validation, proper error handling, and using secure algorithms for cryptography.

## Secure Coding Best Practices

Secure coding practices in Deno are similar to those in other programming languages but are adapted to Deno's unique features and security model. Here are some best practices for secure coding in Deno:

- Use explicit permissions: Always use explicit permissions when accessing resources like the file system, network, or environment variables. Use the --allow-read, --allow-write, --allow-net, and other flags to grant permissions only when necessary.
- Avoid using unsafe APIs: Deno provides secure alternatives to some Node.js APIs that are considered unsafe, such as the child_process module. Use Deno's secure APIs instead.
- Sanitize input: Always sanitize user input to prevent attacks like SQL injection, XSS, and command injection. Use libraries like std/encoding/html to encode HTML entities and prevent XSS attacks.
- Use HTTPS: Always use HTTPS for network communication to ensure data integrity and confidentiality. Deno's fetch API supports HTTPS by default.
- Validate dependencies: Regularly audit and update your dependencies to ensure they are secure. Use Deno's built-in audit tools to identify and mitigate vulnerabilities in your dependencies.
- Use the secure standard library: Deno's standard library (std) provides secure implementations of common functionality. Use these modules instead of relying on third-party libraries with potential vulnerabilities.
- Avoid eval: Avoid using eval or similar functions, as they can introduce security vulnerabilities by executing arbitrary code. Use alternative approaches, such as functions and modules, to achieve the desired functionality.
- Minimize dependencies: Minimize the number of dependencies in your project to reduce the attack surface. Only include dependencies that are necessary for your application's functionality.
- Regularly update Deno: Keep Deno up to date with the latest security patches and updates to mitigate potential vulnerabilities in the runtime.
- Enable secure flags: When using import maps, enable the secure flag to restrict imports to HTTPS URLs only, preventing potential security risks associated with HTTP imports.

## Conclusion

Deno's design philosophy, which emphasizes security and simplicity, makes it an ideal choice for developers looking to build secure applications. Deno's permission model and sandboxing features ensure that applications have access only to the resources they need, reducing the risk of unauthorized access and data breaches. Additionally, Deno's secure runtime APIs provide developers with tools to implement encryption, authentication, and other security measures effectively.

By leveraging Deno's security features, developers can build applications that are not only secure but also reliable and trustworthy. Deno's emphasis on security from the ground up helps developers mitigate common security risks and build applications that users can trust. As we continue to rely more on digital technologies, the importance of building trustworthy applications cannot be overstated, and Deno provides developers with the tools they need to meet this challenge head-on.
In the age of digital transformation, businesses across the globe are increasingly relying on complex supply chain operations to streamline their processes, enhance productivity, and drive growth. However, as these supply chains become more interconnected and digitized, they also become more vulnerable to a myriad of cybersecurity threats. These threats can disrupt operations, compromise sensitive data, and ultimately undermine business integrity and customer trust.

The cybersecurity risks associated with supply chain operations are not just a concern for large corporations but also for small and medium-sized businesses. In fact, according to a report by the Ponemon Institute, 61% of U.S. companies experienced a data breach caused by a third-party vendor. This alarming statistic underscores the urgent need for businesses, developers, and cyber professionals to prioritize building resilient cybersecurity into their supply chain operations.

This article aims to provide a comprehensive guide to understanding and addressing the unique cybersecurity challenges inherent in supply chain operations. By integrating cybersecurity measures into every facet of the supply chain, businesses can not only safeguard their operations and sensitive data but also gain a competitive edge in today's digital marketplace. We will explore the current state of supply chain cybersecurity, delve into the specific threats and challenges it presents, and present potential solutions and best practices. The goal is to equip businesses, developers, and cyber professionals with the knowledge and tools they need to fortify their supply chains against the ever-evolving landscape of cyber threats.

## Understanding Supply Chain Cybersecurity

Supply chain cybersecurity is a critical aspect of risk management that focuses on protecting the supply chain from cyber threats. It involves securing all digital interactions and data exchanges that occur within the supply chain, from the initial sourcing of materials to the delivery of the final product to the customer.

A supply chain is inherently complex, involving numerous entities such as suppliers, manufacturers, distributors, and retailers. Each of these entities represents a potential point of vulnerability that can be exploited by cybercriminals. Common types of cyber threats to supply chains include malware, phishing attacks, data breaches, and more sophisticated threats like Advanced Persistent Threats (APTs).

One of the key challenges in supply chain cybersecurity is the interdependent nature of the supply chain. A single weak link in the chain can compromise the entire operation. For example, a cyberattack on a supplier could disrupt production, leading to delays, financial loss, and damage to the company's reputation.

Moreover, the growing trend of digital transformation has led to an increase in the use of technologies such as Internet of Things (IoT) devices, cloud computing, and artificial intelligence in supply chain operations. While these technologies offer numerous benefits, they also increase the surface area for potential cyberattacks.

Understanding the importance of supply chain cybersecurity and the unique threats it faces is the first step toward building a more secure and resilient supply chain. The next sections will delve deeper into the specific challenges of implementing cybersecurity in supply chain operations and discuss potential strategies and solutions.
## Challenges in Building Resilient Cybersecurity Into Supply Chain Operations

Building resilient cybersecurity into supply chain operations presents a unique set of challenges due to the complex, interconnected nature of supply chains. These challenges can broadly be categorized into technical challenges, organizational challenges, and regulatory challenges.

### Technical Challenges

The digital transformation of supply chains has led to the integration of various technologies such as IoT devices, cloud platforms, and AI-based systems. While these technologies have enhanced efficiency and productivity, they have also increased the complexity of the cybersecurity landscape. Ensuring the security of these diverse technologies, each with its own set of vulnerabilities, is a significant technical challenge.

### Organizational Challenges

Supply chains involve multiple entities, including suppliers, manufacturers, distributors, and retailers. Each of these entities may have different cybersecurity protocols, making it difficult to implement consistent security measures across the entire supply chain. Additionally, there is often a lack of awareness and understanding of cybersecurity risks among these entities, particularly small and medium-sized businesses.

### Regulatory Challenges

The regulatory environment for cybersecurity is rapidly evolving, with different countries and regions implementing their own set of rules and standards. Navigating this complex regulatory landscape and ensuring compliance can be a challenge, especially for global supply chains.

### Resource Constraints

Many organizations, particularly small and medium-sized businesses, lack the resources necessary to implement robust cybersecurity measures. This includes financial resources, as well as human resources such as skilled cybersecurity professionals.

### Evolving Cyber Threats

The nature of cyber threats is continually evolving, with cybercriminals employing increasingly sophisticated techniques. Keeping up with these threats and ensuring that cybersecurity measures are up-to-date is a constant challenge.

## Strategies for Building Resilient Cybersecurity Into Supply Chain Operations

The rise of interconnected Internet of Things (IoT) devices and Industrial Control Systems (ICS) within supply chains has significantly expanded the attack surface for cyber adversaries. Vulnerabilities in software, hardware, or human behavior can be exploited to disrupt operations, steal intellectual property, or compromise critical infrastructure. To mitigate these risks and build resilient cybersecurity within supply chains, developers and security professionals must adopt a multi-layered, technically focused approach.

### 1. Threat Intelligence Integration

Proactive threat intelligence gathering and analysis are crucial in today's cyber landscape. Integrating threat intelligence feeds specific to the supply chain industry allows developers and security professionals to:

- Identify emerging threats: Identify and prioritize emerging threats before they are weaponized. This provides valuable time to develop patches, update security configurations, and implement mitigation strategies.
- Focus vulnerability assessments: Focus vulnerability assessments on the most relevant threats facing the supply chain. This ensures resources are allocated efficiently and critical vulnerabilities are addressed promptly.

### 2. Secure Coding Practices and SDLC Integration

Building security into software from the outset is paramount.
Here are key strategies for developers:

- Secure coding training: Implement mandatory secure coding training programs for developers. These programs should cover secure coding practices, common vulnerabilities, and coding standards specific to the supply chain industry.
- Static code analysis tools: Utilize static code analysis tools to identify potential vulnerabilities within code early in the development lifecycle. This allows for early remediation and reduces the risk of vulnerabilities being introduced into production systems.
- Secure Software Development Lifecycles (SDLCs): Integrate security considerations throughout the entire SDLC. This includes security requirements gathering, threat modeling, code reviews, and penetration testing to ensure the final product is secure and resilient.

### 3. Zero Trust Security Model Implementation

Zero Trust security models assume no inherent trust within the network. This principle should be applied to all aspects of the supply chain:

- Least Privilege Access Control: Implement the principle of least privilege for all users, devices, and applications within the supply chain network. Grant access only to the minimum resources required for users to perform their designated tasks.
- Multi-Factor Authentication (MFA): Enforce strong authentication protocols, including multi-factor authentication (MFA), for all access attempts across the entire supply chain ecosystem.
- Continuous monitoring and microsegmentation: Implement continuous monitoring of network activity and system logs to detect suspicious behavior. Consider network segmentation and micro-segmentation strategies to limit the potential impact of a successful cyberattack.

### 4. Data Encryption in Transit and at Rest

Data security is paramount within the supply chain. To ensure the confidentiality and integrity of sensitive data:

- Data encryption in transit: Encrypt all data in transit between systems and devices within the supply chain. This protects sensitive information from interception during network communication.
- Data encryption at rest: Encrypt all sensitive data at rest on storage devices and databases throughout the supply chain. This ensures that even if an attacker gains access to storage systems, the data will be unreadable.

### 5. Continuous Vulnerability Management

Security vulnerabilities are constantly being discovered and exploited. A comprehensive vulnerability management program should be implemented:

- Vulnerability scanning and patch management: Regularly conduct vulnerability scans across all IT and ICS systems within the supply chain. Prioritize patching critical vulnerabilities identified during scans to minimize the window of exploitation.
- Penetration testing: Conduct regular penetration testing to identify exploitable weaknesses in security controls and configurations. This proactive approach simulates real-world attacks, helping to uncover vulnerabilities that may be missed by automated scans.

### 6. Secure Configuration Management

Maintaining secure configurations of all systems across the supply chain is essential. This includes:

- Automated configuration management tools: Implement automated configuration management tools to ensure consistent and secure configurations across all devices and systems within the supply chain.
- Configuration baselines and change management: Establish security baselines for all system configurations and implement a robust change management process to track and review any modifications.
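Dedicated configuration-management tooling handles this at scale, but the core baseline check can be pictured with a small sketch; the setting names and expected values below are invented purely for illustration.

```typescript
// Compare a device's reported settings against an approved security baseline
// and report any drift for review. Illustrative only.
type Config = Record<string, string | number | boolean>;

const baseline: Config = {
  sshPasswordAuth: false,   // key-based authentication only
  tlsMinVersion: "1.2",
  adminPortExposed: false,
  autoPatch: true,
};

function findDrift(reported: Config): string[] {
  return Object.entries(baseline)
    .filter(([key, expected]) => reported[key] !== expected)
    .map(([key, expected]) => `${key}: expected ${expected}, found ${reported[key]}`);
}

// A device whose admin port was left open would be flagged:
console.log(findDrift({
  sshPasswordAuth: false,
  tlsMinVersion: "1.2",
  adminPortExposed: true,
  autoPatch: true,
}));
```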
### 7. Security Awareness Training

Human error is often a significant factor in successful cyberattacks. Ongoing security awareness training for all stakeholders within the supply chain is crucial:

- Phishing and social engineering: Educate employees on recognizing phishing scams and social engineering tactics commonly used by cybercriminals. Emphasize the importance of verifying sender legitimacy and avoiding suspicious links or attachments in emails.
- Secure coding practices: For developers, security awareness training should cover secure coding practices, common vulnerabilities in supply chain software, and the importance of secure coding throughout the SDLC.
- Supply chain-specific threats: Train all employees on the specific cyber threats relevant to the supply chain industry. This includes understanding the risks associated with IoT devices, ICS vulnerabilities, and data security best practices within the supply chain ecosystem.

### 8. Vendor Risk Management

Building a secure supply chain requires extending security considerations beyond your organization's internal systems. Vendor Risk Management (VRM) is a critical practice for identifying and mitigating cybersecurity risks posed by third-party vendors throughout the supply chain ecosystem.

VRM Best Practices:

- Vendor assessment: Conduct thorough assessments of the cybersecurity posture of potential and existing vendors. This assessment should evaluate the vendor's:
  - Security controls and incident response plans
  - Patch management practices to ensure timely vulnerability remediation
  - Data security measures like encryption and access controls
  - Compliance with relevant security regulations (e.g., PCI DSS, HIPAA)
- Contractual security considerations: Integrate security expectations and accountability clauses within vendor contracts. This ensures clarity on:
  - The vendor's responsibility for maintaining secure systems and data handling practices
  - Reporting requirements for security incidents or vulnerabilities
  - The right to conduct security audits of the vendor's systems

## Case Studies

To illustrate the importance of building resilient cybersecurity into supply chain operations and how it can be achieved, let's consider two case studies.

### Case Study 1: Building Cybersecurity Resilience in a Global Pharmaceutical Supply Chain

Company: Acme Pharmaceuticals, a multinational pharmaceutical company with a complex global supply chain network

Challenge: Acme faced increasing concerns about cybersecurity threats targeting their supply chain. These threats included potential attacks on:

- Manufacturing facilities of third-party vendors
- Logistics and transportation systems used to deliver critical materials and finished products
- Intellectual property theft of proprietary drug formulas

Strategies implemented:

- Vendor Risk Management: Acme implemented a rigorous VRM program. They assessed the cybersecurity posture of all major vendors, including raw material suppliers, contract manufacturers, and logistics providers. Security controls, data security practices, and incident response plans were evaluated. Contracts were updated to include security expectations and reporting requirements for vulnerabilities or breaches.
- Threat intelligence integration: Acme subscribed to a threat intelligence feed specializing in the pharmaceutical industry. This feed provided insights into emerging cyber threats targeting the healthcare sector. The intelligence was used to prioritize vendor assessments and identify potential weaknesses in their own security posture.
- Secure coding practices: Acme partnered with key vendors to promote secure coding practices within their software development lifecycles. This included training for vendor developers on secure coding principles and code review processes to identify and eliminate vulnerabilities.
- Data encryption in transit and at rest: Acme implemented data encryption for all sensitive data throughout the supply chain. This included encrypting data during transportation between facilities and at rest on storage devices and databases.
- Continuous monitoring and microsegmentation: Acme implemented continuous monitoring of their network and vendor systems. Network segmentation and micro-segmentation strategies were employed to limit the potential impact of a successful cyberattack.

Results: By implementing these strategies, Acme significantly improved the cybersecurity resilience of their supply chain. Vendor assessments identified and mitigated potential security risks. Threat intelligence provided early warnings of emerging threats. Secure coding practices within the vendor network reduced the likelihood of software vulnerabilities. Data encryption protected sensitive information, and continuous monitoring allowed for rapid detection of and response to suspicious activity.

### Case Study 2: Securing a Just-In-Time (JIT) Supply Chain for a Tech Startup

Company: NovaTech, a fast-growing tech startup that relies on a Just-in-Time (JIT) inventory management system for their electronics manufacturing

Challenge: NovaTech's JIT system minimized inventory storage costs but also increased reliance on a network of interconnected suppliers and manufacturers. This complex ecosystem presented a larger attack surface for potential cyberattacks. Security concerns included:

- Disruptions to production caused by cyberattacks on supplier IT systems
- Theft of intellectual property related to NovaTech's hardware designs
- Ransomware attacks on critical manufacturing equipment within the supply chain

Strategies implemented:

- Zero Trust security model: NovaTech implemented a Zero Trust security model across their entire supply chain. This model assumed no inherent trust within the network and required continuous verification for all users, devices, and applications attempting to access resources.
- Secure configuration management: Automated configuration management tools were implemented to ensure consistent and secure configurations across all devices and systems within the supply chain. This included routers, switches, and manufacturing equipment used by NovaTech and their vendors.
- Security awareness training: NovaTech conducted comprehensive security awareness training programs for their employees and partnered with vendors to offer similar training for their workforce. This training emphasized best practices for secure password management, phishing email identification, and reporting suspicious activity.
- Penetration testing: NovaTech conducted regular penetration testing of their own systems and, when possible, collaborated with key vendors to conduct penetration testing of their critical infrastructure. This proactive approach helped identify and address potential vulnerabilities before they could be exploited by cybercriminals.
- Cybersecurity incident response plan: A comprehensive incident response plan was developed and tested to ensure a coordinated and rapid response in the event of a cyberattack. The plan outlined roles and responsibilities for NovaTech and their vendors during a security incident.
Results: NovaTech's commitment to cybersecurity throughout their JIT supply chain significantly reduced their risk of cyberattacks. The Zero Trust model ensured that only authorized users and devices could access critical resources. Secure configuration management minimized the risk of misconfigured systems creating vulnerabilities. Security awareness training empowered employees and vendors to identify and report suspicious activity. Penetration testing identified and addressed potential weaknesses in security posture. A well-defined incident response plan ensured a swift and coordinated response to security incidents.

## Conclusion

In conclusion, building a resilient cybersecurity system within the supply chain is a continuous, collaborative effort that involves all stakeholders. The importance of proactive threat intelligence gathering and analysis cannot be overstated, as it provides crucial insights for prioritizing security measures. Additionally, extending security considerations to include Vendor Risk Management (VRM) and adopting a Zero Trust security model are key strategies for defending against evolving cyber threats, particularly in complex and interconnected systems like Just-In-Time (JIT) supply chains. Secure configuration management also plays a vital role in maintaining a consistent security posture. Ultimately, the commitment to continuous monitoring, layered security, and active participation from all stakeholders is what will safeguard an organization's operations, data, and reputation in the digital marketplace.
The rise in cybercrime, coupled with the pressing need for fresh products and the push to speed up development, is making the adoption of DevSecOps essential. Industry analysts note that about 77% of development teams are already on board with this approach. Nowadays, an increasing number of businesses are opting for Application Security Orchestration and Correlation (ASOC) within DevSecOps frameworks to ensure secure software development.

## ASOC-Type DevSecOps Systems

DevSecOps stands out from traditional development methods by weaving security into every phase of software creation right from the start. There are many ways to adopt DevSecOps. For those looking to avoid complicated setups, the market offers ASOC-based solutions. These solutions can help companies save time, money, and labor resources while also reducing the time to market for their products. ASOC platforms enhance the effectiveness of security testing and maintain the security of software in development without delaying delivery.

Gartner's Hype Cycle for Application Security, 2021, indicated that the market penetration of these solutions ranged from 5 to 20% among the intended clients. The practical uptake of this technology is low primarily because of limited awareness about its availability and benefits.

ASOC solutions incorporate Application Security Testing (AST) tools into existing CI/CD pipelines, facilitating transparent and real-time collaboration between engineering teams and information security experts. These platforms offer orchestration capabilities, meaning they set up and execute security pipelines, as well as carry out correlation analysis of issues identified by AST tools, further aggregating this data for comprehensive insight. ASOC tools can generate documents and reports on security and associated business risks based on their analysis. By orchestrating and correlating within the DevSecOps framework, they handle an extensive array of data from development, testing, and security processes in real time. This wealth of information enables a dynamic feedback loop with the platform, allowing for intelligent oversight of the entire secure software lifecycle.

## Smart Control Setup

Data analysis tools can be integrated into ASOC-class platforms by developing an additional module dedicated to consolidating, storing, and analyzing the collected information. Here is how it is done:

1. Gather data from software development and security scanning tools, then upload it into a dedicated data warehouse.
2. Establish a set of metrics derived from the collected data.
3. Incorporate business context into these metrics and identify key performance indicators (KPIs).
4. Create dashboards to manage the DevSecOps platform using the original data, metrics, and KPIs.

Artificial intelligence and machine learning are revolutionizing how we analyze collected data, enabling us to swiftly adapt to changes and refine the software delivery process. To leverage smart management of the ASOC platform, it is possible to tweak the implementation steps for the data-handling module. The initial three steps remain unchanged, but the fourth step involves employing AI and ML to process the raw data, metrics, and KPIs. This allows for the creation of dashboards that streamline the management of the DevSecOps platform based on this enhanced data analysis. Through the lens of ASOC practices, AI and ML significantly boost the efficiency of orchestration and correlation tasks.
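To make the data-handling steps above a bit more concrete, here is a small sketch of turning collected scan findings into a metric and a KPI that a dashboard could display. The field names and the seven-day target are assumptions for illustration, not the schema of any particular ASOC product.

```typescript
// Steps 2 and 3 in miniature: derive a metric from raw findings and attach a KPI.
interface Finding {
  tool: string;                                        // e.g. "SAST", "DAST", "SCA"
  severity: "critical" | "high" | "medium" | "low";
  openedAt: Date;
  closedAt?: Date;                                     // undefined while still open
}

// Metric: mean time to remediate (in days) for resolved critical findings.
function meanTimeToRemediate(findings: Finding[]): number {
  const closed = findings.filter((f) => f.severity === "critical" && f.closedAt);
  if (closed.length === 0) return 0;
  const totalDays = closed.reduce(
    (sum, f) => sum + (f.closedAt!.getTime() - f.openedAt.getTime()) / 86_400_000,
    0,
  );
  return totalDays / closed.length;
}

// KPI with business context: critical findings should be fixed within seven days.
const meetsRemediationKpi = (findings: Finding[]) => meanTimeToRemediate(findings) <= 7;
```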
## Orchestration

### Automated Software Quality Assurance

AI within ASOC-class platforms has the smarts to dynamically set up the components and criteria needed at each checkpoint for assessing software quality, drawing from a pool of collected data and metrics. This AI-driven approach to defining quality control points lets you know if a build is primed for the next phase in its lifecycle. Leveraging AI, you can move artifacts through the DevSecOps pipeline with maximum automation. Decisions on progression are made after scanning builds in different environments, paving the way for swift and consistent releases.

Automated quality control checkpoints can encompass various Application Security Testing practices. The configuration of these checkpoints can dynamically adapt depending on the stage of the security pipeline. As such, it is feasible to establish checkpoints within CI/CD pipelines and tailor their criteria, offering a powerful means to oversee and manage software quality.

### CI/CD Pipeline as Code

For large-scale DevSecOps implementations, managing CI/CD pipelines as code presents clear benefits. Companies that adopt this strategy gain a powerful tool to enhance their software deployment, launch, management, and monitoring processes. Modern ASOC solutions enable the construction of security pipelines "out of the box" at the click of a button. AI and ML technologies improve this by automatically identifying software components and setting up CI/CD pipelines that meet exact quality standards.

AI assists in cataloging software artifacts, automatically setting up end-to-end pipelines, and proactively integrating calls to information security tools, all while being guided by the context and various parameters of the product under development. AI technologies within ASOC frameworks also dynamically adjust the sequence and quantity of software quality control checkpoints within each CI/CD pipeline. This method significantly speeds up product releases, as the entire process - from the initial commit to the launch of the final version - is meticulously overseen.

## Correlation

### Application Vulnerability Correlation

ASOC technologies enable the creation of an Application Vulnerability Correlation (AVC) mechanism that correlates security issues using data from software testing tools. This process involves an ML model that can automatically sift through the noise to eliminate false positives, spot duplicates and similar security issues, and then consolidate them into a single identified defect. This mechanism significantly reduces the time needed to address security issues, allowing the team to concentrate on critical vulnerabilities and enhance the speed of threat detection in the developed software.

### Software Vulnerabilities Quick-Fix Guides

Any set of detected issues always contains common vulnerabilities, including some critical ones, that can be fixed easily. AVC technology steps in to identify and rank information security vulnerabilities, offering automated advice on how to fix these issues. ASOC platforms collect vulnerability data from a range of security scanners, including SAST, SCA, DAST, and others. By integrating AVC technologies and providing them with comprehensive standards and detailed secure coding recommendations, it becomes possible to generate secure code templates. These templates are customized to align with the specifics of the company's DevSecOps implementation, further enhancing security measures.
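ASOC products implement correlation with trained models, but the underlying idea can be pictured with a much simpler sketch: group findings from different scanners by rule and location, drop entries already triaged as false positives, and keep one consolidated defect per group. The field names below are assumptions for illustration.

```typescript
// Naive correlation: deduplicate findings reported by multiple scanners.
interface RawFinding {
  scanner: string;          // "SAST", "DAST", "SCA", ...
  ruleId: string;           // e.g. a CWE or rule identifier
  file: string;
  line: number;
  falsePositive: boolean;   // set by triage or a noise-filtering model
}

function correlate(findings: RawFinding[]): RawFinding[] {
  const consolidated = new Map<string, RawFinding>();
  for (const f of findings) {
    if (f.falsePositive) continue;                 // filter out the noise
    const key = `${f.ruleId}:${f.file}:${f.line}`; // duplicates collapse onto one key
    if (!consolidated.has(key)) consolidated.set(key, f);
  }
  return [...consolidated.values()];
}
```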
## Security Compliance Management Simplified

In software development, adhering to industry security standards and regulatory requirements is always a critical aspect. The process of managing these requirements can be fully automated within the product lifecycle, easing task execution within the company. Automated checks help ensure that all standards and requirements are met. With ASOC platforms, AI and ML technologies enable ongoing monitoring of security compliance, leveraging software quality checkpoints and predictive analytics. This monitoring provides the development team with a clear verdict on whether the developed software fulfills the necessary criteria.

## Evaluating the Return on Investment for ASOC Platforms

Investing in ASOC platforms requires an assessment of the potential return on investment (ROI), which includes considerations of cost, time savings, and improved security. To evaluate ROI:

- Cost savings: Calculate the cost savings resulting from the reduced need for manual security testing and the potential reduction in security incidents and breaches.
- Time efficiency: Assess the time saved by automating security testing and integration within the CI/CD pipeline. Faster detection and remediation of vulnerabilities accelerate development cycles.
- Improved security: Consider the value of a stronger security posture, including the potential to avoid regulatory fines, protect brand reputation, and secure customer trust.
- Scalability: Evaluate the ability of ASOC platforms to scale with your development needs, potentially offering greater long-term value as your organization grows.

## Conclusion

ASOC platforms are powerful tools for adopting DevSecOps, enabling companies to not only establish secure development processes but also automate them as much as possible. The integration of AI and ML significantly cuts down on manual work and speeds up the delivery of software to the market. ASOC tools are at the forefront of the DevSecOps evolution. They enable the resolution of security issues for software of any architecture and complexity without compromising delivery speed.

However, not many organizations are aware of ASOC platforms. This leads many companies to stick with traditional, less scalable methods of implementing DevSecOps through isolated automation efforts. Despite this, the market already offers effective solutions that can significantly ease the workload of software professionals. ASOC platforms employing AI/ML technologies merge the analysis and management of security within existing DevOps workflows, considerably shortening the DevSecOps implementation timeline to just a few weeks.
[Comic] A humorous take on patching security vulnerabilities: in the top panel, a door labeled "xz" is boarded up with planks of wood, symbolizing the closure of a backdoor vulnerability. In the bottom panel, while two characters celebrate the successful patching of the "xz" door, another character casually walks through a different open door nearby, suggesting that vulnerabilities may still exist elsewhere despite efforts to patch them.
The NIST AI RMF (National Institute of Standards and Technology Artificial Intelligence Risk Management Framework) provides a structured framework for identifying, assessing, and mitigating risks associated with artificial intelligence technologies. It addresses complex challenges such as algorithmic bias, data privacy, and ethical considerations, helping organizations ensure the security, reliability, and ethical use of AI systems.

## How Do AI Risks Differ From Traditional Software Risks?

AI risks differ from traditional software risks in several key ways:

- Complexity: AI systems often involve complex algorithms, machine learning models, and large datasets, which can introduce new and unpredictable risks.
- Algorithmic bias: AI systems can exhibit bias or discrimination based on factors such as the training data used to develop the models. This can result in unintended outcomes and consequences that rarely arise in traditional software systems.
- Opacity and lack of interpretability: AI algorithms, particularly deep learning models, can be opaque and difficult to interpret. This can make it challenging to understand how AI systems make decisions or predictions, leading to risks related to accountability, transparency, and trust.
- Data quality and bias: AI systems rely heavily on data, and issues such as data quality, incompleteness, and bias can significantly impact their performance and reliability. Traditional software may also rely on data, but the implications of data quality issues are more pronounced in AI systems, affecting the accuracy and effectiveness of AI-driven decisions.
- Adversarial attacks: AI systems may be vulnerable to adversarial attacks, where malicious actors manipulate inputs to deceive or manipulate the system's behavior. Adversarial attacks exploit vulnerabilities in AI algorithms and can lead to security breaches, posing distinct risks compared to traditional software security threats.
- Ethical and societal implications: AI technologies raise ethical and societal concerns that may not be as prevalent in traditional software systems. These concerns include issues such as privacy violations, job displacement, loss of autonomy, and reinforcement of biases.
- Regulatory and compliance challenges: AI technologies are subject to a rapidly evolving regulatory landscape, with new laws and regulations emerging to address AI-specific risks and challenges. Traditional software may be subject to similar regulations, but AI technologies often raise novel compliance issues related to fairness, accountability, transparency, and bias mitigation.
- Cost: The expense associated with managing an AI system exceeds that of regular software, as it often requires ongoing tuning to align with the latest models, training, and self-updating processes.

Effectively managing AI risks requires specialized knowledge, tools, and frameworks tailored to the unique characteristics of AI technologies and their potential impact on individuals, organizations, and society as a whole.

## Key Considerations of the AI RMF

The AI RMF refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. The AI RMF helps organizations effectively identify, assess, mitigate, and monitor risks associated with AI technologies throughout the lifecycle.
It addresses various challenges, such as data quality issues, model bias, adversarial attacks, algorithmic transparency, and ethical considerations. Key considerations include:

- Risk identification
- Risk assessment and prioritization
- Control selection and tailoring
- Implementation and integration
- Monitoring and evaluation
- Ethical and social implications
- Interdisciplinary collaboration

## Key Functions of the Framework

The following are the essential functions within the NIST AI RMF that help organizations effectively identify, assess, mitigate, and monitor risks associated with AI technologies.

Image courtesy of NIST AI RMF Playbook

### Govern

Governance in the NIST AI RMF refers to the establishment of policies, processes, structures, and mechanisms to ensure effective oversight, accountability, and decision-making related to AI risk management. This includes defining roles and responsibilities, setting risk tolerance levels, establishing policies and procedures, and ensuring compliance with regulatory requirements and organizational objectives. Governance ensures that AI risk management activities are aligned with organizational priorities, stakeholder expectations, and ethical standards.

### Map

Mapping in the NIST AI RMF involves identifying and categorizing AI-related risks, threats, vulnerabilities, and controls within the context of the organization's AI ecosystem. This includes mapping AI system components, interfaces, data flows, dependencies, and associated risks to understand the broader risk landscape. Mapping helps organizations visualize and prioritize AI-related risks, enabling them to develop targeted risk management strategies and allocate resources effectively. It may also involve mapping AI risks to established frameworks, standards, or regulations to ensure comprehensive coverage and compliance.

### Measure

Measurement in the NIST AI RMF involves assessing and quantifying AI-related risks, controls, and performance metrics to evaluate the effectiveness of risk management efforts. This includes conducting risk assessments, control evaluations, and performance monitoring activities to measure the impact of AI risks on organizational objectives and stakeholder interests. Measurement helps organizations identify areas for improvement, track progress over time, and demonstrate the effectiveness of AI risk management practices to stakeholders. It may also involve benchmarking against industry standards or best practices to identify areas for improvement and drive continuous improvement.

### Manage

Management in the NIST AI RMF refers to the implementation of risk management strategies, controls, and mitigation measures to address identified AI-related risks effectively. This includes implementing selected controls, developing risk treatment plans, and monitoring AI systems' security posture and performance. Management activities involve coordinating cross-functional teams, communicating with stakeholders, and adapting risk management practices based on changing risk environments. Effective risk management helps organizations minimize the impact of AI risks on organizational objectives, stakeholders, and operations while maximizing the benefits of AI technologies.

## Key Components of the Framework

The NIST AI RMF consists of two primary components.

### Foundational Information

This part includes introductory materials, background information, and context-setting elements that provide an overview of the framework's purpose, scope, and objectives.
It may include definitions and guiding principles relevant to managing risks associated with artificial intelligence (AI) technologies.

Core and Profiles

This part comprises the core set of processes, activities, and tasks necessary for managing AI-related risks, along with customizable profiles that organizations can tailor to their specific needs and requirements. The core provides a foundation for risk management, while profiles allow organizations to adapt the framework to their unique circumstances, addressing industry-specific challenges, regulatory requirements, and organizational priorities.

Significance of AI RMF Based on Roles

Benefits for Developers

- Guidance on risk management: The AI RMF provides developers with structured guidance on identifying, assessing, mitigating, and monitoring risks associated with AI technologies.
- Compliance with standards and regulations: The AI RMF helps developers ensure compliance with relevant standards, regulations, and best practices governing AI technologies. By referencing established NIST guidelines, such as NIST SP 800-53, developers can identify applicable security and privacy controls for AI systems.
- Enhanced security and privacy: By incorporating security and privacy controls recommended in the AI RMF, developers can mitigate the risks of data breaches, unauthorized access, and other security threats associated with AI systems.
- Risk awareness and mitigation: The AI RMF raises developers' awareness of potential risks and vulnerabilities inherent in AI technologies, such as data quality issues, model bias, adversarial attacks, and algorithmic transparency.
- Cross-disciplinary collaboration: The AI RMF emphasizes the importance of interdisciplinary collaboration between developers, cybersecurity experts, data scientists, ethicists, legal professionals, and other stakeholders in managing AI-related risks.
- Quality assurance and testing: The AI RMF encourages developers to incorporate risk management principles into the testing and validation processes for AI systems.

Benefits for Architects

- Designing secure and resilient systems: Architects play a crucial role in designing the architecture of AI systems. By incorporating principles and guidelines from the AI RMF into the system architecture, architects can design AI systems that are secure, resilient, and able to effectively manage risks associated with AI technologies. This includes designing robust data pipelines, implementing secure APIs, and integrating appropriate security controls to mitigate potential vulnerabilities.
- Ensuring compliance and governance: Architects are responsible for ensuring that AI systems comply with relevant regulations, standards, and organizational policies. By integrating compliance requirements into the system architecture, architects can ensure that AI systems adhere to legal and ethical standards while protecting sensitive information and user privacy.
- Addressing ethical and societal implications: Architects need to consider the ethical and societal implications of AI technologies when designing system architectures. Architects can leverage the AI RMF to incorporate mechanisms for ethical decision-making, algorithmic transparency, and user consent into the system architecture, ensuring that AI systems are developed and deployed responsibly.
- Supporting continuous improvement: The AI RMF promotes a culture of continuous improvement in AI risk management practices. Architects can leverage the AI RMF to establish mechanisms for monitoring and evaluating the security posture and performance of AI systems over time.
Comparison of AI Risk Frameworks

NIST AI RMF
- Strengths: Comprehensive coverage of AI-specific risks; integration with established NIST cybersecurity guidelines; interdisciplinary approach; alignment with regulatory requirements; emphasis on continuous improvement
- Weaknesses: May require customization to address specific organizational needs; focus on the US-centric regulatory landscape

ISO/IEC 27090
- Strengths: Widely recognized international standard; designed to integrate seamlessly with ISO/IEC 27001, the international standard for information security management systems (ISMS); provides comprehensive guidance on managing risks associated with AI technologies; follows a structured approach, incorporating the Plan-Do-Check-Act (PDCA) cycle
- Weaknesses: Lacks specificity in certain areas, as it aims to provide general guidance applicable to a wide range of organizations and industries; implementation can be complex, particularly for organizations that are new to information security management or AI risk management; the standard's comprehensive nature and technical language may require significant expertise and resources to understand and implement effectively

IEEE P7006
- Strengths: Focus on data protection considerations in AI systems, particularly those related to personal data; comprehensive guidelines for ensuring privacy, fairness, transparency, and accountability
- Weaknesses: Scope limited to personal data protection; may not cover all aspects of AI risk management

Fairness, Accountability, and Transparency (FAT) Framework
- Strengths: Emphasis on ethical dimensions of AI, including fairness, accountability, transparency, and explainability; provides guidelines for evaluating and mitigating ethical risks
- Weaknesses: Not a comprehensive risk management framework; may lack detailed guidance on technical security controls

IBM AI Governance Framework
- Strengths: Focus on governance aspects of AI projects; covers various aspects of the AI lifecycle, including data management, model development, deployment, and monitoring; emphasis on transparency, fairness, and trustworthiness
- Weaknesses: Developed by a specific vendor and may be perceived as biased; may not fully address regulatory requirements beyond IBM's scope

Google AI Principles
- Strengths: Clear principles for ethical AI development and deployment; emphasis on fairness, privacy, accountability, and societal impact; provides guidance for responsible AI practices
- Weaknesses: Not a comprehensive risk management framework; lacks detailed implementation guidance

AI Ethics Guidelines from Industry Consortia
- Strengths: Developed by diverse stakeholders, including industry, academia, and civil society; provides a broad perspective on ethical AI considerations; emphasis on collaboration and knowledge sharing
- Weaknesses: Not comprehensive risk management frameworks; may lack detailed implementation guidance

Conclusion

The NIST AI Risk Management Framework offers a comprehensive approach to addressing the complex challenges associated with managing risks in artificial intelligence (AI) technologies. Through its foundational information and core components, the framework provides organizations with a structured and adaptable methodology for identifying, assessing, mitigating, and monitoring risks throughout the AI lifecycle.
By leveraging the principles and guidelines outlined in the framework, organizations can enhance the security, reliability, and ethical use of AI systems while ensuring compliance with regulatory requirements and stakeholder expectations. However, it's essential to recognize that effectively managing AI-related risks requires ongoing diligence, collaboration, and adaptation to evolving technological and regulatory landscapes. By embracing the NIST AI RMF as a guiding framework, organizations can navigate the complexities of AI risk management with confidence and responsibility, ultimately fostering trust and innovation in the responsible deployment of AI technologies.
Managing your secrets well is imperative in software development. It's not just about avoiding hardcoding secrets into your code, your CI/CD configurations, and more. It's about implementing tools and practices that make good secrets management almost second nature.

A Quick Overview of Secrets Management

What is a secret? It's any bit of code, text, or binary data that provides access to a resource or data that should have restricted access. Almost every software development process involves secrets: credentials for your developers to access your version control system (VCS) like GitHub, credentials for a microservice to access a database, and credentials for your CI/CD system to push new artifacts to production.

There are three main elements to secrets management:

- How are you making them available to the people/resources that need them?
- How are you managing the lifecycle/rotation of your secrets?
- How are you scanning to ensure that the secrets are not being accidentally exposed?

We'll look at elements one and two in terms of the secrets managers in this article. For element three, well, I'm biased toward GitGuardian because I work there (disclaimer achieved). Accidentally exposed secrets don't necessarily get a hacker into the full treasure trove, but even if they help a hacker get a foot in the door, it's more risk than you want. That's why secrets scanning should be a part of a healthy secrets management strategy.

What To Look for in a Secrets Management Tool

In the Secrets Management Maturity Model, hardcoding secrets into code in plaintext and then maybe running a manual scan for them is at the very bottom. Manually managing unencrypted secrets, whether hardcoded or in a .env file, is considered immature. To get to an intermediate level, you need to store them outside your code, encrypted, and preferably well-scoped and automatically rotated.

It's important to differentiate between a key management system and a secrets management system. Key management systems are meant to generate and manage cryptographic keys. Secrets managers will take keys, passwords, connection strings, cryptographic salts, and more, encrypt and store them, and then provide access to them for personnel and infrastructure in a secure manner. For example, AWS Key Management Service (KMS) and AWS Secrets Manager (discussed below) are related but distinct offerings from Amazon.

Besides providing a secure way to store and provide access to secrets, a solid solution will offer:

- Encryption in transit and at rest: The secrets are never stored or transmitted unencrypted.
- Automated secrets rotation: The tool can request changes to secrets and update them in its files in an automated manner on a set schedule.
- Single source of truth: The latest version of any secret your developers/resources need will be found there, and it is updated in real time as keys are rotated.
- Role/identity scoped access: Different systems or users are granted access to only the secrets they need under the principle of least privilege. That means a microservice that accesses a MongoDB instance only gets credentials to access that specific instance and can't pull the admin credentials for your container registry.
- Integrations and SDKs: The service has APIs with officially blessed software to connect common resources like CI/CD systems or implement access in your team's programming language/framework of choice.
- Logging and auditing: You need to check your systems periodically for anomalous results as a standard practice; if you get hacked, the audit trail can help you track how and when each secret was accessed.
- Budget and scope appropriate: If you're bootstrapping with 5 developers, your needs will differ from those of a 2,000-developer company with federal contracts. Being able to pay for what you need at the level you need it is an important business consideration.

The Secrets Manager List

CyberArk Conjur Secrets Manager Enterprise

Conjur was founded in 2011 and was acquired by CyberArk in 2017. It's grown to be one of the premier secrets management solutions thanks to its robust feature set and large number of SDKs and integrations. With Role-Based Access Controls (RBAC) and multiple authentication mechanisms, it makes it easy to get up and running using existing integrations for top developer tools like Ansible, AWS CloudFormation, Jenkins, GitHub Actions, Azure DevOps, and more.

You can scope secrets access to the developers and systems that need the secrets. For example, a Developer role that accesses Conjur for a database secret might get a connection string for a test database when they're testing their app locally, while the application running in production gets the production database credentials.

The CyberArk site boasts an extensive documentation set and robust REST API documentation to help you get up to speed, while their SDKs and integrations smooth out a lot of the speed bumps. In addition, GitGuardian and CyberArk have partnered to create a bridge to integrate CyberArk Conjur and GitGuardian's Has My Secrets Leaked. This is now available as an open-source project on GitHub, providing a unique solution for security teams to detect leaks and manage secrets seamlessly.

Google Cloud Secret Manager

When it comes to choosing Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, it's usually going to come down to where you're already investing your time and money. In a multi-cloud architecture, you might have resources spread across the three, but if you're automatically rotating secrets and trying to create consistency for your services, you'll likely settle on one secrets manager as a single source of truth for third-party secrets rather than spreading secrets across multiple services. While Google is behind Amazon and Microsoft in market share, it sports the features you expect from a service competing for that market, including:

- Encryption at rest and in transit for your secrets
- CLI and SDK access to secrets
- Logging and audit trails
- Permissioning via IAM
- CI/CD integrations with GitHub Actions, HashiCorp Terraform, and more
- Client libraries for eight popular programming languages

Again, whether to choose it is more about where you're investing your time and money rather than a killer function in most cases.

AWS Secrets Manager

Everyone with an AWS certification, whether developer or architect, has heard of or used AWS Secrets Manager. It's easy to get it mixed up with AWS Key Management Service (KMS), but Secrets Manager is simpler. KMS creates, stores, and manages cryptographic keys. Secrets Manager lets you put stuff in a vault and retrieve it when needed. A nice feature of AWS Secrets Manager is that it can connect with a CI/CD tool like GitHub Actions through OpenID Connect (OIDC), and you can create different IAM roles with tightly scoped permissions, assigning them not only to individual repositories but to specific branches.
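On the application side, retrieval from AWS Secrets Manager is a short SDK call. Here is a minimal sketch using the AWS SDK for Java v2; the secret name and region are illustrative placeholders, and credentials are assumed to come from the runtime environment (for example, an instance or task role) rather than from code:

Java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;

public class OrdersDbSecretLoader {

    public static void main(String[] args) {
        try (SecretsManagerClient client = SecretsManagerClient.builder()
                .region(Region.US_EAST_1)
                .build()) {

            // "prod/orders-db" is a hypothetical secret name used only for illustration.
            GetSecretValueRequest request = GetSecretValueRequest.builder()
                    .secretId("prod/orders-db")
                    .build();

            GetSecretValueResponse response = client.getSecretValue(request);

            // Hand the value to your connection pool or config; never log the secret itself.
            String connectionString = response.secretString();
            System.out.println("Fetched secret of length " + connectionString.length());
        }
    }
}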
AWS Secrets Manager can store and retrieve non-AWS secrets as well as use the roles to provide access to AWS services to a CI/CD tool like GitHub Actions. Using AWS Lambda, key rotation can be automated, which is probably the most efficient way, as the key is updated in the secrets manager milliseconds after it's changed, producing the minimum amount of disruption. As with any AWS solution, it's a good idea to create multi-region or multi-availability-zone replicas of your secrets, so if your secrets are destroyed by a fire or taken offline by an absent-minded backhoe operator, you can fail over to a secondary source automatically. At $0.40 per secret per month, it's not a huge cost for added resiliency.

Azure Key Vault

Azure is the #2 player in the cloud space after AWS. Their promotional literature touts their compatibility with FIPS 140-2 standards and Hardware Security Modules (HSMs), showing they have a focus on customers who are either government agencies or have business with government agencies. This is not to say that their competitors are not suitable for government or government-adjacent solutions, but that Microsoft pushes that out of the gate as a key feature. Identity-managed access, auditability, differentiated vaults, and encryption at rest and in transit are all features they share with competitors.

As with most Microsoft products, it tries to be very Microsoft and will more than likely appeal to .NET developers who already use Microsoft tools and services. While it does offer a REST API, the selection of officially blessed client libraries (Java, .NET, Spring, Python, and JavaScript) is thinner than you'll find with AWS or GCP. As noted in the AWS and GCP entries, a big factor in your decision will be which cloud provider is getting your dominant investment of time and money. And if you're using Azure because you're a Microsoft shop with a strong investment in .NET, then the choice will be obvious.

Doppler

While CyberArk's Conjur (discussed above) started as a solo product that was acquired and integrated into a larger suite, Doppler currently remains a standalone key vault solution. That might be attractive for some because it's cloud-provider agnostic, coding-language agnostic, and has to compete on its merits instead of being the default secrets manager for a larger package of services. It offers logging, auditing, encryption at rest and in transit, and a list of integrations as long as your arm. Besides selling its abilities, it sells its SOC compliance and remediation functionalities on the front page. Dig deeper and that long list of integrations testifies to its usefulness with a wide variety of services, and its list of SDKs is more robust than Azure's.

It seems to rely strongly on injecting environment variables, which can make a lot of your coding easier at the cost of the environment variables potentially ending up in run logs or crash dumps. Understanding how the systems you're using it with treat environment variables, how they scope them, and the best ways to implement it with them will be part of the learning curve in adopting it.

Infisical

Like Doppler, Infisical uses environment variable injection. Similar to the Dotenv package for Node, when used in Node, it injects them at run time into the process object of the running app so they're not readable by any other processes or users. They can still be revealed by a crash dump or logging, so that is a caveat to consider in your code and build scripts.
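Both Doppler and Infisical lean on injected configuration; for the common case where that means plain environment variables, a minimal consumption sketch (with a hypothetical variable name) looks like this:

Java
public class DatabaseConfig {

    public static String databaseUrl() {
        // DATABASE_URL is a hypothetical variable injected by the secrets manager's CLI or agent.
        String url = System.getenv("DATABASE_URL");
        if (url == null || url.isBlank()) {
            throw new IllegalStateException("DATABASE_URL is not set; was the app started through the secrets injector?");
        }
        // Use the value directly; avoid logging it or echoing it into error messages and crash reports.
        return url;
    }
}

The caveat above still applies: anything that dumps the process environment (verbose logging, crash reporters, debug endpoints) can leak the value, so treat the environment itself as sensitive.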
Infisical offers other features besides a secrets vault, such as configuration sharing for developer teams and secrets scanning for your codebase, git history, and as a pre-commit hook. You might ask why someone writing for GitGuardian would mention a product with a competing feature. Aside from the scanning, their secrets and configuration vault/sharing model offers virtual secrets, over 20 cloud integrations, nine CI/CD integrations, over a dozen framework integrations, and SDKs for four programming languages. Their software is mostly open-source, and there is a free tier, but features like audit logs, RBAC, and secrets rotation are only available to paid subscribers.

Akeyless

Akeyless goes all out on features, providing a wide variety of authentication and authorization methods for how the keys and secrets it manages can be accessed. It supports standards like RBAC and OIDC as well as third-party services like AWS IAM and Microsoft Active Directory. It keeps up with the competition in providing encryption at rest and in transit, real-time access to secrets, short-lived secrets and keys, automated rotation, and auditing. It also provides features like just-in-time zero trust access, a password manager for browser-based access control, and password sharing with short-lived, auto-expiring passwords for third parties that can be tracked and audited. In addition to 14 different authentication options, it offers seven different SDKs and dozens of integrations for platforms ranging from Azure to MongoDB to Remote Desktop Protocol. They offer a reasonable free tier that includes three days of log retention (as opposed to other platforms where it's a paid feature only).

1Password

You might be asking, "Isn't that just a password manager for my browser?" If you think that's all they offer, think again. They offer consumer, developer, and enterprise solutions, and what we're going to look at is their developer-focused offering. Aside from zero-trust models, access control models, integrations, and even secret scanning, one of their claims that stands out on the developer page is "Go ahead – commit your .env files with confidence." This stands out because .env files committed to source control are a serious source of secret sprawl. So, how are they making that safe? You're not putting secrets into your .env files. Instead, you're putting references to your secrets that allow them to be loaded from 1Password using their services and access controls. This is somewhat ingenious as it combines a format a lot of developers know well with 1Password's access controls. It's not plug-and-play and requires a bit of a learning curve, but familiarity doesn't always breed contempt. Sometimes it breeds confidence.

While it has a limited number of integrations, it covers some of the biggest Kubernetes and CI/CD options. On top of that, it has dozens and dozens of "shell plugins" that help you secure local CLI access without having to store plaintext credentials in ~/.aws or another "hidden" directory. And yes, we mentioned they offer secrets scanning as part of their offering. Again, you might ask why someone writing for GitGuardian would mention a product with a competing feature.

HashiCorp Vault

HashiCorp Vault offers secrets management, key management, and more. It's a big solution with a lot of features and a lot of options. Besides encryption, role/identity-based secrets access, dynamic secrets, and secrets rotation, it offers data encryption and tokenization to protect data outside the vault.
It can act as an OIDC provider for back-end connections, and it sports a whopping seventy-five integrations in its catalog for the biggest cloud and identity providers. It's also one of the few to offer its own training and certification path if you want to add being HashiCorp Vault certified to your resume. It has a free tier for up to 25 secrets and limited features. Once you get past that, it can get pricey: cloud-hosted servers are billed at an hourly rate that can add up to monthly fees of $1,100 or more.

In Summary

Whether it's one of the solutions we recommended or another solution that meets our recommendations of what to look for above, we strongly recommend integrating a secrets management tool into your development processes. If you still need more convincing, we'll leave you with this video featuring GitGuardian's own Mackenzie Jackson.
Statelessness in RESTful applications poses challenges and opportunities, influencing how we manage fundamental security aspects such as authentication and authorization. This blog aims to delve into this topic, explore its impact, and offer insights into the best practices for handling stateless REST applications.

Understanding Statelessness in REST

REST, or REpresentational State Transfer, is an architectural style that defines a set of constraints for creating web services. One of its core principles is statelessness, which means that each request from a client to a server must contain all the information needed to understand and process the request. This model stands in contrast to stateful approaches, where the server stores user session data between requests.

The stateless nature of REST brings significant benefits, particularly in terms of scalability and reliability. By not maintaining state between requests, RESTful services can handle requests independently, allowing for more efficient load balancing and reduced server memory requirements. However, this approach introduces complexities in managing user authentication and authorization.

Authentication in Stateless REST Applications

Token-Based Authentication

The most common approach to handling authentication in stateless REST applications is through token-based methods, like JSON Web Tokens (JWT). In this model, the server generates a token that encapsulates the user's identity and attributes when they log in. This token is then sent to the client, which includes it in the HTTP header of subsequent requests. Upon receiving a request, the server decodes the token to verify the user's identity. Finally, the authorization service can make decisions based on the user's permissions.

// Example of a JWT token in an HTTP header
Authorization: Bearer <token>

OAuth 2.0

Another widely used framework is OAuth 2.0, particularly for applications requiring third-party access. OAuth 2.0 allows users to grant limited access to their resources from another service without exposing their credentials. It uses access tokens, providing layered security and enabling scenarios where an application needs to act on behalf of the user.

Authorization in Stateless REST Applications

Once authentication is established, the next challenge is authorization — checking whether the user has permission to perform the relevant actions on resources. Keeping REST applications stateless requires decoupling policy and code. In traditional stateful applications, authorization decisions are made in imperative code statements that clutter the application logic and rely on the state of the request. In a stateless application, policy logic should be separated from the application code and defined separately as policy code (using policy-as-code engines and languages), thus keeping the application logic stateless. Here are some examples of stateless implementations of common policy models:

Role-Based Access Control (RBAC)

Role-Based Access Control (RBAC) is a common pattern where users are assigned roles that dictate the access level a user has to resources. When decoupling policy from the code, the engine syncs the user roles from the identity provider. By providing the JWT with the identity, the policy engine can return a decision on whether a role is allowed to perform the action or not.
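To illustrate the decoupling in application code, here is a minimal sketch of a stateless permission check that forwards the caller's JWT plus the requested action and resource to an external policy decision point over HTTP. The endpoint URL and the request/response shape are assumptions for illustration, not any specific engine's API:

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PolicyClient {

    private final HttpClient http = HttpClient.newHttpClient();

    // Hypothetical policy decision point; in practice this is your policy engine's endpoint.
    private static final String PDP_URL = "http://policy-engine.internal/v1/allow";

    public boolean isAllowed(String bearerToken, String action, String resource) throws Exception {
        // The request carries everything needed for a decision, so the application itself stays stateless.
        String body = String.format("{\"action\":\"%s\",\"resource\":\"%s\"}", action, resource);

        HttpRequest request = HttpRequest.newBuilder(URI.create(PDP_URL))
                .header("Authorization", "Bearer " + bearerToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());

        // Assumed response convention: the engine returns {"allow":true} or {"allow":false}.
        return response.statusCode() == 200 && response.body().contains("\"allow\":true");
    }
}

The same pattern works whether the engine evaluates roles (RBAC) or richer attributes (ABAC): the application only asks the question and enforces the answer.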
Attribute-Based Access Control (ABAC)

A more dynamic approach is Attribute-Based Access Control (ABAC), which evaluates a set of policies against the attributes of users, resources, and the environment. This model offers more granular control and flexibility, which is particularly useful in complex systems with varying access requirements. To keep REST applications stateless, it is necessary to declare these policies in a separate code base as well as ensure that data synchronization with the engine is stateless.

Relationship-Based Access Control (ReBAC)

In applications where data privacy is of top importance and users can take ownership of their data by declaring relationships, using a centralized relationship graph outside of the REST application is necessary to maintain the statelessness of the application logic. A well-crafted implementation of an authorization service will have the application issue a stateless check call with the identity and the resource instance. The authorization service then evaluates it against the stateful graph kept separate from the application.

Security Considerations in Stateless Authentication and Authorization

Handling Token Security

In stateless REST applications, token security is critical, and developers must ensure that tokens are encrypted and transmitted securely. The use of HTTPS is mandatory to prevent token interception. Additionally, token expiration mechanisms must be implemented to reduce the risk of token hijacking. It's a common practice to have short-lived access tokens and longer-lived refresh tokens to balance security and user convenience.

Preventing CSRF and XSS Attacks

Cross-Site Request Forgery (CSRF) and Cross-Site Scripting (XSS) are two prevalent security threats in web applications. Using tokens instead of cookies in stateless REST APIs can inherently mitigate CSRF attacks, as the browser does not automatically send the token. However, developers must still be vigilant about XSS attacks, which can compromise token security. Implementing Content Security Policy (CSP) headers and sanitizing user input are effective strategies against XSS.

Performance Implications

Caching Strategies

Statelessness in REST APIs poses unique challenges for caching, as user-specific data cannot be stored on the server. Leveraging HTTP cache headers effectively allows clients to cache responses appropriately, reducing the load on the server and improving response times. ETag headers and conditional requests can optimize bandwidth usage and enhance overall application performance.

Load Balancing and Scalability

Stateless applications are inherently more scalable as they allow for straightforward load balancing. Since there's no session state tied to a specific server, any server can handle any request. This property enables seamless horizontal scaling, which is essential for applications anticipating high traffic volumes.

Conclusion: Balancing Statelessness With Practicality

Implementing authentication and authorization in stateless REST applications involves a careful balance between security, performance, and usability. While statelessness offers numerous advantages in terms of scalability and simplicity, it also necessitates robust security measures and thoughtful system design. The implications of token-based authentication, access control mechanisms, security threats, and performance strategies must be considered to build effective and secure RESTful services.
In fintech applications, whether mobile or web, deploying new features in areas like loan applications requires careful validation. Traditional testing with real user data, especially personally identifiable information (PII), presents significant challenges. Synthetic transactions offer a solution, enabling the thorough testing of new functionality in a secure and controlled environment without compromising sensitive data. By simulating realistic user interactions within the application, synthetic transactions enable developers and QA teams to identify potential issues in a controlled environment. Synthetic transactions help ensure that every aspect of a financial application functions correctly after any major updates or new features are rolled out. In this article, we delve into one approach for using synthetic transactions.

Synthetic Transactions for Financial Applications

Key Business Entity

At the heart of every financial application lies a key entity, be it a customer, user, or the loan application itself. This entity is often defined by a unique identifier, serving as the cornerstone for transactions and operations within the system. The inception point of this entity, when it is first created, presents a strategic opportunity to categorize it as either synthetic or real. This categorization is critical, as it determines the nature of interactions the entity will undergo. Marking an entity as synthetic or for test purposes from the outset allows for a clear delineation between test and real data within the application's ecosystem. Subsequently, all transactions and operations conducted with this entity can be safely recognized as part of synthetic transactions. This approach ensures that the application's functionality can be thoroughly tested in a realistic environment.

Intercepting and Managing Synthetic Transactions

A critical component of implementing synthetic transactions lies in the interception and management of these transactions at the HTTP request level. Utilizing Spring's HTTP interceptor mechanism, we can discern and process synthetic transactions by examining specific HTTP headers. The visual below outlines the coordination between a synthetic HTTP interceptor and a state manager in managing the execution of an HTTP request:

Figure 1: Synthetic HTTP interceptor and state manager

The SyntheticTransactionInterceptor acts as the primary gatekeeper, ensuring that only transactions identified as synthetic are allowed through the testing pathways.
Below is the implementation:

Java
@Component
public class SyntheticTransactionInterceptor implements HandlerInterceptor {

    protected final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    SyntheticTransactionService syntheticTransactionService;

    @Autowired
    SyntheticTransactionStateManager syntheticTransactionStateManager;

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object object) throws Exception {
        String syntheticTransactionId = request.getHeader("x-synthetic-transaction-uuid");
        if (syntheticTransactionId != null && !syntheticTransactionId.isEmpty()) {
            if (this.syntheticTransactionService.validateTransactionId(syntheticTransactionId)) {
                logger.info(String.format("Request initiated for synthetic transaction with transaction id:%s", syntheticTransactionId));
                // Mark the current request as synthetic so downstream layers can branch accordingly.
                this.syntheticTransactionStateManager.setSyntheticTransaction(true);
                this.syntheticTransactionStateManager.setTransactionId(syntheticTransactionId);
            }
        }
        return true;
    }
}

In this implementation, the interceptor looks for a specific HTTP header (x-synthetic-transaction-uuid) carrying a UUID. This UUID is not just any identifier but a validated, time-limited key designated for synthetic transactions. The validation process includes checks on the UUID's validity, its lifespan, and whether it has been previously used, ensuring a level of security and integrity for the synthetic testing process.

After a synthetic ID is validated by the SyntheticTransactionInterceptor, the SyntheticTransactionStateManager plays a pivotal role in maintaining the synthetic context for the current request. The SyntheticTransactionStateManager is designed with request scope in mind, meaning its lifecycle is tied to the individual HTTP request. This scoping is essential for preserving the integrity and isolation of synthetic transactions within the application's broader operational context. By tying the state manager to the request scope, the application ensures that synthetic transaction states do not bleed over into unrelated operations or requests. Below is the implementation of the synthetic state manager:

Java
@Component
@RequestScope
public class SyntheticTransactionStateManager {

    private String transactionId;
    private boolean syntheticTransaction;

    public String getTransactionId() {
        return transactionId;
    }

    public void setTransactionId(String transactionId) {
        this.transactionId = transactionId;
    }

    public boolean isSyntheticTransaction() {
        return syntheticTransaction;
    }

    public void setSyntheticTransaction(boolean syntheticTransaction) {
        this.syntheticTransaction = syntheticTransaction;
    }
}

When we persist the key entity, be it a customer, user, or loan application, the application's service layer or repository layer consults the SyntheticTransactionStateManager to confirm the transaction's synthetic nature. If the transaction is indeed synthetic, the application proceeds to persist not only the synthetic identifier but also an indicator that the entity itself is synthetic. This sets the foundation for the synthetic transaction flow. This approach ensures that from the moment an entity is marked as synthetic, all related operations and future APIs, whether they involve data processing or business logic execution, are conducted in a controlled manner.
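One wiring detail worth noting: the interceptor only runs once it is registered with Spring MVC. A minimal registration sketch, assuming a standard Spring Web MVC setup (the configuration class name is illustrative), could look like this:

Java
@Configuration
public class WebMvcConfig implements WebMvcConfigurer {

    @Autowired
    SyntheticTransactionInterceptor syntheticTransactionInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Register the interceptor so preHandle() runs for incoming API requests.
        registry.addInterceptor(syntheticTransactionInterceptor);
    }
}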
For further API calls initiated from the financial application, upon reaching the microservice, we load the application context for that specific request based on the token or entity identifier provided. During the context loading, we ascertain whether the key business entity (e.g., loan application, user/customer) is synthetic. If affirmative, we then set the state manager's syntheticTransaction flag to true and also assign the synthetic transactionId from the application context. This approach negates the need to pass a synthetic transaction ID header for subsequent calls within the application flow. We only need to send a synthetic transaction ID during the initial API call that creates the key business entity. Since this step involves using explicit headers that may not be supported by the financial application, whether it's a mobile or web platform, we can manually make this first API call with Postman or a similar tool. Afterwards, the application can continue with the rest of the flow in the financial application itself.

Beyond managing synthetic transactions within the application, it's also crucial to consider how external third-party API calls behave within the context of a synthetic transaction.

External Third-Party API Interactions

In financial applications handling key entities with personally identifiable information (PII), we conduct validations and fraud checks on user-provided data, often leveraging external third-party services. These services are crucial for tasks such as PII validation and credit bureau report retrieval. However, when dealing with synthetic transactions, we cannot make calls to these third-party services. The solution involves creating mock responses or utilizing stubs for these external services during synthetic transactions. This approach ensures that while synthetic transactions undergo the same processing logic as real transactions, they do so without the need for actual data submission to third-party services. Instead, we simulate the responses that these services would provide if they were called with real data. This allows us to thoroughly test the integration points and data-handling logic of our application. Below is the code snippet for pulling the bureau report.
This call happens as part of the API call where the key entity is created, and then subsequently we pull the applicant's bureau report:

Java
@Override
@Retry(name = "BUREAU_PULL", fallbackMethod = "getBureauReport_Fallback")
public CreditBureauReport getBureauReport(SoftPullParams softPullParams, ErrorsI error) {
    CreditBureauReport result = null;
    try {
        Date dt = new Date();
        logger.info("UWServiceImpl::getBureauReport method call at :" + dt.toString());
        CreditBureauReportRequest request = this.getCreditBureauReportRequest(softPullParams);
        RestTemplate restTemplate = this.externalApiRestTemplateFactory.getRestTemplate(softPullParams.getUserLoanAccountId(), "BUREAU_PULL",
                softPullParams.getAccessToken(), "BUREAU_PULL", error);
        HttpHeaders headers = this.getHttpHeaders(softPullParams);
        HttpEntity<CreditBureauReportRequest> entity = new HttpEntity<>(request, headers);
        long startTime = System.currentTimeMillis();
        String uwServiceEndPoint = "/transaction";
        String bureauCallUrl = String.format("%s%s", appConfig.getUnderwritingTransactionApiPrefix(), uwServiceEndPoint);
        if (syntheticTransactionStateManager.isSyntheticTransaction()) {
            // Synthetic flow: load a stored payload instead of calling the external bureau service.
            result = this.syntheticTransactionService.getPayLoad(syntheticTransactionStateManager.getTransactionId(), "BUREAU_PULL", CreditBureauReportResponse.class);
            result.setCustomerId(softPullParams.getUserAccountId());
            result.setLoanAccountId(softPullParams.getUserLoanAccountId());
        } else {
            // Real flow: call the third-party bureau service.
            ResponseEntity<CreditBureauReportResponse> responseEntity = restTemplate.exchange(bureauCallUrl, HttpMethod.POST, entity, CreditBureauReportResponse.class);
            result = responseEntity.getBody();
        }
        long endTime = System.currentTimeMillis();
        long timeDifference = endTime - startTime;
        logger.info("Time taken for API call BUREAU_PULL/getBureauReport call 1: " + timeDifference);
    } catch (HttpClientErrorException exception) {
        logger.error("HttpClientErrorException occurred while calling BUREAU_PULL API, response string: " + exception.getResponseBodyAsString());
        throw exception;
    } catch (HttpStatusCodeException exception) {
        logger.error("HttpStatusCodeException occurred while calling BUREAU_PULL API, response string: " + exception.getResponseBodyAsString());
        throw exception;
    } catch (Exception ex) {
        logger.error("Error occurred in getBureauReport. Detail error:", ex);
        throw ex;
    }
    return result;
}

The code snippet above is quite elaborate, but we don't need to get into its details. What we need to focus on is the snippet below:

Java
if (syntheticTransactionStateManager.isSyntheticTransaction()) {
    result = this.syntheticTransactionService.getPayLoad(syntheticTransactionStateManager.getTransactionId(), "BUREAU_PULL", CreditBureauReportResponse.class);
    result.setCustomerId(softPullParams.getUserAccountId());
    result.setLoanAccountId(softPullParams.getUserLoanAccountId());
} else {
    ResponseEntity<CreditBureauReportResponse> responseEntity = restTemplate.exchange(bureauCallUrl, HttpMethod.POST, entity, CreditBureauReportResponse.class);
    result = responseEntity.getBody();
}

It checks whether the transaction is synthetic via the SyntheticTransactionStateManager. If it is, then instead of going to the third party, it calls the internal SyntheticTransactionService to get the synthetic bureau report data.

Synthetic Data Service

The synthetic data service SyntheticTransactionServiceImpl is a general utility service whose responsibility is to pull the synthetic data from the data store, parse it, and convert it to the object type that is passed as a parameter.
Below is the implementation for the service:

Java
@Service
@Qualifier("syntheticTransactionServiceImpl")
public class SyntheticTransactionServiceImpl implements SyntheticTransactionService {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    SyntheticTransactionRepository syntheticTransactionRepository;

    @Override
    public <T> T getPayLoad(String transactionUuid, String extPartnerServiceType, Class<T> responseType) {
        T payload = null;
        try {
            SyntheticTransactionPayload syntheticTransactionPayload = this.syntheticTransactionRepository.getSyntheticTransactionPayload(transactionUuid, extPartnerServiceType);
            if (syntheticTransactionPayload != null && syntheticTransactionPayload.getPayload() != null) {
                ObjectMapper objectMapper = new ObjectMapper()
                        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
                payload = objectMapper.readValue(syntheticTransactionPayload.getPayload(), responseType);
            }
        } catch (Exception ex) {
            logger.error("An error occurred while getting the synthetic transaction payload, detail error:", ex);
        }
        return payload;
    }

    @Override
    public boolean validateTransactionId(String transactionId) {
        boolean result = false;
        try {
            if (transactionId != null && !transactionId.isEmpty()) {
                if (UUID.fromString(transactionId).toString().equalsIgnoreCase(transactionId)) {
                    // Removed other validation checks; these could be financial-application-specific checks.
                    result = true;
                }
            }
        } catch (Exception ex) {
            logger.error("SyntheticTransactionServiceImpl::validateTransactionId - An error occurred while validating the synthetic transaction id, detail error:", ex);
        }
        return result;
    }
}

With the generic method getPayLoad(), we provide a high degree of reusability, capable of returning various types of synthetic responses. This reduces the need for multiple, specific mock services for different external interactions. For storing the different payloads for different types of external third-party services, we use a generic table structure as below:

MySQL
CREATE TABLE synthetic_transaction (
  id int NOT NULL AUTO_INCREMENT,
  transaction_uuid varchar(36),
  ext_partner_service varchar(30),
  payload mediumtext,
  create_date datetime DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
);

ext_partner_service: This is the external service identifier for which we pull the payload from the table. In the above example for the bureau report, it would be BUREAU_PULL.

Conclusion

In our exploration of synthetic transactions within fintech applications, we've highlighted their role in enhancing the reliability and integrity of fintech solutions. By leveraging synthetic transactions, we simulate realistic user interactions while circumventing the risks tied to handling real personally identifiable information (PII). This approach enables our developers and QA teams to rigorously test new functionalities and updates in a secure, controlled environment.

Moreover, our strategy of integrating synthetic transactions through mechanisms such as HTTP interceptors and state managers showcases a versatile approach applicable across a wide array of applications. This method not only simplifies the incorporation of synthetic transactions but also significantly boosts reusability, alleviating the need to devise unique workflows for each third-party service interaction. This approach significantly enhances the reliability and security of financial application solutions, ensuring that new features can be deployed with confidence.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures.

The era of digital transformation has brought about the need for faster, more efficient, and more secure software development processes. Enter DevSecOps: a philosophy that integrates security practices into DevOps processes and aims to embed security into every stage of the development lifecycle — from the writing of code to application deployment in production. The incorporation of DevSecOps can lead to numerous benefits such as early identification of vulnerabilities, cost savings, and faster delivery times.

Shift-Left Principle

The term "shift left" refers to shifting the focus on security checks and controls toward the beginning, or "left," of the software development lifecycle (SDLC). Traditionally, security checks were performed toward the end, or "right," of the SDLC, often leading to vulnerabilities being detected late in the process, when the application is already deployed in production and such vulnerabilities are more expensive and time-consuming to fix. The shift-left principle offers numerous benefits:

- Early detection of vulnerabilities – By integrating security checks earlier in the SDLC, vulnerabilities can be detected and addressed sooner. This reduces the risk of security breaches and ensures a more secure product.
- Reduced costs – Addressing security issues late in the development process can be costly. By shifting left, these issues are identified and repaired early, reducing the associated costs and resources required.
- Improved compliance – With security integrated from the outset, it's easier to ensure compliance with industry regulations and standards.
- Enhanced product quality – A product built with security in mind from the beginning is likely to be of higher quality with fewer bugs and vulnerabilities.
- Faster time to market – By reducing the time spent on fixing security issues at later stages, products can be delivered to the market faster.

This integration ensures that testing becomes an intrinsic part of the development organization's DNA, fostering a culture where software is meticulously crafted with quality considerations ingrained from the inception of the project.

Figure 1. Shifting security controls to the left

Key Considerations for DevSecOps Implementation

Implementing DevSecOps successfully requires careful consideration of key factors that contribute to a secure and efficient development pipeline. This integration of DevSecOps into the CI/CD pipeline allows for early detection of security issues, reducing the likelihood of vulnerabilities making their way into production while also allowing developers to quickly fix these issues and learn how to avoid reproducing them in the future.

Automated Security Testing Tools

Because applications come in different forms (e.g., mobile, web, thick client, containerized), you may need to set up different types of controls — and even different types of tooling to secure each component of your application. Let's review the main types of tests you should use.

Static Application Security Testing

Static application security testing (SAST) tools analyze an application's source code (the code written by your developers) for potential vulnerabilities without executing the program. By scanning the codebase during the development phase, SAST provides developers with insights into security flaws and coding errors.
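As a concrete, deliberately simplified illustration, the following hypothetical snippet builds a SQL query through string concatenation, which is exactly the kind of pattern a SAST tool flags as a potential injection point; the parameterized variant shows the usual fix (the users table and class name are illustrative):

Java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable: user input is concatenated straight into the SQL statement.
    public ResultSet findUser(Connection connection, String username) throws SQLException {
        Statement statement = connection.createStatement();
        return statement.executeQuery(
                "SELECT * FROM users WHERE username = '" + username + "'"); // typical SAST finding: SQL injection
    }

    // Safer: a parameterized query keeps the input out of the SQL grammar.
    public ResultSet findUserSafely(Connection connection, String username) throws SQLException {
        PreparedStatement statement = connection.prepareStatement(
                "SELECT * FROM users WHERE username = ?");
        statement.setString(1, username);
        return statement.executeQuery();
    }
}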
A good SAST tool can detect code smells as well as bad practices that could lead to vulnerabilities such as SQL or path injection, buffer overflows, XSS, and missing input validation.

Software Composition Analysis

Software composition analysis (SCA) is critical for identifying and managing security risks associated with open-source components used in software development, generally coming from additional packages (e.g., npm packages for JavaScript, NuGet for .NET, Maven, gems). Most developers load a package when they need one but never check if the package has a known vulnerability. An SCA tool will warn you when your application is using a vulnerable package as well as when a fix already exists but you are not using the fixed version of the dependency.

Dynamic Application Security Testing

Dynamic application security testing (DAST) tools assess applications in their running state, simulating real-world attacks to identify vulnerabilities. By incorporating DAST into the testing process, DevSecOps teams can uncover security weaknesses that may not be apparent during static analysis. A DAST tool acts like a fully automated penetration testing tool that tests for major known vulnerabilities (OWASP) and for a lot of other bad practices such as information leaks/exposure.

Interactive Application Security Testing

Interactive application security testing (IAST) tooling is the combination of a DAST tool and a SAST tool: by allowing access to the source code ("gray boxing"), it helps the DAST perform better and also limits the number of false positives. IAST is super efficient but more challenging to set up because it tends to test each application deeply.

Container Scanner

Containers offer agility and scalability yet also introduce unique security challenges. If your application is containerized, you must perform additional controls. Mainly, scanners will analyze your Dockerfile to check if the base image contains known vulnerabilities, and they will also look for bad practices such as running as root, using the "latest" tag, or exposing dangerous ports. The following Dockerfile example contains at least three bad practices, and it may contain a vulnerability in the Node.js base image:

Shell
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000 22
HEALTHCHECK CMD curl --fail http://localhost:3000 || exit 1
CMD ["node","app.js"]

Infrastructure-as-Code Scanner

Infrastructure as Code (IaC) allows organizations to manage and provision infrastructure through code, bringing the benefits of version control and automation to the infrastructure layer. IaC scanning ensures that infrastructure code undergoes rigorous security controls, such as validating configurations, following best practices, scanning for security misconfigurations, and enforcing security policies throughout the infrastructure deployment process.

Secrets Scanner

A secret (e.g., API key, password, connection string for a database) should never be stored in the source code (hard-coded) or in a configuration file kept within the code repository, because a hacker gaining access to the code could then access production and/or other critical environments. Secrets scanners can detect 150+ types of secrets that developers could leave in the code, and once a secret has been stored in the code (commit), it should be considered "compromised" and revoked immediately.
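To make that last point concrete, here is a simplified, hypothetical example of the kind of hard-coded credential a secrets scanner would flag, alongside a version that resolves the value at runtime from the environment or a secrets manager; the key and variable names are illustrative:

Java
public class PaymentsApiConfig {

    // Typical secrets-scanner finding: a (fake, illustrative) API key committed to the repository.
    private static final String HARDCODED_API_KEY = "API_KEY_1234567890abcdef";

    // Preferred: resolve the secret at runtime so it never lands in the codebase or git history.
    public static String apiKey() {
        String key = System.getenv("PAYMENTS_API_KEY");
        if (key == null || key.isBlank()) {
            throw new IllegalStateException("PAYMENTS_API_KEY is not configured");
        }
        return key;
    }
}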
Criteria to select the right third-party product:

SAST
- Number of languages supported, ideally one tool for all code
- Accuracy of detection
- Dashboard to customize analysis with sets of rules

SCA
- Number of packages recognized
- Automated remediation (can create a pull request with the updated package)

DAST
- Should be able to cover APIs as well as GUI apps
- Covers more than just OWASP

IAST
- Capable of covering rich applications (e.g., with microservices)
- Offers remediation/advice to fix detected issues

Container Scanner
- Up-to-date CVE database for the base image
- Can lint a Dockerfile and check best practices

IaC Scanner
- Finds issues in template files
- Supports the format of your cloud provider (e.g., ARM + Bicep for Azure, CloudFormation for AWS, Deployment Manager for Google Cloud) or Terraform if you are using it

Secrets Scanner
- The number of credential types recognized
- A dashboard that allows security teams to monitor detected secrets and ensure they have been revoked
- Custom rules to prevent false positives and/or add new formats

Establishing Security Gates in the CI/CD Pipeline

Analysis tools are a good start, but they are useless if they are not part of a global governance model. This governance must be built on well-defined security policies and on mandatory controls to ensure that the organization's data and systems are consistently protected against potential threats and vulnerabilities.

Defining and Enforcing Security Policies

Effective security in a CI/CD pipeline begins with the definition of clear and project-specific security policies. These policies should be tailored to the unique requirements and risks associated with each project. Whether it's compliance standards, data protection regulations, or industry-specific security measures (e.g., PCI DSS, HDS, FedRAMP), organizations need to define and enforce policies that align with their security objectives. Once security policies are defined, automation plays a crucial role in their enforcement. Automated tools can scan code, infrastructure configurations, and deployment artifacts to ensure compliance with established security policies. This automation not only accelerates the security validation process but also reduces the likelihood of human error, ensuring consistent and reliable enforcement.

Integration of Security Gates

In the DevSecOps paradigm, the integration of security gates within the CI/CD pipeline is pivotal to ensuring that security measures are an inherent part of the software development lifecycle. If you set up security scans or controls that users can bypass, those methods become totally useless — you want them to be mandatory. Security gates act as checkpoints throughout the CI/CD pipeline, ensuring that each stage adheres to predefined security standards. By integrating automated security checks at key points, such as code commits, build processes, and deployment stages, organizations can identify and address security issues in a systematic and timely manner. These gated controls can take different forms:

- Automated security controls (e.g., SAST, SCA, CredScan)
- Manual approval (e.g., code review)
- Manual testing (e.g., pen testing by specialized teams)
- Performance testing
- Quality (e.g., a query that monitors the number of defects opened in your quality tracking tool)
Figure 2. Standard DevSecOps pipeline with gated security controls

Continuous Monitoring and Feedback

In the fast-paced world of software development, the importance of real-time security monitoring and quick fixes cannot be overstated because, even with gated controls, vulnerabilities can be found after an application has been deployed in production.

Real-Time Monitoring for Security

Real-time monitoring allows teams to proactively detect and respond to security threats as they emerge. By leveraging automated tools and advanced analytics, organizations can continuously monitor their applications, infrastructure, and networks for potential vulnerabilities or suspicious activities. This proactive approach not only enhances security but also minimizes the risk of security breaches and data compromises. It also gives teams comprehensive visibility across the entire technology stack. DevSecOps teams can track and analyze security metrics at every layer, from application code to production environments. This visibility enables quick identification of security gaps and facilitates the implementation of targeted remediation measures, ensuring a robust defense against evolving cyber threats.

Addressing Security Findings and Adapting Processes

Identifying security findings is only the first step; effective DevSecOps requires a proactive approach to address and remediate these issues promptly. When security findings are identified, cross-functional teams work together to assess the impact, prioritize remediation tasks, and implement corrective measures. This collaborative effort ensures that security is everyone's responsibility and not just confined to a specific silo within the organization. Adaptability is a core tenet of DevSecOps. Organizations must foster a culture of continuous learning, where security teams regularly update their knowledge, processes, and tools based on evolving threats and industry best practices. This adaptive mindset ensures that security measures remain effective in the face of new challenges and that the DevSecOps pipeline is continually refined for optimal security outcomes.

Conclusion

As software development processes continue to evolve, the need for robust security measures within the CI/CD pipeline becomes more critical. Embracing a DevSecOps approach can help organizations create a secure, efficient, and reliable CI/CD pipeline. By prioritizing security from the get-go and implementing security gates, organizations can save resources, reduce risk, and ultimately deliver better, safer products to the market. Go and make security the foundation of your products!

This is an excerpt from DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. For more: Read the Report