The final step in the SDLC, and arguably the most crucial, is the testing, deployment, and maintenance of development environments and applications. DZone's category for these SDLC stages serves as the culmination of application planning, design, and coding. The Zones in this category offer invaluable insights to help developers test, observe, deliver, deploy, and maintain their development and production environments.
In the SDLC, deployment is the final lever that must be pulled to make an application or system ready for use. Whether it's a bug fix or new release, the deployment phase is the culminating event to see how something works in production. This Zone covers resources on all developers’ deployment necessities, including configuration management, pull requests, version control, package managers, and more.
The cultural movement that is DevOps — which, in short, encourages close collaboration among developers, IT operations, and system admins — also encompasses a set of tools, techniques, and practices. As part of DevOps, the CI/CD process incorporates automation into the SDLC, allowing teams to integrate and deliver incremental changes iteratively and at a quicker pace. Together, these human- and technology-oriented elements enable smooth, fast, and quality software releases. This Zone is your go-to source on all things DevOps and CI/CD (end to end!).
A developer's work is never truly finished once a feature or change is deployed. There is always a need for constant maintenance to ensure that a product or application continues to run as it should and is configured to scale. This Zone focuses on all your maintenance must-haves — from ensuring that your infrastructure is set up to manage various loads and improving software and data quality to tackling incident management, quality assurance, and more.
Modern systems span numerous architectures and technologies and are becoming exponentially more modular, dynamic, and distributed in nature. These complexities also pose new challenges for developers and SRE teams that are charged with ensuring the availability, reliability, and successful performance of their systems and infrastructure. Here, you will find resources about the tools, skills, and practices to implement for a strategic, holistic approach to system-wide observability and application monitoring.
The Testing, Tools, and Frameworks Zone encapsulates one of the final stages of the SDLC as it ensures that your application and/or environment is ready for deployment. From walking you through the tools and frameworks tailored to your specific development needs to leveraging testing practices to evaluate and verify that your product or application does what it is required to do, this Zone covers everything you need to set yourself up for success.
DevOps
The DevOps movement has paved the way for CI/CD and streamlined application delivery and release orchestration. These nuanced methodologies have not only increased the scale and speed at which we release software, but also redistributed responsibilities onto the developer and led to innovation and automation throughout the SDLC.

DZone's 2023 DevOps: CI/CD, Application Delivery, and Release Orchestration Trend Report explores these derivatives of DevOps by diving into how AIOps and MLOps practices affect CI/CD, the proper way to build an effective CI/CD pipeline, strategies for source code management and branching for GitOps and CI/CD, and more. Our research builds on previous years with its focus on the challenges of CI/CD, a responsibility assessment, and the impact of release strategies, to name a few. The goal of this Trend Report is to provide developers with the information they need to further innovate on their integration and delivery pipelines.
Queuing Theory for Non-Functional Testing
Getting Started With OpenTelemetry
Unit testing is an essential practice in software development that involves testing individual codebase components to ensure they function correctly. In Spring-based applications, developers often use Aspect-Oriented Programming (AOP) to separate cross-cutting concerns, such as logging, from the core business logic, enabling modularization and cleaner code. However, testing aspects in Spring AOP poses unique challenges due to their interception-based nature, so developers need to employ appropriate strategies and best practices to unit test them effectively.

This guide aims to provide developers with detailed, practical insights into unit testing Spring AOP aspects. It covers the basics of AOP, testing pointcut expressions, and testing around, before, after, after returning, after throwing, and introduction advice, with sample Java code throughout. By following its recommendations, developers can improve the quality of their Spring-based applications and ensure that their code is robust, reliable, and maintainable.

Understanding Spring AOP

Before implementing effective unit testing strategies, it is important to have a solid understanding of Spring AOP. AOP, or Aspect-Oriented Programming, is a programming paradigm that enables the separation of cross-cutting concerns shared across different modules in an application. Spring AOP is a widely used aspect-oriented framework, implemented primarily through runtime proxy-based mechanisms, whose objective is to provide modularity and flexibility when designing and implementing cross-cutting concerns in a Java-based application. The key concepts to understand in Spring AOP are:

Aspect: A module that encapsulates cross-cutting concerns applied across multiple objects in an application. Aspects are defined using aspect-oriented programming techniques and are typically independent of the application's core business logic.
Join point: A point in the application's execution where an aspect can be applied. In Spring AOP, a join point can be a method execution, an exception handler, or a field access.
Advice: An action taken when a join point is reached during the application's execution. In Spring AOP, advice can run before, after, or around a join point.
Pointcut: A set of join points where an aspect's advice should be applied. In Spring AOP, pointcuts are defined using expressions that select join points based on method signatures, annotations, or other criteria.

By understanding these key concepts, developers can effectively design and implement cross-cutting concerns in a Java-based application using Spring AOP; the sketch below shows all of them in a single aspect.
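To make the terminology concrete, here is a minimal illustrative aspect. The names (AuditAspect, the com.example.service package) are assumptions for the sake of the example, not code from this guide; the annotations are the standard Spring AOP/AspectJ ones.

Java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.*;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class AuditAspect {

    // Pointcut: selects every public method in the (hypothetical) service package
    @Pointcut("execution(public * com.example.service.*.*(..))")
    public void serviceLayer() {}

    // Advice that runs before each selected join point
    @Before("serviceLayer()")
    public void logCall(JoinPoint jp) {
        System.out.println("Calling " + jp.getSignature());
    }

    // Advice that runs only when the method returns normally
    @AfterReturning(pointcut = "serviceLayer()", returning = "result")
    public void logResult(JoinPoint jp, Object result) {
        System.out.println(jp.getSignature() + " returned " + result);
    }

    // Advice that runs only when the method throws
    @AfterThrowing(pointcut = "serviceLayer()", throwing = "ex")
    public void logFailure(JoinPoint jp, Exception ex) {
        System.out.println(jp.getSignature() + " failed: " + ex.getMessage());
    }

    // Around advice wraps the join point and controls whether it proceeds
    @Around("serviceLayer()")
    public Object time(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();
        } finally {
            System.out.println(pjp.getSignature() + " took " + (System.nanoTime() - start) + " ns");
        }
    }
}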
Challenges in Testing Spring AOP Aspects

Unit testing Spring AOP aspects can be more challenging than testing regular Java classes, due to the unique nature of AOP aspects. Some of the key challenges include:

Interception-based behavior: Aspects intercept method invocations or join points, which makes it difficult to test their behavior in isolation. To overcome this, use mock objects to simulate the behavior of the intercepted objects.
Dependency injection: Aspects may rely on dependencies injected by the Spring container, which require special handling during testing. Ensure these dependencies are properly mocked or stubbed so that the aspect is tested in isolation, unaffected by other components.
Dynamic proxying: Spring AOP relies on dynamic proxies, which makes it difficult to directly instantiate and test aspects. Use Spring's built-in support for creating and configuring dynamic proxies instead.
Complex pointcut expressions: Pointcut expressions can be intricate, making it hard to guarantee that advice is applied to the correct join points. Combine unit tests and integration tests to confirm that the aspect is applied where intended.
Transaction management: Aspects may interact with transaction management, introducing additional complexity in testing. Combine mock objects and integration tests to confirm that the aspect works correctly within the context of the application.

Despite these challenges, effective unit testing of Spring AOP aspects is crucial for ensuring the reliability, maintainability, and correctness of the application.

Strategies for Unit Testing Spring AOP Aspects

Unit testing Spring AOP aspects can be challenging given the complexity involved and the multiple pieces of advice a single aspect may carry, but several strategies and best practices make it tractable. The most crucial one is to isolate aspects from their dependencies when writing unit tests. This isolation ensures that the tests focus solely on the aspect's behavior, without interference from other modules. Mocking frameworks such as Mockito, EasyMock, or PowerMockito let developers simulate dependencies' behavior and control the test environment.

Another best practice is to test each piece of advice separately. Aspects typically consist of multiple pieces of advice, such as "before," "after," or "around" advice; testing each one separately verifies that it behaves correctly in isolation.

It is also essential to verify that pointcut expressions are correctly configured and target the intended join points. Writing tests that exercise different scenarios helps ensure the correctness of pointcut expressions (a minimal sketch follows this section).

Aspects in Spring-based applications often rely on beans managed by the ApplicationContext. Mocking the ApplicationContext allows developers to provide controlled dependencies to the aspect during testing, avoiding the need for a fully initialized Spring context. Developers should also define clear expectations for the aspect's behavior and use assertions to verify that it behaves as expected under different conditions.

Finally, if aspects involve transaction management, consider testing transactional behavior separately. This can be accomplished by mocking transaction managers or by using in-memory databases to isolate the transactional part of the code. By employing these strategies and best practices, developers can unit test Spring AOP aspects effectively, resulting in robust and reliable systems.
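A lightweight way to test pointcut expressions directly is Spring's AspectJExpressionPointcut, which can evaluate an expression against methods reflectively, without bootstrapping a Spring context (it needs the AspectJ weaver on the test classpath). A minimal sketch, where OrderService is assumed to live in com.example.service and PriceHelper in some other package — both names are hypothetical:

Java
import java.lang.reflect.Method;

import org.junit.jupiter.api.Test;
import org.springframework.aop.aspectj.AspectJExpressionPointcut;

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class PointcutExpressionTest {

    @Test
    void adviceTargetsOnlyServiceMethods() throws NoSuchMethodException {
        AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
        pointcut.setExpression("execution(* com.example.service.*.*(..))");

        // A method on a class inside the service package should match ...
        Method inScope = OrderService.class.getMethod("placeOrder");
        assertTrue(pointcut.matches(inScope, OrderService.class));

        // ... while a method on a class outside that package should not.
        Method outOfScope = PriceHelper.class.getMethod("round");
        assertFalse(pointcut.matches(outOfScope, PriceHelper.class));
    }
}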
Sample Code: Testing a Logging Aspect

To gain a better understanding of testing Spring AOP aspects, let's take a closer look at some sample code, analyzing the testing process step by step and emphasizing the important factors. Assume we are writing unit tests for the following class:

Java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    @Before("execution(* com.example.service.*.*(..))")
    public void logBefore(JoinPoint joinPoint) {
        System.out.println("Logging before " + joinPoint.getSignature().getName());
    }
}

The LoggingAspect class logs method executions with a single advice method, logBefore, which executes before methods in the com.example.service package. The LoggingAspectTest class contains unit tests for the LoggingAspect. Let's examine each part of the test method testLogBefore() in detail:

Java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.Signature;
import org.junit.jupiter.api.Test;

import static org.mockito.Mockito.*;

public class LoggingAspectTest {

    @Test
    void testLogBefore() {
        // Given
        LoggingAspect loggingAspect = new LoggingAspect();

        // Creating mock objects
        JoinPoint joinPoint = mock(JoinPoint.class);
        Signature signature = mock(Signature.class);

        // Configuring mock behavior
        when(joinPoint.getSignature()).thenReturn(signature);
        when(signature.getName()).thenReturn("methodName");

        // When
        loggingAspect.logBefore(joinPoint);

        // Then
        // Verifying interactions with mock objects
        verify(joinPoint, times(1)).getSignature();
        verify(signature, times(1)).getName();

        // Additional assertions can be added to ensure correct logging behavior
    }
}

Several sections of this test play a vital role. First, the Given section sets up the test scenario: we create an instance of the LoggingAspect and mock the JoinPoint and Signature objects so we can control their behavior during testing. The mocks are created with the Mockito framework, which lets us simulate behavior without invoking real instances, providing a controlled environment for testing. We then use Mockito's when() method to specify the behavior of the mock objects; for example, we define that when the getSignature() method of the JoinPoint is called, it should return the mock Signature we created earlier.

In the When section, we invoke the logBefore() method of the LoggingAspect with the mocked JoinPoint. This simulates the execution of the advice before a method call, which triggers the logging behavior.

Finally, we use Mockito's verify() method to assert that specific methods of the mocked objects were called during the execution of the advice; here, we verify that getSignature() and getName() were each called once. Although not demonstrated in this simplified example, additional assertions can be added to ensure the correctness of the aspect's behavior. For instance, we could assert that the logging message produced by the aspect matches the expected format and content.
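The same pattern extends to around advice, which this guide lists among the advice types to test. A minimal sketch — the TimingAspect below is hypothetical, not part of the sample above: mock the ProceedingJoinPoint, stub proceed(), and verify that the advice both delegates to the target and propagates its return value.

Java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

public class TimingAspectTest {

    // Hypothetical aspect under test, declared inline for brevity
    @Aspect
    static class TimingAspect {
        @Around("execution(* com.example.service.*.*(..))")
        public Object measure(ProceedingJoinPoint pjp) throws Throwable {
            long start = System.nanoTime();
            try {
                return pjp.proceed(); // delegate to the intercepted method
            } finally {
                System.out.println(pjp.getSignature() + " took " + (System.nanoTime() - start) + " ns");
            }
        }
    }

    @Test
    void aroundAdviceProceedsAndPropagatesResult() throws Throwable {
        TimingAspect aspect = new TimingAspect();
        ProceedingJoinPoint pjp = mock(ProceedingJoinPoint.class);
        when(pjp.proceed()).thenReturn("result");

        Object returned = aspect.measure(pjp);

        verify(pjp, times(1)).proceed();  // the advice must call through to the target
        assertEquals("result", returned); // and return the target's result unchanged
    }
}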
Additional Considerations

Testing pointcut expressions: Pointcut expressions define where advice is applied within the application. Writing tests to verify their correctness ensures that the advice targets the intended join points.
Testing aspect behavior: Aspects may perform more complex actions than simple logging. Unit tests should cover all of the aspect's behavior, including handling method parameters, logging additional information, or interacting with other components.
Integration testing: While unit tests focus on isolating aspects, integration tests may be necessary to verify the interactions between aspects and other components of the application, such as service classes or controllers.

By following these principles and best practices, developers can create thorough and reliable unit tests for Spring AOP aspects, ensuring the stability and maintainability of their applications.

Conclusion

Unit testing Spring AOP aspects is crucial for reliable and correct aspect-oriented code. To create robust tests, isolate aspects, use mocking frameworks, test each piece of advice separately, verify pointcut expressions, and assert the expected behavior. The sample code above provides a starting point for Java applications. With proper testing strategies in place, developers can confidently maintain and evolve AOP-based functionality in their Spring applications.
Hello! My name is Roman Burdiuzha. I am a Cloud Architect, Co-Founder, and CTO at Gart Solutions. I have been working in the IT industry for 15 years, a significant part of that time in management positions. Today I will tell you how I find specialists for my DevSecOps and AppSec teams, what I pay attention to, and how I communicate with job seekers who try to embellish their achievements during interviews.

Starting Point

It may surprise some of you, but I look for employees first of all not on job boards, but in communities, in general chats for IT specialists, and through acquaintances. This way you can find a person who already comes with recommendations and make a basic assessment of how well they suit you — not by their resume, but by their real reputation. You may even know them already, because you move in the same community.

Building the Ideal DevSecOps and AppSec Team: My Hiring Criteria

There are general chats for IT specialists in my city (and beyond), where you can simply write: "Hi everyone, this is what I do, and I'm looking for strong specialists to work with me," and then share the requirements that are currently relevant. When that is not an option, I fall back on the classic job boards. Before inviting someone for an interview, I pay attention to the following points from the resume and recommendations.

Programming Experience

I am convinced that any security professional in DevSecOps and AppSec must know code. Ideally, all security professionals should grow out of programmers. You may disagree with me, but DevSecOps and AppSec specialists work with code to one degree or another, be it YAML manifests, JSON, various scripts, or a classic application written in Java, Go, and so on. It is very wrong when a security professional does not know the language in which they are looking for vulnerabilities. You cannot look at one line that the scanner highlighted and say: "Yes, indeed, this line is exploitable in this case," or "It's a false positive." You need to know the whole project and its structure. If you are not a programmer, you simply will not understand the code.

Taking Initiative

I want my future employees to be proactive — people who work hard, take on big tasks, have ambitions, want to achieve, and invest real time in specific tasks. I support people's desire to develop in their field, to advance in the community, and to look for interesting tasks and projects, including outside of work. If the resume reflects this, I definitely count it as a plus.

Work-Life Balance

I also pay a lot of attention to this point, and I always bring it up during the interview. Hobbies and interests indicate a person's ability to switch off from work, their versatility, and that they are not fixated on the job alone. It does not have to be active sports, hiking, or walking; the main thing is that a person's life contains not only work but also life itself. It means they will not burn out after a couple of years of non-stop work. The ability to rest and step away acts as a guarantee of a long-term employment relationship.

In my experience, there have been only a couple of cases where employees had nothing in their lives but work, and I consider them unique people. They have worked in this rhythm for a long time without burning out or falling into depression; that takes a particular stamina and character.
But in 99% of cases, overwork and the inability to rest guarantee the employee's burnout and departure within 2-3 years. They can do a lot right now, but I don't want to change people like gloves every couple of years.

Education

I completed postgraduate studies myself, and I consider that more of a plus than a minus. You should check the certificates and diplomas listed in the resume: confirmation of qualifications through certificates can indicate that the declared competencies are genuine. Studying for five years is not easy, but while you study, you are forced to think in the right direction, analyze complex situations, and develop something that has scientific novelty and can later be used for people's benefit. Work is the same in principle: you combine common ideas with colleagues and create, for example, progressive DevOps practices that help people further — in particular, in the security of the banking sector.

References and Recommendations

I ask the applicant to provide contacts of previous employers or colleagues who can give recommendations on their work. If a person has worked in information security, there are usually mutual acquaintances with whom I also talk and who can confirm their qualifications.

What I Look for in an Interview

Unfortunately, not everything can be clarified at the resume-reading stage. The applicant may hide some things to present themselves in a more favorable light, but more often it is simply impossible to fit everything the employer needs into a resume. Through leading questions and the applicant's stories from previous jobs, I find out whether the potential employee has the qualities listed below.

Ability To Read

It sounds funny, but it is not such a common quality. A person who can read and analyze can solve almost any problem. I am absolutely convinced of this because I have been through it myself more than once. I now look for information from many sources and actively use ChatGPT and similar services just to speed up the work: the more information I push through myself, the more tasks I solve, and the more successful I become. Sometimes I ask the candidate to find a solution to a complex problem online, provide them with material for analysis, and watch how quickly they can read and produce a quality analysis of the article.

Analytical Mind

There are two processes: decomposition and composition. Programmers usually use the second one: they conduct compositional analysis, assembling from code an artifact that is needed for further work. An information security analyst or security specialist uses decomposition: they disassemble the artifact into its components and look for vulnerabilities. If a programmer creates, a security specialist takes apart.

An analytical mind is needed wherever you must understand how someone else's code works. In the '90s, for example, we talked about disassembling if the code was written in assembler: you have a binary file, and you need to understand how it works. And if you do not analyze all the entry and exit points, all the processes and functions the programmer has built into that code, you cannot be sure the program works as intended.
There can be many pitfalls in the logic of how a program operates, correctly or incorrectly. For example, take a function that accepts a certain amount of data. The programmer may assume the input is numeric, or that the data is limited to some sequence or length. Say we enter a card number: a card number seems to have a fixed length, but any analyst — and you — should understand that instead of digits there may be letters or special characters, and the length may not be what the programmer expected. All of this needs to be checked and every hypothesis analyzed, looking much wider than the business logic and assumptions of the programmer who wrote it.

How do you tell that a candidate has an analytical mind? It becomes clear at the conversation stage. You can simply ask questions like: "There is a data sample for process X consisting of 1,000 parameters. You need to determine the 30 most important ones. The analysis will be carried out by 3 groups of analysts. How would you divide the parameters among them to achieve high efficiency and reliability of the analysis?"

Experience Working in a Critical Situation

It is desirable that the applicant has experience working in a crunch — for example, running servers under heavy, critical load and being on call. Usually that means night shifts, evening shifts, and weekends when something has to be urgently brought back up and restored. Such people are very valuable. They truly know how to work, have personally gone through different kinds of "pain," are ready to put out fires with you, and — most importantly — are highly likely to be more careful than others.

I worked for a company that employed many students without experience. They broke things very often, and afterward everything had to be brought back up. That is partly a consequence of mentoring: you have to help, develop, and turn students into specialists, but it does not remove the "pain" of correcting mistakes. Until you go through all of that with them, they do not become good. If a person has participated in these processes and had the strength and skill to restore and fix things, that is excellent — you should seek out and hire such people, because they clearly know how to work.

How To Avoid Being Fooled by Job Seekers

Job seekers may overstate their achievements, but this is fairly easy to verify. If a person claims the necessary experience, ask practical questions that are difficult to answer without it. For example, I ask about the implementation of a particular DevSecOps practice: which orchestrator they worked in. In a few words, the applicant should describe, for example, a job in which it was all performed and which tool they used. You can even bring up specific keys of a given vulnerability scanner and ask which keys they would use, and in what context, to make everything work. Only a specialist who has actually worked with the tool can answer such questions. In my opinion, this is the best way to check a person: give small practical tasks that can be solved quickly.

It happens that applicants have not worked with exactly the same stack as me, and they may have more experience and knowledge in other areas. Then it makes sense to find common questions and points of contact we have both worked with.
For example, simply list 20 things from the field of information security, ask which of them the applicant is familiar with, find common points of interest, and then go through them in detail. When an applicant brags about their accomplishments in an interview, it is also better to ask specific questions. If a person describes what they implemented without hesitation, you can ask for small details about each item and direction — for example: how did you implement SAST verification, and with what tools? If they answer in detail, perhaps with nuances about the settings of a particular scanner, and it all fits the overall picture, then the person really lived this work and used what they are talking about.

Wrapping Up

These are all the points I pay attention to when looking for new people. I hope this information will be useful both to my fellow team leads and to job seekers, who now know which qualities to develop to pass the interview successfully.
Wireshark, the free, open-source packet sniffer and network protocol analyzer, has cemented itself as an indispensable tool in network troubleshooting, analysis, and security (on both sides). This article delves into the features, uses, and practical tips for harnessing the full potential of Wireshark, expanding on aspects that may have been glossed over in discussions or demonstrations. Whether you're a developer, security expert, or just curious about network operations, this guide will enhance your understanding of Wireshark and its applications.

Introduction to Wireshark

Wireshark, originally created by Gerald Combs, is designed to capture and analyze network packets in real time. Its capabilities extend across various network interfaces and protocols, making it a versatile tool for anyone involved in networking. Unlike its command-line counterpart, tcpdump, Wireshark's graphical interface simplifies the analysis process, presenting data in a user-friendly "proto view" that organizes packets in a hierarchical structure. This facilitates quick identification of protocols, ports, and data flows. The key features of Wireshark are:

Graphical user interface (GUI): Eases the analysis of network packets compared to command-line tools
Proto view: Displays packet data in a tree structure, simplifying protocol and port identification
Compatibility: Supports a wide range of network interfaces and protocols

Browser Network Monitors

Firefox and Chrome contain a far superior network monitor tool built into them — superior because it is simpler to use and works with secure websites out of the box. If you can use the browser to debug the network traffic, you should do that. When your traffic requires low-level protocol information or originates outside of the browser, Wireshark is the next best thing.

Installation and Getting Started

To begin with Wireshark, visit the official website for the download. The installation process is straightforward, but pay attention to the installation of the command-line tools, which may require separate steps. Upon launching Wireshark, you are greeted with a selection of network interfaces, as seen below. Choosing the correct interface is crucial for capturing relevant data: when debugging a local server (localhost), use the loopback interface, while traffic to remote servers will typically pass through a regular network adapter such as en0. You can use the activity graph next to each network adapter to identify active interfaces for capture.

Navigating Through Noise With Filters

One of the challenges of using Wireshark is the overwhelming amount of data captured, including irrelevant "background noise," as seen in the following image. Wireshark addresses this with powerful display filters, allowing users to hone in on specific ports, protocols, or data types. For instance, filtering TCP traffic on port 8080 can significantly reduce unrelated data, making it easier to debug specific issues. Notice that there is a completion widget at the top of the Wireshark UI that helps you discover valid filter values. In this case, we filter by port with tcp.port == 8080, the port typically used by Java servers (e.g., Spring Boot/Tomcat). But we can narrow things further, since HTTP is more concise: adding http to the filter restricts the view to HTTP requests and responses, as shown in the following image.
Deep Dive Into Data Analysis

Wireshark excels in its ability to dissect and present network data in an accessible manner. For example, HTTP responses carrying JSON data are automatically parsed and displayed in a readable tree structure, as seen below. This feature is invaluable for developers and analysts, providing insight into the data exchanged between clients and servers without manual decoding. Wireshark parses and displays JSON data within the packet analysis pane and offers both hexadecimal and ASCII views of the raw packet data.

Beyond Basic Usage

While Wireshark's basic functionalities cater to a wide range of networking tasks, its true strength lies in advanced features such as Ethernet network analysis, HTTPS decryption, and debugging across devices. These tasks, however, may involve complex configuration steps and a deeper understanding of network protocols and security measures. There are two big challenges when working with Wireshark:

HTTPS decryption: Decrypting HTTPS traffic requires additional configuration but offers visibility into secure communications.
Device debugging: Wireshark can be used to troubleshoot network issues on various devices, requiring specific knowledge of network configurations.

The Basics of HTTPS Encryption

HTTPS uses Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL), to encrypt data. This encryption mechanism ensures that any data transferred between the web server and the browser remains confidential and untouched. The process involves a series of steps, including the handshake, data encryption, and data integrity checks. Decrypting HTTPS traffic is often necessary for developers and network administrators to troubleshoot communication errors, analyze application performance, or ensure that sensitive data is correctly encrypted before transmission. It is a powerful capability for diagnosing complex issues that cannot be resolved by simply inspecting unencrypted traffic or server logs.

Methods for Decrypting HTTPS in Wireshark

Important: Decrypting HTTPS traffic should only be done on networks and systems you own or have explicit permission to analyze. Unauthorized decryption of network traffic can violate privacy laws and ethical standards.

Pre-Master Secret Key Logging

One common method involves using the pre-master secret keys to decrypt HTTPS traffic. Browsers like Firefox and Chrome can log the pre-master secret keys to a file when configured to do so, and Wireshark can then use this file to decrypt the traffic:

Configure the browser: Set an environment variable (SSLKEYLOGFILE) to specify a file where the browser will save the encryption keys.
Capture traffic: Use Wireshark to capture the traffic as usual.
Decrypt the traffic: Point Wireshark to the file with the pre-master secret keys (through Wireshark's preferences) to decrypt the captured HTTPS traffic.

Using a Proxy

Another approach involves routing traffic through a proxy server that decrypts HTTPS traffic and then re-encrypts it before sending it to the destination. This method might require setting up a dedicated decryption proxy that can handle the TLS encryption/decryption:

Set up a decryption proxy: Tools like mitmproxy or Burp Suite can act as an intermediary that decrypts and logs HTTPS traffic.
Configure the network to route through the proxy: Ensure the client's network settings route traffic through the proxy.
Inspect traffic: Use the proxy's tools to inspect the decrypted traffic directly.
Integrating tcpdump With Wireshark for Enhanced Network Analysis

While Wireshark offers a graphical interface for analyzing network packets, there are scenarios where using it directly is not feasible due to security policies or operational constraints. tcpdump, a powerful command-line packet analyzer, becomes invaluable in these situations, providing a flexible and less intrusive means of capturing network traffic.

The Role of tcpdump in Network Troubleshooting

tcpdump allows for the capture of network packets without a graphical user interface, making it ideal for use in environments with strict security requirements or limited resources. It operates on the principle of capturing network traffic to a file, which can then be analyzed at a later time, or on a different machine, using Wireshark.

Key Scenarios for tcpdump Usage

High-security environments: In places like banks or government institutions, where running network sniffers might pose a security risk, tcpdump offers a less intrusive alternative.
Remote servers: Debugging issues on a cloud server is challenging with Wireshark because of the graphical interface; tcpdump captures can be transferred and analyzed locally.
Security-conscious customers: Customers may be hesitant to allow third-party tools to run on their systems; tcpdump's command-line operation is often more palatable.

Using tcpdump Effectively

Capturing traffic with tcpdump involves specifying the network interface and an output file for the capture. This process is straightforward but powerful, allowing for detailed analysis of network interactions:

Command syntax: The basic command structure involves specifying the network interface (e.g., en0 for wireless connections) and the output file name.
Execution: Once the command is run, tcpdump silently captures network packets. The capture continues until it is manually stopped, at which point the captured data is saved to the specified file.
Opening captures in Wireshark: The file generated by tcpdump can be opened in Wireshark for detailed analysis, utilizing Wireshark's advanced features for dissecting and understanding network traffic.

The following shows the tcpdump command and its output:

$ sudo tcpdump -i en0 -w output
Password:
tcpdump: listening on en0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C3845 packets captured
4189 packets received by filter
0 packets dropped by kernel

Challenges and Considerations

Identifying the correct network interface for capture on remote systems might require additional steps, such as using the ifconfig command to list available interfaces. This step is crucial for ensuring that relevant traffic is captured for analysis.

Final Word

Wireshark stands out as a powerful tool for network analysis, offering deep insights into network traffic and protocols. Whether it's for low-level networking work, security analysis, or application development, Wireshark's features and capabilities make it an essential tool in the tech arsenal. With practice and exploration, users can leverage Wireshark to uncover detailed information about their networks, troubleshoot complex issues, and secure their environments more effectively. Wireshark's blend of ease of use and profound analytical depth ensures it remains a go-to solution for networking professionals across the spectrum. Its continuous development and wide-ranging applicability underscore its position as a cornerstone in the field of network analysis.
Combining tcpdump's capabilities for capturing network traffic with Wireshark's analytical prowess offers a comprehensive solution for network troubleshooting and analysis. This combination is particularly useful in environments where direct use of Wireshark is not possible or ideal. While both tools possess a steep learning curve due to their powerful and complex features, they collectively form an indispensable toolkit for network administrators, security professionals, and developers alike. This integrated approach not only addresses the challenges of capturing and analyzing network traffic in various operational contexts but also highlights the versatility and depth of tools available for understanding and securing modern networks.
In an era where the pace of software development and deployment is accelerating, the significance of having a robust and integrated DevOps environment cannot be overstated. Azure DevOps, Microsoft's suite of cloud-based DevOps services, is designed to support teams in planning work, collaborating on code development, and building and deploying applications with greater efficiency and reduced lead times. The objective of this blog post is twofold: first, to introduce Azure DevOps, shedding light on its components and how they converge to form a powerful DevOps ecosystem, and second, to provide a balanced perspective by delving into the advantages and potential drawbacks of adopting Azure DevOps. Whether you're contemplating the integration of Azure DevOps into your workflow or seeking to optimize your current DevOps practices, this post aims to equip you with a thorough understanding of what Azure DevOps has to offer, helping you make an informed decision tailored to your organization's unique requirements.

What Is Azure DevOps?

Azure DevOps represents the evolution of Visual Studio Team Services, capturing over 20 years of investment and learning in providing tools to support software development teams. As a cornerstone in the realm of DevOps solutions, Azure DevOps offers a suite of tools catering to the diverse needs of software development teams. Microsoft provides the product in the cloud as Azure DevOps Services, or on-premises as Azure DevOps Server, with integrated features accessible through a web browser or an IDE client. At its core, Azure DevOps comprises five key components, each designed to address specific aspects of the development process. These components are not only powerful in isolation but also offer enhanced benefits when used together, creating a seamless and integrated experience for users.

Azure Boards

Azure Boards offers teams a comprehensive solution for project management, including agile planning, work item tracking, and visualization tools. It enables teams to plan sprints, track work with Kanban boards, and use dashboards to gain insights into their projects. This component fosters enhanced collaboration and transparency, allowing teams to stay aligned on goals and progress.

Azure Repos

Azure Repos is a set of version control tools designed to manage code efficiently. It provides Git (distributed version control) or Team Foundation Version Control (centralized version control) for source code management. Developers can collaborate on code, manage branches, and track version history with complete traceability. This component ensures streamlined and accessible code management, allowing teams to focus on building rather than merely managing their codebase.

Azure Pipelines

Azure Pipelines automates the stages of the application's lifecycle, from continuous integration and continuous delivery to continuous testing, build, and deployment. It supports any language, platform, and cloud, offering a flexible solution for deploying code to multiple targets such as virtual machines, various environments, containers, on-premises infrastructure, or PaaS services. With Azure Pipelines, teams can ensure that code changes are automatically built, tested, and deployed, facilitating faster and more reliable software releases.

Azure Test Plans

Azure Test Plans provides a suite of tools for test management, enabling teams to plan and execute manual, exploratory, and automated testing within their CI/CD pipelines.
Furthermore, Azure Test Plans ensures end-to-end traceability by linking test cases and suites to user stories, features, or requirements. It facilitates comprehensive reporting and analysis through configurable tracking charts, test-specific widgets, and built-in reports, giving teams actionable insights for continuous improvement and a framework for the rigorous testing needed to ensure that applications meet the highest standards before release.

Azure Artifacts

Azure Artifacts allows teams to manage and share software packages and dependencies across the development lifecycle, offering a streamlined approach to package management. It supports various package formats, including npm, NuGet, Python, Cargo, Maven, and Universal Packages, fostering efficient development processes. This service not only accelerates development cycles but also enhances reliability and reproducibility by providing a dependable source for package distribution and version control, ultimately empowering teams to deliver high-quality software products with confidence.

Below is an example of an architecture leveraging various Azure DevOps services (image from Microsoft).

Benefits of Leveraging Azure DevOps

Azure DevOps presents a compelling array of benefits that cater to the multifaceted demands of modern software development teams. Its comprehensive suite of tools is designed to streamline and optimize various stages of the development lifecycle, fostering efficiency, collaboration, and quality. Here are some of the key advantages:

Seamless Integration

One of Azure DevOps' standout features is its ability to integrate seamlessly with a plethora of tools and platforms, whether they are from Microsoft or other vendors. This interoperability is crucial for anyone who uses a diverse set of tools in their development processes.

Scalability and Flexibility

Azure DevOps is engineered to scale alongside your business. Whether you're working on small projects or large enterprise-level solutions, Azure DevOps can handle the load, providing the same level of performance and reliability. This scalability is vital for enterprises that foresee growth or experience fluctuating demands.

Enhanced Collaboration and Visibility

Collaboration is at the heart of Azure DevOps. With features like Azure Boards, teams have a centralized view of their projects, can track progress, and can coordinate efforts efficiently. This visibility is essential for aligning cross-functional teams, managing dependencies, and ensuring that everyone is on the same page.

Continuous Integration and Deployment (CI/CD)

Azure Pipelines provides robust CI/CD capabilities, enabling teams to automate the building, testing, and deployment of their applications. This automation is crucial for accelerating time-to-market and improving software quality. By automating these processes, teams can detect and address issues early, reduce manual errors, and ensure that the software is always in a deployable state, enhancing both operational efficiency and reliability.

Drawbacks of Azure DevOps

While Azure DevOps offers a host of benefits, it's essential to acknowledge and understand its potential drawbacks. Like any tool or platform, it may not be the perfect fit for every organization or scenario.
Here are some of the disadvantages that one might encounter:

Vendor Lock-In

By adopting Azure DevOps services for project management, version control, continuous integration, and deployment, organizations may find themselves tightly integrated into the Microsoft ecosystem. This dependency could limit flexibility and increase reliance on Microsoft's tools and services, making it challenging to transition to alternative platforms or technologies in the future.

Integration Challenges

Although Azure DevOps boasts impressive integration capabilities, there can be challenges when interfacing with certain non-Microsoft or legacy systems. Some integrations may require additional customization or the use of third-party tools, potentially leading to increased complexity and maintenance overhead. For organizations heavily reliant on non-Microsoft products, this could pose integration and workflow continuity challenges.

Cost Considerations

Azure DevOps operates on a subscription-based pricing model which, while flexible, can become significant at scale, especially for larger teams or enterprises with extensive requirements. The cost can escalate based on the number of users, the level of access needed, and the use of additional features and services. For smaller teams or startups, the pricing may be a considerable factor when deciding whether Azure DevOps is the right solution for their needs.

Potential for Over-Complexity

With its myriad features and tools, there is a risk of over-complicating workflows and processes within Azure DevOps. Teams may find themselves navigating a plethora of options and configurations which, if not properly managed, can lead to inefficiency rather than improved productivity. Organizations must strike a balance between leveraging Azure DevOps' capabilities and maintaining simplicity and clarity in their processes.

While these disadvantages are noteworthy, they do not necessarily diminish the overall value that Azure DevOps can provide to an organization. It's crucial for enterprises and organizations to carefully assess their specific needs, resources, and constraints when considering Azure DevOps as their solution. By acknowledging these potential drawbacks, organizations can plan effectively, ensuring that their adoption of Azure DevOps is strategic, well-informed, and aligned with their operational goals and challenges.

Conclusion

In the landscape of modern software development, Azure DevOps stands out as a robust and comprehensive platform, offering a suite of tools designed to enhance and streamline the DevOps process. Its integration capabilities, scalability, and extensive features make it an attractive choice for any organization or enterprise. However, like any sophisticated platform, Azure DevOps comes with its own set of challenges and considerations: vendor lock-in, integration complexities, cost factors, and the potential for over-complexity are aspects that organizations need to weigh carefully. It's crucial for enterprises to undertake a thorough analysis of their specific needs, resources, and constraints when evaluating Azure DevOps as a solution. The decision to adopt Azure DevOps should be guided by a strategic assessment of how well its advantages align with the organization's goals and how its disadvantages might impact operations.
For many enterprises, the benefits of streamlined workflows, enhanced collaboration, and improved efficiency will outweigh the drawbacks, particularly when the adoption is well-planned and aligned with the organization's objectives.
Parameterized tests allow developers to efficiently test their code with a range of input values. In the realm of JUnit testing, seasoned users have long grappled with the complexities of implementing these tests. But with the release of JUnit 5.7, a new era of test parameterization has arrived, offering developers first-class support and enhanced capabilities. Let's delve into the possibilities that JUnit 5.7 brings to the table for parameterized testing!

Parameterization Samples From the JUnit 5.7 Docs

Let's see some examples from the docs:

Java
@ParameterizedTest
@ValueSource(strings = { "racecar", "radar", "able was I ere I saw elba" })
void palindromes(String candidate) {
    assertTrue(StringUtils.isPalindrome(candidate));
}

@ParameterizedTest
@CsvSource({
    "apple, 1",
    "banana, 2",
    "'lemon, lime', 0xF1",
    "strawberry, 700_000"
})
void testWithCsvSource(String fruit, int rank) {
    assertNotNull(fruit);
    assertNotEquals(0, rank);
}

@ParameterizedTest
@MethodSource("stringIntAndListProvider")
void testWithMultiArgMethodSource(String str, int num, List<String> list) {
    assertEquals(5, str.length());
    assertTrue(num >= 1 && num <= 2);
    assertEquals(2, list.size());
}

static Stream<Arguments> stringIntAndListProvider() {
    return Stream.of(
        arguments("apple", 1, Arrays.asList("a", "b")),
        arguments("lemon", 2, Arrays.asList("x", "y"))
    );
}

The @ParameterizedTest annotation has to be accompanied by one of several provided source annotations describing where to take the parameters from. The source of the parameters is often referred to as the "data provider." I will not dive into their detailed description here — the JUnit user guide does it better than I could — but allow me to share several observations:

The @ValueSource is limited to providing a single parameter value only. In other words, the test method cannot have more than one argument, and the types one can use are restricted as well.
Passing multiple arguments is somewhat addressed by @CsvSource, which parses each string into a record that is then passed as arguments field by field. This can easily get hard to read with long strings and/or plentiful arguments. The types one can use are also restricted — more on this later.
All the sources that declare the actual values in annotations are restricted to values that are compile-time constants (a limitation of Java annotations, not JUnit).
@MethodSource and @ArgumentsSource provide a stream/collection of (un-typed) n-tuples that are then passed as method arguments. Various actual types are supported to represent the sequence of n-tuples, but none of them guarantee that they will fit the method's argument list. This kind of source requires additional methods or classes, but it places no restriction on where and how to obtain the test data.

As you can see, the source types available range from the simple ones (easy to use, but limited in functionality) to the ultimately flexible ones that require more code to get working.

Sidenote — This is generally a sign of good design: a little code is needed for essential functionality, and adding extra complexity is justified when it enables a more demanding use case.

What does not seem to fit this hypothetical simple-to-flexible continuum is @EnumSource. Take a look at this non-trivial example of four parameter sets with 2 values each.

Note — While @EnumSource passes the enum's value as a single test method parameter, conceptually, the test is parameterized by the enum's fields, which poses no restriction on the number of parameters.
Java
enum Direction {
    UP(0, '^'), RIGHT(90, '>'), DOWN(180, 'v'), LEFT(270, '<');

    private final int degrees;
    private final char ch;

    Direction(int degrees, char ch) {
        this.degrees = degrees;
        this.ch = ch;
    }
}

@ParameterizedTest
@EnumSource
void direction(Direction dir) {
    assertEquals(0, dir.degrees % 90);
    assertFalse(Character.isWhitespace(dir.ch));

    int orientation = player.getOrientation();
    player.turn(dir);
    assertEquals((orientation + dir.degrees) % 360, player.getOrientation());
}

Just think of it: the hardcoded list of values restricts its flexibility severely (no external or generated data), while the amount of additional code needed to declare the enum makes this quite a verbose alternative to, say, @CsvSource. But that is just a first impression. We will see how elegant this can get when leveraging the true power of Java enums.

Sidenote — This article does not address the verification of enums that are part of your production code. Those, of course, have to be declared no matter how you choose to verify them. Instead, it focuses on when and how to express your test data in the form of enums.

When To Use It

There are situations when enums perform better than the alternatives:

Multiple Parameters per Test

When all you need is a single parameter, you likely do not want to complicate things beyond @ValueSource. But as soon as you need multiple — say, inputs and expected results — you have to resort to @CsvSource, @MethodSource/@ArgumentsSource, or @EnumSource. In a way, an enum lets you "smuggle in" any number of data fields. So when you need to add more test method parameters in the future, you simply add more fields to your existing enums, leaving the test method signatures untouched. This becomes priceless when you reuse your data provider in multiple tests. For other sources, one has to employ ArgumentsAccessors or ArgumentsAggregators to get the flexibility that enums have out of the box.

Type Safety

For Java developers, this should be a big one. Parameters read from CSV (files or literals), @MethodSource, or @ArgumentsSource provide no compile-time guarantee that the parameter count and types are going to match the signature (see the sketch after this list). Obviously, JUnit is going to complain at runtime, but forget about any code assistance from your IDE. As before, this adds up when you reuse the same parameters for multiple tests. A type-safe approach is a huge win when extending the parameter set in the future.

Custom Types

This is mostly an advantage over text-based sources, such as the ones reading data from CSV — the values encoded in the text need to be converted to Java types. If you have a custom class to instantiate from the CSV record, you can do it using an ArgumentsAggregator. However, your data declaration is still not type-safe — any mismatch between the method signature and the declared data will pop up at runtime when "aggregating" arguments. Not to mention that declaring the aggregator class adds more support code needed for your parameterization to work — and we only favored @CsvSource over @EnumSource to avoid extra code in the first place.

Documentable

Unlike the other methods, the enum source has Java symbols for both the parameter sets (enum instances) and all the parameters they contain (enum fields). These provide a straightforward place to attach documentation in its most natural form — JavaDoc. It is not that documentation cannot be placed elsewhere, but it will be — by definition — placed further from what it documents, and thus be harder to find and easier to let become outdated.
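To illustrate the type-safety pitfall mentioned above, here is a minimal sketch (a hypothetical test, not from the article): the provider's tuple shape has drifted from the method signature, yet the code still compiles and only fails when the test runs.

Java
import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.params.provider.Arguments.arguments;

public class TypeSafetyPitfallTest {

    @ParameterizedTest
    @MethodSource("fruits")
    void fruitHasPositiveRank(String name, int rank) {
        assertTrue(rank > 0);
    }

    // Compiles happily, but "big" cannot be converted to the int parameter:
    // the mismatch surfaces only as a runtime error when the test executes.
    static Stream<Arguments> fruits() {
        return Stream.of(arguments("apple", "big"));
    }
}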
But There Is More!

Now: Enums. Are. Classes. It feels that many junior developers are yet to realize how powerful Java enums truly are. In other programming languages, they really are just glorified constants. But in Java, they are convenient little implementations of the Flyweight design pattern with (much of) the advantages of full-blown classes. Why is that a good thing?

Test Fixture-Related Behavior

As with any other class, enums can have methods added to them. This becomes handy if enum test parameters are reused between tests — same data, just tested a little differently. To work with the parameters effectively and without significant copy and paste, some helper code needs to be shared between those tests as well. It is not something a helper class and a few static methods would not "solve."

Sidenote — Notice that such a design suffers from Feature Envy. Test methods — or worse, helper class methods — would have to pull the data out of the enum objects to perform actions on that data. While this is the (only) way in procedural programming, in the object-oriented world, we can do better.

By declaring the "helper" methods right in the enum declaration itself, we move the code to where the data is. Or, to put it in OOP lingo, the helper methods become the "behavior" of the test fixtures implemented as enums. This not only makes the code more idiomatic (calling sensible methods on instances rather than passing data around to static methods), it also makes it easier to reuse enum parameters across test cases.

Inheritance

Enums can implement interfaces with (default) methods. Used sensibly, this can be leveraged to share behavior between several data providers — several enums. An example that easily comes to mind is separate enums for positive and negative tests: if they represent a similar kind of test fixture, chances are they have some behavior to share. A minimal sketch of this pattern follows.
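Here is one way such sharing could look. This is an illustrative sketch, not code from the article: the FixtureFile interface and both enums are hypothetical, with the default read() method playing the role of the shared behavior.

Java
import java.io.IOException;
import java.io.InputStream;

// Shared fixture behavior lives in the interface's default method,
// so every enum implementing it inherits read() for free.
interface FixtureFile {
    String fileName();

    default InputStream read() throws IOException {
        InputStream is = FixtureFile.class.getResourceAsStream("/fixtures/" + fileName());
        if (is == null) {
            throw new IOException("Missing fixture: " + fileName());
        }
        return is;
    }
}

// Fixtures for positive tests ...
enum ValidXml implements FixtureFile {
    MINIMAL("minimal.xml"),
    FULL("full.xml");

    private final String fileName;

    ValidXml(String fileName) { this.fileName = fileName; }

    @Override
    public String fileName() { return fileName; }
}

// ... and fixtures for negative tests, sharing the same read() behavior.
enum InvalidXml implements FixtureFile {
    TRUNCATED("truncated.xml"),
    NO_EVENTS_REPORTED("no-events.xml");

    private final String fileName;

    InvalidXml(String fileName) { this.fileName = fileName; }

    @Override
    public String fileName() { return fileName; }
}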
@Nonnull final String inFile; @CheckForNull final String expectedOutput; @CheckForNull final Exception expectedException; @Nonnull final Set<String> expectedWarnings; Conversion(@Nonnull String inFile, @Nonnull String expectedOutput, @NotNull Set<String> expectedWarnings) { this(inFile, expectedOutput, null, expectedWarnings); } Conversion(@Nonnull String inFile, @Nonnull Exception expectedException) { this(inFile, null, expectedException, Set.of()); } Conversion(@Nonnull String inFile, String expectedOutput, Exception expectedException, @Nonnull Set<String> expectedWarnings) { this.inFile = inFile; this.expectedOutput = expectedOutput; this.expectedException = expectedException; this.expectedWarnings = expectedWarnings; } public File getV2File() { ... } public File getV3File() { ... } } @ParameterizedTest @EnumSource void upgrade(Conversion con) { try { File actual = convert(con.getV2File()); if (con.expectedException != null) { fail("No exception thrown when one was expected", con.expectedException); } assertEquals(con.expectedWarnings, getLoggedWarnings()); new FileAssert(actual).isEqualTo(con.getV3File()); } catch (Exception ex) { assertTypeAndMessageEquals(con.expectedException, ex); } } The usage of enums does not restrict us in how complex the data can be. As you can see, we can define several convenient constructors in the enums, so declaring new parameter sets is nice and clean. This prevents the usage of long argument lists that often end up filled with many "empty" values (nulls, empty strings, or collections) that leave one wondering what argument #7 — you know, one of the nulls — actually represents. Notice how enums enable the use of complex types (Set, RuntimeException) with no restrictions or magical conversions. Passing such data is also completely type-safe. Now, I know what you think. This is awfully wordy. Well, up to a point. Realistically, you are going to have a lot more data samples to verify, so the amount of the boilerplate code will be less significant in comparison. Also, see how related tests can be written leveraging the same enums, and their helper methods: Java @ParameterizedTest @EnumSource // Upgrading files already upgraded always passes, makes no changes, issues no warnings. void upgradeFromV3toV3AlwaysPasses(Conversion con) throws Exception { File actual = convert(con.getV3File()); assertEquals(Set.of(), getLoggedWarnings()); new FileAssert(actual).isEqualTo(con.getV3File()); } @ParameterizedTest @EnumSource // Downgrading files created by upgrade procedure is expected to always pass without warnings. void downgrade(Conversion con) throws Exception { File actual = convert(con.getV3File()); assertEquals(Set.of(), getLoggedWarnings()); new FileAssert(actual).isEqualTo(con.getV2File()); } Some More Talk After All Conceptually, @EnumSourceencourages you to create a complex, machine-readable description of individual test scenarios, blurring the line between data providers and test fixtures. One other great thing about having each data set expressed as a Java symbol (enum element) is that they can be used individually; completely out of data providers/parameterized tests. Since they have a reasonable name and they are self-contained (in terms of data and behavior), they contribute to nice and readable tests. 
```java
@Test
void warnWhenNoEventsReported() throws Exception {
    FixtureXmls.Invalid events = FixtureXmls.Invalid.NO_EVENTS_REPORTED;
    // read() is a helper method that is shared by all FixtureXmls
    try (InputStream is = events.read()) {
        EventList el = consume(is);
        assertEquals(Set.of(...), el.getWarnings());
    }
}
```

Now, @EnumSource is not going to be one of your most frequently used argument sources, and that is a good thing, as overusing it would do no good. But in the right circumstances, it is handy to know how to use all it has to offer.
Brief Problem Description

Imagine the situation: you (a Python developer) start a new job or join a new project, and you are told that the documentation is out of date or even absent, and those who wrote the code resigned a long time ago. Moreover, the code is written in a language that you are not familiar with. You open the code, start examining it, and realize that there are no tests either. Also, the service has been working in production for so long that you are afraid to change anything. I am not talking about any particular project or company; I have experienced this at least three times.

Black Box

So, you have a black box that exposes API methods (judging by the code), and you know that it calls something and writes to a database. There is also documentation for the services that receive its requests. On the plus side, the service starts, there is documentation on the APIs that it calls, and the service code is quite readable. On the minus side, it wants to fetch something via API. Some of that can be run in a container, and some can be used from a developer environment, but not everything. Another problem is that requests to the black box are encrypted and signed, as are the requests from it to some other services. At the same time, you need to change something in this service without breaking what already works.

In such cases, Postman or cURL is inconvenient to use: you have to prepare each request for each specific case, since there are dynamic input data and signatures that depend on the time of the request. There are almost no ready-made tests, and it is difficult to write them if you do not know the language very well. The market offers solutions that allow you to run tests against such a service, but I have never used them, so learning them would have been more difficult and taken much more time than creating my own solution.

Created Solution

I came up with a simple and convenient option: a small script in Python that exercises the application. I used requests and a simple signature implementation that I wrote very quickly for requests prepared in advance. Next, I needed to mock the backends.

First Option

To do this, I simply ran a mock service in Python. In my case, Django turned out to be the fastest and easiest tool for the job. I decided to implement everything as simply and quickly as possible and used the latest version of Django. The result was quite good, but it covered only one method and took me several hours, despite the fact that I wanted to save time. There are dozens of such methods.

Examples of Configuration Files

In the end, I got rid of everything I did not need and simply generated JSON with requests and responses. I described each request from the front end of my application, the expected response of the service to which requests were sent, as well as the rules for checking the response to the main request. For each method, I wrote a separate URL. However, manually switching the responses of one method between correct and incorrect and then exercising each method is difficult and time-consuming.
JSON { "id": 308, "front": { "method": "/method1", "request": { "method": "POST", "data": { "from_date": "dfsdsf", "some_type": "dfsdsf", "goods": [ { "price": "112323", "name": "123123", "quantity": 1 } ], "total_amount": "2113213" } }, "response": { "code": 200, "body": { "status": "OK", "data": { "uniq_id": "sdfsdfsdf", "data": [ { "number": "12223", "order_id": "12223", "status": "active", "code": "12223", "url": "12223", "op_id": "12223" } ] } } } }, "backend": { "response": { "code": 200, "method": "POST", "data": { "body": { "status": 1, "data": { "uniq_id": "sdfsdfsdf", "data": [ { "number": "12223", "order_id": "12223", "status": "active", "code": "12223", "url": "12223", "op_id": "12223" } ] } } } } } } Second Option Then I linked mock objects to the script. As a result, it appeared that there is a script call that pulls my application and there is a mock object that responds to all its requests. The script saves the ID of the selected request, and the mock object generates a response based on this ID. Thus, I collected all requests in different options: correct and with errors. What I Got As a result, I got a simple view with one function for all URLs. This function takes a certain request identifier and, based on it, looks for the response rules — a mock object. In the meantime, the script that pulls the service before the request writes this very request identifier to the storage. This script simply takes each case in turn, writes an identifier, and makes the correct request, then it checks if the response is correct, and that's it. Intermediate Connections However, I needed not only to generate responses to these requests but also to test requests to mock objects. After all, the service could send an incorrect request, so it was necessary to check them too. As a result, there was a huge number of configuration files, and my several API methods turned into hundreds of large configuration files for checking. Connecting Database I decided to transfer everything to a database. My service began to write not only to the console but also to the database so that it would be possible to generate reports. That appeared to be more convenient: each case had its own entry in the database. Cases are combined into projects and have flags that allow you to disable irrelevant options. In the settings, I added request and response modifiers, which should be applied to each request and response at all levels. To simplify this as much as possible, I use SQLite. Django has it by default. I have transferred all configuration files to the database and saved all testing results in it. Algorithm Therefore, I found a very simple and flexible solution. It already works as an external integration test for three microservices, but I am the only one who uses it. It certainly does not override unit tests, but it complements them well. When I need to validate services, I use this Django tester to do that. Configuration File Example The settings have become simpler and are managed with Django Admin. I can easily turn them off, change, and watch history. I could go further and make a full-fledged UI, but this is more than enough for me for now. 
Request Body JSON

```json
{
  "from_date": "dfsdsf",
  "some_type": "dfsdsf",
  "goods": [
    { "price": "112323", "name": "123123", "quantity": 1 }
  ],
  "total_amount": "2113213"
}
```

Response Body JSON

```json
{
  "uniq_id": "sdfsdfsdf",
  "data": [
    {
      "number": "12223",
      "order_id": "12223",
      "status": "active",
      "code": "12223",
      "url": "12223",
      "op_id": "12223"
    }
  ]
}
```

Backend Response Body JSON

```json
{
  "status": 1,
  "data": {
    "uniq_id": "sdfsdfsdf",
    "data": [
      {
        "number": "12223",
        "order_id": "12223",
        "status": "active",
        "code": "12223",
        "url": "12223",
        "op_id": "12223"
      }
    ]
  }
}
```

What It Gives You

In what way can this service be useful? Sometimes, even with tests, you need to exercise services from the outside, or several services in a chain. Those services may themselves be black boxes. The database can run in Docker, and the API can run in Docker as well. You just need to set a host, port, and configuration files and run it.

Why the Unusual Solution?

Some may say that you could use third-party tools, integration tests, or some other kind of testing. Of course you could! But with limited resources, there is often no time to apply all of that, and quick, effective solutions are needed. And here the simplest Django service meets all the requirements.
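As a rough illustration of the "one function for all URLs" approach described above, here is a minimal sketch of such a catch-all Django view. Case, CurrentCase, and their helper methods are hypothetical stand-ins for the project's actual models, not code from it; only JsonResponse, csrf_exempt, and re_path are Django itself.

```python
import json

from django.http import JsonResponse
from django.urls import re_path
from django.views.decorators.csrf import csrf_exempt

from .models import Case, CurrentCase  # hypothetical models: config rules + "current case ID" storage


@csrf_exempt
def mock_backend(request, path=""):
    """Answer any URL according to the rules of the currently selected case."""
    case = Case.objects.get(pk=CurrentCase.current_id())  # ID written by the test script beforehand
    rules = case.rules_for(path)  # hypothetical lookup of the request/response rules for this URL

    # The mock also verifies the incoming request, since the black box
    # may itself send something incorrect.
    body = json.loads(request.body or b"{}")
    if body != rules["expected_request"]:
        case.record_mismatch(path, body)  # hypothetical: persist the failure for reporting

    return JsonResponse(rules["response"]["body"], status=rules["response"]["code"])


# Route every path to the single view; all behavior lives in the database-driven rules.
urlpatterns = [re_path(r"^(?P<path>.*)$", mock_backend)]
```

Keeping all behavior in the database-driven rules is exactly what makes switching one method's responses between correct and broken cheap.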
Cloud computing has revolutionized how software organizations operate, offering unprecedented scalability, flexibility, and cost-efficiency in managing digital resources. This transformative technology enables businesses to rapidly deploy and scale services, adapt to changing market demands, and reduce operational costs. However, the transition to cloud infrastructure is challenging. The inherently dynamic nature of cloud environments and the escalating sophistication of cyber threats have made traditional security measures insufficient. In this rapidly evolving landscape, proactive and preventative strategies have become paramount to safeguard sensitive data and maintain operational integrity.

Against this backdrop, integrating security practices within the development and operational workflows (DevSecOps) has emerged as a critical approach to fortifying cloud environments. At the heart of this paradigm shift is Continuous Security Testing (CST), a practice designed to embed security seamlessly into the fabric of cloud computing. CST facilitates the early detection and remediation of vulnerabilities and ensures that security considerations keep pace with rapid deployment cycles, enabling a more resilient and agile response to potential threats. By weaving security into every phase of the development process, from initial design to deployment and maintenance, CST embodies the proactive stance necessary in today's cyber landscape. This approach minimizes the attack surface and aligns with the dynamic, on-demand nature of cloud services, ensuring that security evolves in lockstep with technological advancements and emerging threats. As organizations navigate the complexities of cloud adoption, embracing Continuous Security Testing within a DevSecOps framework offers a comprehensive and adaptive strategy to confront the multifaceted cyber challenges of the digital age. Most respondents (96%) to a recent software security survey believe their company would benefit from DevSecOps' central idea of automating security and compliance activities.

This article describes how CST can strengthen your cloud security and how you can integrate it into your cloud architecture.

Key Concepts of Continuous Security Testing

Continuous Security Testing (CST) helps identify and address security vulnerabilities throughout your application development lifecycle. Using automation tools, it analyzes your complete security posture and discovers and resolves vulnerabilities. The following are the fundamental principles behind it:

- Shift-left approach: CST promotes early adoption of security measures by bringing security testing and mitigation to the start of the software development lifecycle. This reduces the possibility of vulnerabilities surviving into later phases by assisting in the early detection and resolution of security issues.
- Automated security testing: Automation is critical to CST, allowing for consistent and rapid evaluation of security measures, vulnerability scanning, and code analysis.
- Continuous monitoring and feedback: As part of CST, security incidents and feedback loops are monitored in real time, allowing security vulnerabilities to be identified and fixed quickly.

Integrating Continuous Security Testing Into the Cloud

Let's explore the phases involved in integrating CST into cloud environments.
Laying the Foundation for Continuous Security Testing in the Cloud

Before diving into integrating CST within your cloud infrastructure, it is crucial to lay a solid foundation by meticulously preparing your cloud environment. This preparatory step involves conducting a comprehensive security audit, whether through manual reviews guided by resources such as those of the Open Web Application Security Project (OWASP) or through automated security testing processes, to identify vulnerabilities and ensure your cloud architecture is fortified against threats.

Conduct a detailed inventory of all assets and resources within your cloud architecture to assess your cloud environment's security posture. This includes everything from data storage solutions and archives to virtual machines and network configurations. By understanding the full scope of your cloud environment, you can better identify potential vulnerabilities and areas of risk.

Next, systematically evaluate these components for security weaknesses, ensuring no stone is left unturned. This evaluation should encompass both the internal and external aspects of your cloud infrastructure, scrutinizing access controls, data encryption methods, and the security protocols of interconnected services and applications. Identifying and addressing these vulnerabilities at this stage sets a robust groundwork for the seamless integration of CST, enhancing your cloud environment's resilience to cyber threats and ensuring the secure, uninterrupted operation of cloud-based services. By undertaking these critical preparatory steps, you position your organization to leverage CST effectively as a dynamic, ongoing practice that detects emerging threats in real time and integrates security seamlessly into every phase of your cloud computing operations.

Establishing Effective Security Testing Criteria

The cornerstone of implementing CST within cloud ecosystems is meticulously defining the security testing requirements. This pivotal step involves identifying a holistic suite of testing methodologies encompassing your security landscape, ensuring thorough coverage and protection against potential vulnerabilities. A multifaceted approach to security testing is essential for a robust defense strategy and encompasses a variety of criteria, such as:

- Vulnerability scanning: Systematic examination of your cloud environment to identify and classify security loopholes
- Penetration testing: Simulated cyber attacks against your system to evaluate the effectiveness of security measures
- Compliance inspections: Assessments to ensure that cloud operations adhere to industry standards and regulatory requirements
- Source code analysis: Examination of application source code to detect security flaws or vulnerabilities
- Configuration analysis: Evaluation of system configurations to identify security weaknesses stemming from misconfigurations or outdated settings
- Container security analysis: Analysis focused on the security of containerized applications, including their deployment, management, and orchestration
Organizations can proactively identify and rectify security vulnerabilities within their cloud architecture by selecting the appropriate mix of these testing criteria. This proactive stance enhances the overall security posture and embeds a culture of continuous improvement and vigilance across the cloud computing landscape. A comprehensive, systematic approach to security testing ensures that your cloud environment remains resilient against evolving cyber threats, safeguarding your critical assets and data.

Choosing the Right Security Testing Tools for Automation

The transition to automated security testing tools is critical for achieving faster and more accurate security assessments, significantly reducing the manual effort, workforce involvement, and resources dedicated to routine tasks. A diverse range of tools exists to support this need, including Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and security checks for Infrastructure as Code (IaC). These technologies are easy to integrate into Continuous Integration/Continuous Deployment (CI/CD) pipelines and improve security by finding and fixing vulnerabilities early in development. More than half of DevOps teams conduct SAST scans, 44% conduct DAST scans, and almost 50% inspect containers and dependencies as part of their security measures.

When choosing automation tools, evaluate them on several critical factors beyond their primary functionality: ease of use and integration into existing workflows, the capacity for timely updates in response to newly disclosed vulnerabilities, and the balance between their cost and the return on investment they offer. These factors ensure that the selected tools enhance security measures and align with the organization's overall security strategy and resource allocation, facilitating a more secure and efficient development lifecycle.

Continuous Monitoring and Improvement

The bedrock of maintaining an up-to-date and secure cloud infrastructure lies in continuous monitoring and iterative improvement throughout its entire lifecycle. Integrate your cloud logs with Security Information and Event Management (SIEM) capabilities to get centralized security intelligence and initiate continuous monitoring and improvement. Similarly, the ELK Stack (Elasticsearch, Logstash, Kibana) is another toolset that can help you collect, visualize, and analyze your log data. Regularly monitoring your security landscape and adapting based on the insights gleaned from testing and monitoring outputs are essential. Such a proactive approach not only aids in preemptively identifying and mitigating potential threats but also ensures that your security framework remains robust and adaptive to the ever-evolving cyber threat landscape.
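To make the log-centralization idea concrete, here is a minimal sketch that ships a structured security event to Elasticsearch over its standard document API. The node address, index name, and event fields are illustrative assumptions, not part of any particular deployment; a production setup would typically route events through Logstash or an agent instead.

```python
import datetime
import json

import requests

ES_URL = "http://localhost:9200"       # assumed local Elasticsearch node
INDEX = "cloud-security-logs"          # hypothetical index name


def ship_event(event_type: str, detail: dict) -> None:
    """Send one structured security event to Elasticsearch for later analysis in Kibana."""
    doc = {
        "@timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "event_type": event_type,
        "detail": detail,
    }
    resp = requests.post(
        f"{ES_URL}/{INDEX}/_doc",  # standard Elasticsearch document-indexing endpoint
        data=json.dumps(doc),
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    resp.raise_for_status()


ship_event("failed_login", {"user": "alice", "source_ip": "203.0.113.7"})
```

Once events land in a shared index like this, the alerting thresholds discussed above can be defined in one place rather than per service.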
Strategic Risk Management and Mitigation Efforts

Effective security management requires a strategic approach to evaluating and mitigating vulnerabilities, guided by their criticality, exploitability, and potential repercussions for the organization. Utilizing threat modeling techniques enables a targeted allocation of resources, focusing on the areas of highest risk to reduce exposure and avert potential security incidents.

After identifying critical vulnerabilities, devising and executing a comprehensive risk mitigation strategy is imperative. This strategy should encompass a range of solutions tailored to diminish the identified risks, including the deployment of software patches and updates, the establishment of enhanced security protocols, the integration of additional safeguarding measures, or even a strategic overhaul of existing systems and processes. Organizations can fortify their defenses by prioritizing and systematically addressing vulnerabilities based on severity and impact, ensuring a more secure and resilient operational environment.

Benefits of Continuous Security Testing in the Cloud

There are numerous benefits to using continuous security testing in cloud environments:

- Early vulnerability detection: Using CST, you can identify security issues early on and address them before they pose a risk.
- Enhanced security quality: Security testing gives your cloud infrastructure an additional layer of protection against cyberattacks.
- Enhanced innovation and agility: CST enables faster release cycles by identifying risks early on, allowing you to take proactive countermeasures.
- Enhanced team collaboration: CST promotes collaboration between different teams, cultivating a culture of collective accountability for security.
- Compliance with industry standards: By routinely assessing security controls and procedures, you reduce the possibility of fines and penalties for noncompliance with corporate policies and legal requirements.

Conclusion

In the rapidly evolving landscape of cloud computing, Continuous Security Testing (CST) emerges as a cornerstone for safeguarding cloud environments against pervasive cyber threats. By weaving security seamlessly into the development fabric through automation and vigilant monitoring, CST empowers organizations to detect and neutralize vulnerabilities preemptively. The adoption of CST transcends mere risk management; it fosters an environment where security, innovation, and collaboration converge, propelling businesses forward. This synergistic approach elevates organizations' security posture and instills a culture of continuous improvement and adaptability. As businesses navigate the complexities of the digital age, implementing CST positions them to confidently address the dynamic nature of cyber threats, ensuring resilience and securing their future in the cloud.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures.

Thirty years later, I still love being a software engineer. In fact, I've recently read Will Larson's "Staff Engineer: Leadership beyond the management track," which has further ignited my passion for solving complicated problems programmatically. Knowing that employers continue to accommodate the staff, principal, and distinguished job classifications provides a breath of fresh air for technologists who want to thrive as engineers. Unfortunately, with the good sometimes comes the not-so-good. For today's software engineer, the reality isn't quite so ideal, as Toil continues to find a way to disrupt productivity on a routine basis. One common example is deploying our artifacts, especially into production environments. It's time to place a higher priority on deployment automation.

The Traditional Deployment Lifecycle

The development lifecycle for a software engineer typically centers around three simple steps: develop, review, and merge. Building upon these steps, the following flowchart illustrates a traditional deployment lifecycle:

Figure 1. Traditional development lifecycle

In Figure 1, a software engineer introduces an update to the underlying source code. Once a merge request is created, the continuous integration (CI) tooling executes unit tests and performs static code analysis. If these steps complete successfully, a second software engineer performs a code review of the changes. If those changes are approved, the original software engineer merges the source code changes into the main branch.

At this point, the software engineer starts a deployment to the development environment (DEV), which is handled by the continuous delivery (CD) tooling. In this example, the release candidate is deployed to DEV and additional tests (like regression tests) are executed. If both steps pass, the software engineer initiates a deployment into the QA environment via the same CD tooling. Next, the software engineer creates a change ticket to release the source code update into the production environment (Prod). Once the approving manager approves the change ticket, the software engineer initiates a deployment into Prod, which instructs the CD tooling to perform the Prod deployment. Unfortunately, there are several points in the flow where human-based tasks are involved.

Time to Focus on Toil Elimination

Google Site Reliability Engineering's Eric Harvieux defined Toil as noted below:

"Toil is the kind of work that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows."

Software engineers should alter their mindset to become cognizant of the Toil in their roles and responsibilities. Once Toil has been acknowledged, tasks should be established to eliminate the items that do not foster productivity. Most Agile teams reserve 20% of sprint capacity for backlog tasks; Toil elimination is always a perfect candidate for such work.

In Figure 1, the following tasks were handled manually and should be viewed as Toil:

- Start DEV Deployment
- Start QA Deployment
- Create Change Ticket
- Manager Approve Change Ticket
- Start Prod Deployment

In order to drive toward next-gen deployment lifecycles, it is important to become Toil-free.
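As a taste of the automation discussed next, consider the "Create Change Ticket" item from the list above. It can be scripted against a ticketing system's REST API; in this sketch, the endpoint, payload fields, and environment variables are hypothetical placeholders rather than any specific vendor's API.

```python
import os

import requests

# Hypothetical ticketing endpoint and token, injected via pipeline configuration.
TICKET_API = os.environ.get("TICKET_API", "https://tickets.example.com/api/v2")


def create_change_ticket(service: str, version: str, pipeline_url: str) -> str:
    """Create a production change ticket from the pipeline instead of by hand."""
    payload = {
        "type": "standard_change",
        "summary": f"Deploy {service} {version} to Prod",
        "description": f"Automated request from CI/CD pipeline: {pipeline_url}",
        "status": "To Be Reviewed",
    }
    resp = requests.post(
        f"{TICKET_API}/changes",
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['TICKET_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # ticket ID recorded alongside the deployment
```

One such script per Toil item, wired into the pipeline, is often all the "elimination" amounts to.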
DevOps Lifecycle and Deployment Automation

While Toil elimination is an important aspect of next-gen deployment lifecycles, deployment automation via DevOps is equally important. Using DevOps pipelines, we can automate the deployment flow as noted below:

1. Create the release candidate image when the merge-to-main event is completed.
2. Automate the deployment to DEV when a new release candidate is created.
3. Continue to deploy to QA upon successful deployment to DEV.
4. Create the change ticket programmatically once the QA deployment is successful.

In implementing the automation noted above, three of the five human-based tasks are eliminated. To mitigate the remaining two tasks, the observability platform can be leveraged.

Service owners often rely on their observability platform to support and maintain applications running in production. By extending the coverage to include the lower environments (like DEV and QA), it is possible for DevOps pipelines to interact with metrics emitted during the deployment lifecycle using an open-source tool such as Ansible. This means that as the DevOps pipelines make changes to an environment, an Ansible Playbook can monitor a given set of metrics to determine whether the deployment is running as expected. If no anomalies or errors surface, the pipeline continues running; otherwise, the current task aborts and the prior state of the deployment is restored. (A minimal sketch of such a metrics gate appears at the end of this section.)

As a result, using a collection of metrics defined by the service owner and the observability platform, the need for manager approval is diminished. This is because the merge request approval is where the change was analyzed; the approving manager step often existed only because a better alternative did not. With the manager approval step replaced, the deployment to Prod can be triggered by the same DevOps pipeline. In taking this approach, the status of the change ticket can reflect the actual state of affairs as tasks are completed by the automation. Example statuses include Created, To Be Reviewed, Approved, Started, In Progress, and Completed (or Completed With Errors).
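Here is the promised sketch of such a metrics gate, written as a plain Python script for illustration. The observability endpoint, response shape, metric names, and thresholds are all hypothetical; in practice, as noted above, this logic might live in an Ansible Playbook task instead.

```python
import sys
import time

import requests

METRICS_URL = "https://observability.example.com/api/query"  # hypothetical observability API
THRESHOLDS = {"error_rate": 0.01, "p99_latency_ms": 500}     # service-owner-defined limits


def metrics_healthy() -> bool:
    """Return True only if every watched metric is within its threshold."""
    for metric, limit in THRESHOLDS.items():
        # Assumed response shape: {"value": <float>}
        value = requests.get(METRICS_URL, params={"metric": metric}, timeout=5).json()["value"]
        if value > limit:
            print(f"Gate failed: {metric}={value} exceeds {limit}")
            return False
    return True


# Watch the deployment for five minutes; exit non-zero so the pipeline
# aborts and restores the prior state if any metric goes out of bounds.
for _ in range(10):
    if not metrics_healthy():
        sys.exit(1)
    time.sleep(30)
print("Deployment looks healthy; pipeline continues.")
```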
Next-Gen Deployment Lifecycle

By eliminating Toil and introducing DevOps automation via pipelines, a next-gen deployment lifecycle can be created.

Figure 2. Next-gen deployment lifecycle

In Figure 2, the deployment lifecycle becomes much smaller and no longer requires the approving manager role. Instead, the observability platform is leveraged to monitor the DevOps pipelines. With the next-gen deployment lifecycle, the software engineer performs the merge-to-main step after the merge request has been approved. From this point forward, the remainder of the process is completely automated. If any errors occur during the CD pipeline steps, the pipeline stops and the prior state is restored. Compared to Figure 1, all of the existing Toil has been completely eliminated, and teams can adopt the mindset that a merge-to-main event is the entry point to the next production release. What's even more exciting is the improvement teams will see in their commit-to-deploy ratios when adopting this strategy.

Shattering Unjustified Blockers

When considering next-gen deployment lifecycles, three common objections are often raised:

1. We Need to Let the Business Know Before We Can Deploy

Software engineers should strive to enhance or update services in a manner where business-level approval is not a requirement. Feature flags and versioned URIs are examples of how automated releases can be achieved without impacting existing customers. However, it is always a great idea to communicate what features and fixes are planned, along with the expected time frames.

2. The Manager Should Know What Is About to Be Deployed

While this is a fair statement, the approving manager's knowledge of the update should be established during the sprint planning stage (or similar). Once a given set of work begins, the expectation is that the work will be completed and deployed during the given development iteration. Like software engineers, managers should adopt the mindset that merge-to-main ultimately results in a deployment to production.

3. At Least One Person Should Approve Changes Before They Are Pushed to Production

This is a valid statement, and it actually occurs during the merge request stage. In fact, the remaining approval in the next-gen deployment lifecycle is where it is for a very good reason: when one or more approvers review a merge request, they are in the best position, at the best point in time, to review and challenge the work being completed. Thereafter, it makes far better sense for the observability platform to monitor the DevOps pipelines for any unexpected issues.

Conclusion

The traditional development lifecycle often includes human-based approvals and an unacceptable amount of Toil. This Toil not only becomes a source of frustration but also impacts the productivity and mental health of the software engineer over time. Teams should make it a priority to eliminate Toil in their roles and responsibilities and drive toward next-gen development lifecycles by using DevOps pipelines and integrating with existing observability platforms. Taking this approach will allow teams to adopt a "merge-to-main equals deploy-to-Prod" mindset. In doing so, commit-to-deploy ratios will improve as a nice side effect.

Thirty years ago, I found my passion as a software engineer, and 30 years later, I still love being a software engineer. In fact, I am even more excited for the path ahead, free from human-based approvals thanks to DevOps automation and Toil elimination. Have a really great day!

Resources:

- "Staff Engineer: Leadership beyond the management track" by Will Larson, 2021
- "Identifying and Tracking Toil Using SRE Principles" by Eric Harvieux, 2020
- "Monitoring as code with Sensu + Ansible" by Jef Spaleta, 2021

This is an excerpt from DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. For more: Read the Report
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures.

Forbes estimates that cloud budgets will break all previous records, with businesses spending over $1 trillion on cloud computing infrastructure in 2024. Since most application releases depend on cloud infrastructure, having good continuous integration and continuous delivery (CI/CD) pipelines and end-to-end observability becomes essential for ensuring highly available systems. By integrating observability tools in CI/CD pipelines, organizations can increase deployment frequency, minimize risks, and build highly available systems. Complementing these practices is site reliability engineering (SRE), a discipline ensuring system reliability, performance, and scalability. This article will help you understand the key concepts of observability and how to integrate observability into CI/CD for creating highly available systems.

Observability and High Availability in SRE

Observability refers to offering real-time insights into application performance, whereas high availability means ensuring systems remain operational by minimizing downtime. Understanding how the system behaves, performs, and responds to various conditions is central to achieving high availability. Observability equips SRE teams with the necessary tools to gain insights into a system's performance.

Figure 1. Observability in the DevOps workflow

Components of Observability

Observability involves three essential components:

- Metrics – measurable data on various aspects of system performance and user experience
- Logs – detailed event information for post-incident reviews
- Traces – end-to-end visibility in complex architectures to help you understand requests across services

Together, they provide a comprehensive picture of the system's behavior, performance, and interactions. SRE teams can then analyze this observability data to make data-driven decisions and swiftly resolve issues to keep their systems highly available.

The Role of Observability in High Availability

Businesses have to ensure that their development and SRE teams are skilled at predicting and resolving system failures, unexpected traffic spikes, network issues, and software bugs to provide a smooth experience to their users. Observability is vital in assessing high availability by continuously monitoring metrics that are crucial for system health, such as latency, error rates, throughput, and saturation, thereby providing a real-time health check. Deviations from normal behavior trigger alerts, allowing SRE teams to proactively address potential issues before they impact availability.

How Observability Helps SRE Teams

Each observability component contributes unique insights into different facets of system performance. These components empower SRE teams to proactively monitor, diagnose, and optimize system behavior. Some use cases of metrics, logs, and traces for SRE teams are post-incident reviews, identification of system weaknesses, capacity planning, and performance optimization.

Post-Incident Reviews

Observability tools allow SRE teams to look at past data to analyze and understand system behavior during incidents, anomalies, or outages. Detailed logs, metrics, and traces provide a timeline of events that helps identify the root causes of issues.
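Before moving to the remaining use cases, it helps to put a number on "availability." Here is a minimal sketch of the arithmetic behind the alerts described above; the request counts and the 99.9% target are illustrative values, not figures from any particular system.

```python
from dataclasses import dataclass


@dataclass
class WindowStats:
    """Request outcomes aggregated over one observation window."""
    total_requests: int
    failed_requests: int


def availability(stats: WindowStats) -> float:
    """Fraction of successful requests in the window."""
    if stats.total_requests == 0:
        return 1.0
    return 1 - stats.failed_requests / stats.total_requests


# Illustrative check: alert when measured availability drops below a 99.9% target.
TARGET = 0.999
window = WindowStats(total_requests=1_200_000, failed_requests=1_800)

measured = availability(window)  # 1 - 1800/1200000 = 0.9985
if measured < TARGET:
    print(f"ALERT: availability {measured:.4%} is below target {TARGET:.1%}")
```

The same arithmetic underlies the service-level objectives discussed later: the target is a percentage, and the window defines the period over which it is measured.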
Identification of System Weaknesses

Observability data aids in pinpointing system weaknesses by providing insights into how the system behaves under various conditions. By analyzing metrics, logs, and traces, SRE teams can identify patterns or anomalies that may indicate vulnerabilities, performance bottlenecks, or areas prone to failure.

Capacity Planning and Performance Optimization

By collecting and analyzing metrics related to resource utilization, response times, and system throughput, SRE teams can make informed decisions about capacity requirements. This proactive approach ensures that systems are adequately scaled to handle expected workloads and that their performance is optimized to meet user demands. In short, resources can easily be scaled down during non-peak hours or scaled up when demand surges.

SRE Best Practices for Reliability

At their core, SRE practices aim to create scalable and highly reliable software systems using two key principles that guide SRE teams: SRE golden signals and service-level objectives (SLOs).

Understanding SRE Golden Signals

The SRE golden signals are a set of critical metrics that provide a holistic view of a system's health and performance. The four primary golden signals are:

- Latency – Time taken for a system to respond to a request. High latency negatively impacts user experience.
- Traffic – Volume of requests a system is handling. Monitoring traffic helps anticipate and respond to changing demands.
- Errors – Elevated error rates can indicate software bugs, infrastructure problems, or other issues that may impact reliability.
- Saturation – Utilization of system resources such as CPU, memory, or disk. It helps identify potential bottlenecks and ensures the system has sufficient resources to handle the load.

Setting Effective SLOs

SLOs define the target levels of reliability or performance that a service aims to achieve. They are typically expressed as a percentage over a specific time period. SRE teams use SLOs to set clear expectations for a system's behavior, availability, and reliability. They continuously monitor the SRE golden signals to assess whether the system meets its SLOs. If the system falls below the defined SLOs, this triggers a reassessment of the service's architecture, capacity, or other aspects to improve availability. Businesses can use observability tools to set up alerts based on predetermined thresholds for key metrics.

Defining Mitigation Strategies

Automating repetitive tasks, such as configuration management, deployments, and scaling, reduces the risk of human error and improves system reliability. Introducing redundancy in critical components ensures that a failure in one area doesn't lead to a system-wide outage. This could involve redundant servers, data centers, or even cloud providers. Additionally, implementing rollback mechanisms for deployments allows SRE teams to quickly revert to a stable state in the event of issues introduced by new releases.

CI/CD Pipelines for Zero Downtime

Achieving zero downtime through effective CI/CD pipelines enables services to provide users with continuous access to the latest release. Let's look at some of the key strategies employed to ensure zero downtime.

Strategies for Designing Pipelines to Ensure Zero Downtime

Some strategies for minimizing disruptions and maximizing user experience include blue-green deployments, canary releases, and feature toggles. Let's look at them in more detail.
Figure 2. Strategies for designing pipelines to ensure zero downtime

Blue-Green Deployments

Blue-green deployments involve maintaining two identical environments (blue and green), where only one actively serves production traffic at a time. When deploying updates, traffic is seamlessly switched from the current (blue) environment to the new (green) one. This approach ensures minimal downtime, as the transition is instantaneous, and it allows a quick rollback in case issues arise.

Canary Releases

Canary releases involve deploying updates to a small subset of users before rolling them out to everyone. This gradual and controlled approach allows teams to monitor for potential issues in a real-world environment with reduced impact. If the canary group experiences no significant issues, the deployment is released to a wider audience.

Feature Toggles

Feature toggles, or feature flags, enable developers to control the visibility of new features in production independently of other features. By toggling features on or off, teams can release code to production but activate or deactivate specific functionality dynamically without deploying new code. This approach provides flexibility, allowing features to be gradually rolled out or rolled back without redeploying the entire application.

Best Practices in CI/CD for Ensuring High Availability

Successfully implementing CI/CD pipelines for high availability often requires a good deal of consideration and plenty of trial and error. While there are many possible implementations, adhering to best practices can help you avoid common problems and improve your pipeline faster. Some industry best practices you can implement in your CI/CD pipeline to ensure zero downtime are automated testing, artifact versioning, and Infrastructure as Code (IaC).

Automated Testing

You can use comprehensive test suites, including unit tests, integration tests, and end-to-end tests, to identify potential issues early in the development process. Automated testing during integration provides confidence in the reliability of code changes, reducing the likelihood of introducing critical bugs during deployments.

Artifact Versioning

By assigning unique versions to artifacts, such as compiled binaries or deployable packages, teams can systematically track changes over time. This practice enables precise identification of specific code iterations, simplifying debugging, troubleshooting, and rollback. Versioning artifacts ensures traceability and facilitates rollback to previous versions in case of issues during deployment.

Infrastructure as Code

Utilize Infrastructure as Code to define and manage infrastructure configurations, using tools such as OpenTofu, Ansible, Pulumi, or Terraform. IaC ensures consistency between development, testing, and production environments, reducing the risk of deployment-related issues.

Integrating Observability Into CI/CD Pipelines

Observing key metrics such as build success rates, deployment durations, and resource utilization during CI/CD provides visibility into the health and efficiency of the CI/CD pipeline. Observability can be implemented during continuous integration (CI) and continuous deployment (CD) as well as post-deployment.

Observability in Continuous Integration

Observability tools capture key metrics during the CI process, such as build success rates, test coverage, and code quality. These metrics provide immediate feedback on the health of the codebase.
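One lightweight way to capture such CI metrics is to have the pipeline push them at the end of each run. The sketch below targets a Prometheus Pushgateway using its plain-text exposition format; the gateway address, job name, and metric names are illustrative assumptions, not a prescribed setup.

```python
import time

import requests

PUSHGATEWAY = "http://pushgateway.example.com:9091"  # assumed Prometheus Pushgateway


def report_build(job: str, succeeded: bool, duration_s: float, coverage: float) -> None:
    """Push simple build metrics at the end of a CI run (Pushgateway text format)."""
    body = (
        f"ci_build_success {1 if succeeded else 0}\n"
        f"ci_build_duration_seconds {duration_s}\n"
        f"ci_test_coverage_ratio {coverage}\n"
    )
    requests.post(
        f"{PUSHGATEWAY}/metrics/job/{job}", data=body, timeout=5
    ).raise_for_status()


start = time.time()
# ... build and test steps would run here ...
report_build("checkout-service", succeeded=True,
             duration_s=time.time() - start, coverage=0.87)
```

With each run reporting the same metric names, build health trends become queryable like any other service metric.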
Logging enables the recording of events and activities during the CI process, and logs help developers and CI/CD administrators troubleshoot issues and understand the execution flow. Tracing tools provide insights into the execution path of CI tasks, allowing teams to identify bottlenecks or areas for optimization.

Observability in Continuous Deployment

Observability platforms monitor the CD pipeline in real time, tracking deployment success rates, deployment durations, and resource utilization. Observability tools integrate with deployment tools to capture data before, during, and after deployment. Alerts based on predefined thresholds or anomalies in CD metrics notify teams of potential issues, enabling quick intervention and minimizing the risk of deploying faulty code.

Post-Deployment Observability

Application performance monitoring tools provide insights into the performance of deployed applications, including response times, error rates, and transaction traces. This information is crucial for identifying and resolving issues introduced during and after deployment. Observability platforms with error-tracking capabilities help pinpoint and prioritize software bugs or issues arising from the deployed code. Aggregating logs from post-deployment environments allows for a comprehensive view of system behavior and facilitates troubleshooting and debugging.

Conclusion

The symbiotic relationship between observability and high availability is integral to meeting the demands of agile, user-centric development environments. With real-time monitoring, alerting, and post-deployment insights, observability plays a major role in achieving and maintaining high availability. Cloud providers are now leveraging drag-and-drop interfaces and natural language tools to eliminate the need for advanced technical skills in the deployment and management of cloud infrastructure. Hence, it is easier than ever to create highly available systems by combining the powers of CI/CD and observability.

Resources:

- Continuous Integration Patterns and Anti-Patterns by Nicolas Giron and Hicham Bouissoumer, DZone Refcard
- Continuous Delivery Patterns and Anti-Patterns by Nicolas Giron and Hicham Bouissoumer, DZone Refcard
- "The 10 Biggest Cloud Computing Trends In 2024 Everyone Must Be Ready For Now" by Bernard Marr, Forbes

This is an excerpt from DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. For more: Read the Report
In today's fast-paced digital landscape, DevOps has emerged as a critical methodology for organizations looking to streamline their software development and delivery processes. At the heart of DevOps lies the concept of collaboration between development and operations teams, enabled by a set of practices and tools aimed at automating and improving the efficiency of the software delivery lifecycle. One of the key enablers of DevOps practices is platform engineering. Platform engineers are responsible for designing, building, and maintaining the infrastructure and tools that support the development, deployment, and operation of software applications. In essence, they provide the foundation upon which DevOps practices can thrive.

The Foundations of Platform Engineering

Platform Engineering in the Context of DevOps

Platform engineering in the context of DevOps encompasses the practice of designing, building, and maintaining the underlying infrastructure, tools, and services that facilitate efficient software development processes. Platform engineers focus on creating a robust platform that provides developers with the necessary tools, services, and environments to streamline the software development lifecycle. Below are the key aspects, responsibilities, and objectives of platform engineering in DevOps:

- Infrastructure management and Infrastructure as Code (IaC): Designing, building, and maintaining the infrastructure that supports software development, testing, and deployment; implementing Infrastructure as Code practices to manage infrastructure using code, enabling automated provisioning and management of resources
- Automation: Automating repetitive tasks such as builds, tests, deployments, and infrastructure provisioning to increase efficiency and reduce errors
- Tooling selection and management: Selecting, configuring, and managing the tools and technologies used throughout the software development lifecycle, including version control systems, CI/CD pipelines, and monitoring tools
- Containerization and orchestration: Utilizing containerization technologies like Docker and orchestration tools such as Kubernetes to create scalable and portable environments for applications
- Continuous Integration and Continuous Deployment (CI/CD) pipelines: Designing, implementing, and maintaining CI/CD pipelines to automate the build, test, and deployment processes, enabling rapid and reliable software delivery
- Observability: Implementing monitoring and logging solutions to track the performance, health, and behavior of applications and infrastructure, enabling quick detection and resolution of issues
- Security and compliance: Ensuring that the platform adheres to security best practices and complies with relevant regulations and standards, such as GDPR or PCI DSS
- Scalability and resilience: Designing the platform to be scalable and resilient, capable of handling increasing loads and recovering from failures gracefully
- Collaboration and communication: Facilitating collaboration between development, operations, and other teams to streamline workflows and improve communication for enhanced productivity

Overall, the primary objective of platform engineering is to establish and maintain a comprehensive platform that empowers development teams to deliver high-quality software efficiently. This involves ensuring the platform's security, scalability, compliance, and reliability while leveraging automation and modern tooling to optimize the software development lifecycle within a DevOps framework.
Stand-Alone DevOps vs Platform-Enabled DevOps Model

| Characteristic | Stand-Alone DevOps Model | Platform-Enabled DevOps Model |
|---|---|---|
| Infrastructure management | Each team manages its own infrastructure independently. | Infrastructure is managed centrally and shared across teams. |
| Tooling | Teams select and manage their own tools and technologies. | Common tools and technologies are provided and managed centrally. |
| Standardization | Limited standardization across teams, leading to variation | Standardization of tools, processes, and environments |
| Collaboration | Teams work independently, with limited collaboration. | Encourages collaboration and sharing of best practices |
| Scalability | Limited scalability due to disparate and manual processes | Easier scalability through shared and automated processes |
| Efficiency | May lead to inefficiencies due to duplication of efforts | Promotes efficiency through shared resources and automation |
| Flexibility | More flexibility in tool and process selection for each team | Requires adherence to standardized processes and tools |
| Management overhead | Higher management overhead due to disparate processes | Lower management overhead with centralized management |
| Learning curve | Each team must learn and manage their chosen tools. | Teams can focus more on application development and less on tool management. |
| Costs | Costs may be higher due to duplication of infrastructure. | Costs can be optimized through shared infrastructure and tools. |

These characteristics highlight the differences between the Stand-Alone DevOps model, where teams operate independently, and the Platform-Enabled DevOps model, where a centralized platform provides tools and infrastructure for teams to collaborate and work more efficiently.

Infrastructure as Code (IaC)

Infrastructure as Code (IaC) plays a crucial role in platform engineering and DevOps by providing a systematic approach to managing and provisioning infrastructure. It allows teams to define their infrastructure using code, which can be version-controlled, tested, and deployed in a repeatable and automated manner. This ensures consistency across environments, reduces the risk of configuration errors, and increases the speed and reliability of infrastructure deployment. IaC also promotes collaboration between development, operations, and other teams by enabling them to work together on infrastructure configurations. By treating infrastructure as code, organizations can achieve greater efficiency, scalability, and agility in their infrastructure management practices, ultimately leading to improved software delivery and operational excellence.
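To make the idea concrete, here is a minimal sketch of IaC using Pulumi's Python SDK; the resource name and tags are illustrative, and the same pattern applies equally to Terraform or OpenTofu configurations.

```python
"""Minimal Pulumi program: the bucket below is declared in code, so it can be
code-reviewed, versioned, and recreated identically in every environment."""
import pulumi
import pulumi_aws as aws

# Declarative resource definition; `pulumi up` computes and applies the diff
# between this desired state and what actually exists in the cloud account.
artifact_bucket = aws.s3.Bucket(
    "build-artifacts",  # illustrative resource name
    acl="private",
    tags={"team": "platform", "environment": pulumi.get_stack()},
)

# Exported outputs can feed other stacks or CI/CD steps.
pulumi.export("artifact_bucket_name", artifact_bucket.id)
```

Because the definition is ordinary code in version control, a misconfigured environment can be diffed against the declared state instead of debugged by hand.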
Automation and Orchestration

Automation and orchestration are foundational to DevOps, providing the framework and tools to streamline and optimize the software development lifecycle. Automation eliminates manual, repetitive tasks, reducing errors and increasing efficiency. It accelerates the delivery of software by automating build, test, and deployment processes, enabling organizations to release new features and updates quickly and reliably. Orchestration complements automation by coordinating and managing complex workflows across different teams and technologies. Together, automation and orchestration improve collaboration, scalability, and reliability, ultimately helping organizations deliver better software faster and more efficiently.

Platform engineers perform various automation and orchestration tasks to streamline the software development lifecycle and manage infrastructure efficiently. Here are some examples:

- Infrastructure provisioning: Using tools like Terraform or AWS CloudFormation, platform engineers automate the provisioning of infrastructure components such as virtual machines, networks, and storage.
- Configuration management: Tools like Ansible, Chef, or Puppet are used to automate the configuration of servers and applications, ensuring consistency and reducing manual effort.
- Continuous Integration/Continuous Deployment (CI/CD): Platform engineers design and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI to automate the build, test, and deployment processes.
- Container orchestration: Platform engineers use Kubernetes, Docker Swarm, or similar tools to orchestrate the deployment and management of containers, ensuring scalability and high availability.
- Monitoring and alerting: Automation is used to set up monitoring and alerting systems such as Prometheus, Grafana, or the ELK stack to track the health and performance of infrastructure and applications.
- Scaling and auto-scaling: Platform engineers automate the scaling of infrastructure based on demand using tools provided by cloud providers or custom scripts.
- Backup and disaster recovery: Automation is used to set up and manage backup and disaster recovery processes, ensuring data integrity and availability.
- Security automation: Platform engineers automate security tasks such as vulnerability scanning, patch management, and access control to enhance the security posture of the infrastructure.
- Compliance automation: Tools are used to automate compliance checks and audits to ensure that infrastructure and applications comply with regulatory requirements and internal policies.
- Self-service portals: Platform engineers create self-service portals or APIs that allow developers to provision resources and deploy applications without manual intervention.

These examples illustrate how platform engineers leverage automation and orchestration to improve efficiency, reliability, and scalability in managing infrastructure and supporting the software development lifecycle.

Implementing and Managing Containers and Microservices

Platform engineers play a crucial role in implementing and managing containers and microservices, which are key components of modern, cloud-native applications. They are responsible for designing the infrastructure and systems that support containerized environments, including selecting the appropriate container orchestration platform (such as Kubernetes) and ensuring its proper configuration and scalability. Platform engineers also work closely with development teams to define best practices for containerization and microservices architecture, including image management, networking, and service discovery. They are responsible for monitoring the health and performance of containerized applications, implementing automated scaling and recovery mechanisms, and ensuring that containers and microservices are deployed securely and in compliance with organizational standards.
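As a small example of the monitoring side of this work, the official Kubernetes Python client can check that every deployment in a namespace has the replicas it asked for. The namespace and the alerting hook are illustrative assumptions; only the client calls themselves are the library's real API.

```python
from kubernetes import client, config


def unhealthy_deployments(namespace: str = "production") -> list[str]:
    """Return deployments whose ready replica count lags their spec."""
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()
    lagging = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            lagging.append(f"{dep.metadata.name}: {ready}/{desired} ready")
    return lagging


for line in unhealthy_deployments():
    print("WARN", line)  # in practice this would feed the team's alerting system
```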
Implementing and Managing Containers and Microservices

Platform engineers play a crucial role in implementing and managing containers and microservices, key components of modern, cloud-native applications. They design the infrastructure and systems that support containerized environments, including selecting an appropriate container orchestration platform (such as Kubernetes) and ensuring its proper configuration and scalability. Platform engineers also work closely with development teams to define best practices for containerization and microservices architecture, including image management, networking, and service discovery. They monitor the health and performance of containerized applications, implement automated scaling and recovery mechanisms, and ensure that containers and microservices are deployed securely and in compliance with organizational standards.

Managing CI/CD Pipelines

Platform engineers design and manage CI/CD pipelines to automate the build, test, and deployment processes, enabling teams to deliver software quickly and reliably. Here's how they typically do it (a condensed sketch follows the list):

- Pipeline design: Platform engineers design CI/CD pipelines to meet the specific needs of the organization, defining the stages of the pipeline (such as build, test, and deploy) and the tools and technologies used at each stage.
- Integration with version control: They integrate the pipeline with version control systems (such as Git) to trigger automated builds and deployments on code changes.
- Build automation: Software artifacts (such as executables or container images) are built automatically with tools like Jenkins, GitLab CI/CD, or CircleCI.
- Testing automation: Tests (unit, integration, and performance) run automatically to ensure that code changes meet quality standards before deployment.
- Deployment automation: Artifacts are deployed automatically to environments such as development, staging, and production using tools like Kubernetes, Docker, or AWS CodeDeploy.
- Monitoring and feedback: Monitoring and logging tools integrated into the pipeline provide feedback on the health and performance of deployed applications, enabling teams to detect and respond to issues quickly.
- Security and compliance: The pipeline itself adheres to security and compliance requirements, such as scanning dependencies for vulnerabilities and enforcing access controls.
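Real pipelines are usually declared in a CI system such as Jenkins, GitLab CI/CD, or CircleCI; the stand-alone script below is only a conceptual sketch of the staged, stop-on-failure flow those systems automate. All commands, image names, and deployment targets are hypothetical.

```python
"""Conceptual sketch of a fail-fast CI/CD pipeline driver.

Each stage is a shell command; a failing stage stops the pipeline so that
later stages never run against a broken build. Names are hypothetical.
"""
import subprocess
import sys

# Ordered stages; each entry is a (name, command) pair.
STAGES = [
    ("build",  ["docker", "build", "-t", "registry.example.com/app:ci", "."]),
    ("test",   ["pytest", "--maxfail=1", "tests/"]),
    ("push",   ["docker", "push", "registry.example.com/app:ci"]),
    ("deploy", ["kubectl", "rollout", "restart", "deployment/app"]),
]

def main() -> None:
    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        if subprocess.run(command).returncode != 0:
            # Fail fast: abort the pipeline on the first broken stage.
            sys.exit(f"stage '{name}' failed; pipeline stopped")
    print("pipeline succeeded")

if __name__ == "__main__":
    main()
```

Dedicated CI systems add what this sketch omits: triggers from version control, per-stage logs and artifacts, parallel execution, and access controls.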
Monitoring and Logging Solutions

Platform engineers implement and maintain monitoring and logging solutions to ensure the health, performance, and security of applications and infrastructure. They select and configure monitoring tools (such as Prometheus, Grafana, or the ELK Stack) to collect and visualize metrics, logs, and traces from various sources. They set up alerting mechanisms that notify teams about issues or anomalies in real time, enabling a quick response. They also design and maintain logging solutions that centralize logs from different services and applications, making it easier to troubleshoot issues and analyze trends. Platform engineers continuously tune monitoring and logging configurations to improve performance, reduce noise, and ensure compliance with organizational policies and standards.

Role of Platform Engineers in Ensuring Security and Compliance

Platform engineers play a vital role in ensuring security and compliance in a DevOps environment by implementing and maintaining robust security practices and controls. They design secure infrastructure and environments, implement security best practices, and ensure compliance with regulatory requirements and industry standards. They configure and manage security tools and technologies, such as firewalls, intrusion detection systems, and vulnerability scanners, to protect against threats and vulnerabilities. They also work closely with development and operations teams to integrate security into the software development lifecycle through secure coding practices, regular security testing, and security training. Additionally, platform engineers monitor and audit infrastructure and applications for compliance with internal security policies and external regulations, taking proactive measures to address any issues that arise.

Conclusion

Platform engineering is a cornerstone of DevOps, providing the foundation on which organizations can build efficient, scalable, and reliable software delivery pipelines. By designing and managing the infrastructure, tools, and processes that enable DevOps practices, platform engineers free development teams to focus on delivering value to customers quickly and efficiently. Through automation, standardization, and collaboration, platform engineering drives continuous improvement and innovation, helping organizations stay competitive in today's fast-paced digital landscape. As organizations continue to embrace DevOps principles, the role of platform engineering will only become more critical, ensuring that they can adapt and thrive in an ever-changing environment.