Back in September 2020, I was researching open-source architectures, looking at several customer solutions from my employer at the time and developing a generic view of these solutions for certain use cases. One of those use cases is financial payments. In 2020, I kicked off a series covering this architecture with the article Payments Architecture - An Introduction. The series consisted of six articles and covered architectural diagrams from logical and schematic views to detailed views of the various use cases we uncovered. The architectures presented were based on open-source cloud-native technologies, such as containers, microservices, and a Kubernetes-based container platform. The major omission in that series was that it avoided discussing any aspect of cloud-native observability. This series fixes that omission with an open-source, standards-based, cloud-native observability platform that helps DevOps teams control the speed, scale, and complexity of a cloud-native world for their financial payments architecture.

The Baseline Architecture

Let's review the use case before we dive into the architectural details. For a bit of background, we'll review what the base open-source generic architecture focused on for the financial payments use case. Cloud technology is changing the way payment services are architected, and this series builds on the original baseline solution that was used to modernize payment services. Note that you can find this and other open-source architecture solutions in their repository; feel free to browse them at your leisure. The rest of this article introduces cloud-native observability for your payments architecture. These projects provide you with a way to create a cloud-native payments architecture that's proven to work in multiple customer cloud environments, with this article focusing on the addition of cloud-native observability.
Now let's look at the use case definition and lay the groundwork for diving into how you can add cloud-native observability to your architecture.

Defining Payments

To start off our story, the following statement was developed to help guide our architecture focus for this financial payments use case: Financial institutions enable customers with fast, easy-to-use, and safe payment services available anytime, anywhere. With this guiding principle, the baseline architecture was developed to help everyone be successful in providing their customers with a robust payment experience. We continue to expand on this baseline, adding a robust cloud-native observability platform that provides the control, visibility, speed, and scale that financial service providers are looking for. All diagrams and components used to expand the architecture with cloud-native observability conform to the original design guidelines and leverage the same diagram tooling. We'll start by revisiting the original logical diagram and sharing insights into the newer components related to the cloud-native observability architecture. You'll discover the technologies used to collect and store both metrics and tracing data through the use of a collector and the Chronosphere platform. This is followed by specific examples worked out in schematic diagrams (physical architecture) that explore a few specific financial payments use cases and provide you with guides for mapping cloud-native observability components onto your own existing architectures. You'll see both networked connections and data flow examples worked out to help you understand the generic views being provided. Next, let's quickly cover how you can make use of the content in this financial payments project, both by downloading images of the architectures and by opening the diagrams in the open-source tooling for adjustment to your own needs.
Using the Payments Project

The architecture collection provides insights into all manner of use cases and industries researched between 2019 and 2022. Each architecture provides a collection of images for each diagram element, as well as for the entire project as a whole, for you to make use of as you see fit. If we look at the financial payments project, you'll see a table of contents allowing you to jump directly to the topic or use case that interests you the most. You can also just scroll down through each section and explore at your leisure. Each section provides a short description covering what you see, why it's important to the specific payments topic listed, and a walkthrough of the diagram(s) presented. You can download any of the images to use in any way you like. At the bottom of the page, you will find a section titled Download Diagrams. If you click on the Open Diagrams link, it will open all the available logical, schematic, and detailed diagrams in the diagram tooling we used to create them. This makes them readily available for any modifications you see fit to make for your own architectures, so feel free to use and modify! Finally, there is a free online beginner's guide workshop available focused on using the diagram tooling; please explore it to learn tips and tricks from the experts.

Series Overview

The following is an overview of this o11y architecture series on adding cloud-native observability to your financial payments architecture:

- Financial payments introduction
- Financial payments' logical observability elements
- Adding observability to immediate payments example
- Adding observability to financial calculations example

Catch up on any articles you missed by following one of the links above as the series progresses. Next in this series, explore the cloud-native observability elements needed for any financial payments processing architecture.
Test-driven development has gained popularity among developers because it gives them instant feedback and can identify defects and problems early. Once the application is developed, it's also important to run automated tests during continuous integration (CI) to cover all possible scenarios before it gets built and deployed, catching defects and issues early. Apache Kafka® provides a distributed, fault-tolerant streaming system that allows applications to communicate with each other asynchronously. Whether you are building microservices or data pipelines, it allows applications to be more loosely coupled for better scalability and flexibility. But at the same time, it also introduces a lot more complexity to the environment. This post will cover how to solve the complexity problem by introducing Quarkus, Redpanda, and Testcontainers in a demo that showcases all the components in action.

How Redpanda, Quarkus, and Testcontainers connect for TDD and CI

Test-Driven Development

TDD is an iterative process that involves writing a test that specifies the desired behavior of the code and then testing your code during and after development. This cycle is repeated continuously to produce clean and reliable code.

Quarkus

Quarkus and JUnit 5 form your testing framework. Simply add the dependencies and define your test case with the @QuarkusTest and @Test annotations. Then code with Quarkus in dev mode, which allows you to make changes to the code and see the results immediately, without having to restart the application or rebuild the container. This significantly improves productivity and enables continuous testing with the press of a button. Run mvn quarkus:dev to start dev mode. You will be prompted with the following after dev mode starts:

```
-- Tests paused, press [r] to resume, [h] for more options>
```

After making changes to your application while in this mode, test it by typing r to rerun the entire test set. You will get instant test feedback.
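As a sketch of what such a test case can look like, here is the canonical shape of a Quarkus HTTP endpoint test; the `/hello` path and expected body are illustrative, not taken from this article's demo:

```java
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

// @QuarkusTest boots the application for the duration of the test run,
// so the @Test method below exercises the real endpoint.
@QuarkusTest
public class GreetingResourceTest {

    @Test
    public void testHelloEndpoint() {
        // In TDD, this assertion is written first and fails ("red")
        // until the endpoint is implemented ("green").
        given()
          .when().get("/hello")
          .then()
             .statusCode(200)
             .body(is("hello"));
    }
}
```

Running `mvn quarkus:dev` keeps this test on a hot loop: press r after each code change to rerun it.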
```
2023-04-17 10:15:40,311 INFO  [io.qua.test] (Test runner thread) All tests are now passing
-- Press [r] to resume testing, [:] for the terminal, [h] for more options>
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  10.343 s
[INFO] Finished at: 2023-04-17T10:15:43-04:00
[INFO] ------------------------------------------------------------------------
```

Set Up the Development Environment

Setting up the environment for development is not difficult, but it does require some patience and work to get all the components in place. For example, getting Kafka running as the streaming platform requires knowing how to configure Apache ZooKeeper™ and the brokers, and constantly recreating topics for a clean state, along with any other databases involved.

Testcontainers

Testcontainers allows you to define the testing environment in code, ensuring that the environment is consistent and repeatable across multiple runs. Testcontainers works everywhere with access to a Docker environment, both on your CI platform and locally. Even if you don't manage a local Docker installation, Testcontainers Cloud allows you to run containers during Testcontainers tests, giving you plenty of resources and simplifying parallelization. There is a wide range of container images that can be used. Testcontainers creates and manages the containerized testing environments: it automatically pulls and starts the configured container services and tears down the environment afterward. If you have multiple tests running, the resources needed will eventually add up and use more memory. Even worse, it'll take more time to get things ready. A bulky streaming system is not ideal.
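To illustrate "environment defined in code," here is a minimal Testcontainers sketch; the Redis image and smoke-test shape are my own example, not part of this article's demo, and it needs a local Docker daemon to run:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class EnvironmentInCode {
    public static void main(String[] args) {
        // Declare the environment in code: Testcontainers pulls and starts
        // the image, then maps the exposed port to a free host port.
        try (GenericContainer<?> redis =
                 new GenericContainer<>(DockerImageName.parse("redis:7-alpine"))
                     .withExposedPorts(6379)) {
            redis.start();
            // Hand the mapped address to the code under test.
            System.out.println(redis.getHost() + ":" + redis.getMappedPort(6379));
        } // try-with-resources stops and removes the container automatically.
    }
}
```

The same declaration runs unchanged on a laptop or a CI runner, which is what makes the environment repeatable.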
Redpanda

As a streaming data platform for developers, this simple all-in-one binary (broker, schema registry, and HTTP proxy) with extreme performance and a super lightweight footprint has won over many Kafka developers' hearts. In fact, Quarkus uses Redpanda by default as the streaming platform for its dev services. Using Redpanda will save you both time and resources when running all your tests. Using Testcontainers and Redpanda, you can define a fast, lightweight streaming service container in which an instance of the Redpanda broker will be created and run for testing. Simply add the following line of code:

```java
RedpandaContainer redpanda = new RedpandaContainer("docker.redpanda.com/redpandadata/redpanda:latest");
```

Sometimes you want to share a Redpanda broker instance between multiple tests, so it's best to let QuarkusTestResourceLifecycleManager handle it. It will ask Testcontainers to start the broker before the tests and destroy the containers once all tests are done.

Automate Testing in Continuous Integration

Tests should always run automatically during the CI process to ensure that code changes do not break existing functionality or introduce new bugs. But testing Kafka applications can be challenging, since most CI runs on separate platforms such as Jenkins, CircleCI, GitLab, or GitHub, and it isn't easy to manage and integrate a Kafka cluster in those environments. This problem can be easily solved with Testcontainers spinning up lightweight, fast containerized Redpanda brokers for testing. It uses the same codebase and configuration you set while testing locally; you would not need to change a line of code before pushing your changes for CI. Note that GitHub Actions and many CI platforms refresh the container images for every run by default, so the size of the image will impact the time it takes to run the pipelines. At the time of writing this blog, Redpanda has a size of 129 MB, compared to others at 414 MB.
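A sketch of wiring a plain Kafka client to the Testcontainers-managed broker might look like the following; the topic name and record payload are illustrative, and the snippet needs Docker to run. `RedpandaContainer` exposes the broker address via `getBootstrapServers()`:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.testcontainers.redpanda.RedpandaContainer;

public class RedpandaProducerSketch {
    public static void main(String[] args) {
        try (RedpandaContainer redpanda =
                 new RedpandaContainer("docker.redpanda.com/redpandadata/redpanda:latest")) {
            redpanda.start();

            // Point an ordinary Kafka producer at the containerized broker.
            Properties props = new Properties();
            props.put("bootstrap.servers", redpanda.getBootstrapServers());
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical customer record for the "customer" topic.
                producer.send(new ProducerRecord<>("customer", "id-1",
                        "{\"name\":\"Ann\",\"age\":17}"));
            }
        } // Broker is torn down when the container closes.
    }
}
```

Because Redpanda speaks the Kafka protocol, the producer code is unchanged from what you would write against a full Kafka cluster.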
As part of the test, just add the following steps to your CI (we're using GitHub Actions in this case):

```yaml
- name: Build
  run: mvn package
- name: Test
  run: mvn test
```

The Demo

This is a simple application that instantly identifies the age of customers as their data comes in. All customer information is streamed into the "customer" topic, and the application filters the underage customers into a special topic, "underage". This demo was created with the TDD approach:

1. Define the desired behavior in a test
2. Use Testcontainers to configure Redpanda
3. Write the application and run the test to verify it has the expected result

Once that's done, both the test and application code are committed to the repo, and the continuous integration pipeline reruns the test to make sure the code changes do not break existing functionality or introduce new bugs. To set up Redpanda in your test folder, create a new class that implements QuarkusTestResourceLifecycleManager; this manager handles the lifecycle of the services that are needed for testing. The following configuration also makes for a better testing experience: by default, Redpanda allocates a fixed number of CPU cores and a fixed amount of memory, which is ideal for production, but for development and CI environments with limited hardware capacity, Testcontainers runs the Redpanda broker in development mode, where the memory and CPU limits are not enforced and fsync() is turned off for even faster streaming.

```java
private static DockerImageName REDPANDA_IMAGE = DockerImageName
    .parse("docker.redpanda.com/redpandadata/redpanda:latest");
```

Lastly, create a test. In Quarkus, make sure to attach the QuarkusTestResourceLifecycleManager to the test by adding the annotation to the test class.

```java
@QuarkusTest
@QuarkusTestResource(TestResource.class)
public class TestAgeRestriction {...}
```

Here's a quick video showing how TDD works with Quarkus, Redpanda, and Testcontainers. You can find all the code in this GitHub repository.
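The filtering rule at the heart of the demo is simple enough to sketch in plain Java. The age threshold of 18 and the class and method names are my assumptions for illustration; the demo's actual code lives in the linked repository:

```java
import java.util.List;
import java.util.stream.Collectors;

public class AgeRestriction {
    // Assumed cutoff for this sketch; the real demo's threshold may differ.
    static final int ADULT_AGE = 18;

    // The predicate the stream processor would apply to each customer record.
    public static boolean isUnderage(int age) {
        return age < ADULT_AGE;
    }

    // Route ages destined for the hypothetical "underage" topic: keep minors only.
    public static List<Integer> underageOnly(List<Integer> ages) {
        return ages.stream()
                   .filter(AgeRestriction::isUnderage)
                   .collect(Collectors.toList());
    }
}
```

Writing the expected behavior of `isUnderage` as a failing test first is exactly the step 1 of the TDD loop described above.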
Summary

TDD requires you to write the tests before coding. Running tests can be challenging for Kafka developers, but Quarkus, Redpanda, and Testcontainers make continuous testing possible. Developers no longer need to set up separate Kafka instances for development and CI. Testcontainers will initiate independent, fast, and lightweight Redpanda brokers in a container, a fraction of the size of a comparable Kafka image. The same unit tests can be reused in the CI pipeline without reconfiguration, eliminating the need to set up separate brokers just for testing. To get started with Redpanda, check the documentation, and if you have any questions, please comment below.
This is a question that I hear on a fairly regular basis, not just internally but from external customers as well. So it's one that I would like to help you walk through so that you can really figure out what makes sense in your organization, and I think the answer is probably going to surprise you a little bit. Probably the most important thing to understand is that this isn't a versus question. You don't have to have one or the other. As a matter of fact, I would argue, and I think that many people would agree, that SRE is actually an essential component of DevOps, and a good, properly implemented DevOps method leads to the necessity of SRE when it comes to deployment. They are two sides of the same coin, and that will obviously lead to a little bit of confusion, because DevOps is the development methodology; it's all about integrating your development teams and your operations teams. It's about knocking down the silos between them. It's about ensuring that everybody is singing from the same songbook, and that's very important. SRE is in charge of automating all of the things and making sure that you never go down.

Two Sides of the Same Coin

These are really two parts of the same group, so let's look at the differences, because they do have some. Probably the first and largest one is core development. The DevOps folks, particularly your developers, are doing the core development. They are answering the question, "What do we want to do?" They are working with product, sales, and marketing to design, develop, and deploy: what is it that we do? They're working on the core. SRE, on the other hand, is not working on the core development.
What SRE is working on is the implementation of the core. They are working on the deployment, and they are constantly giving feedback to that core development group to say, "Hey, something that you designed isn't working exactly the way that you think it is." If you want to think about it this way: DevOps is trying to develop; SRE is figuring out how to deploy, maintain, and run it to solve the problem. It's theoretical versus practical. Ideally, they're talking to each other every day, because SRE should be logging defects and tickets back with development. Most importantly, though, they need to understand that they have the same goals. These groups should never be aligned against one another, so they have to have a common understanding.

Now for the most important part: we're going to talk about failure, because failure is not necessarily failure; it's just a way of life. It doesn't matter what you deploy or how well the deployment goes; failure will happen. There is a failure budget, or error budget, within which things will go wrong. When it comes to failure, the SRE team is going to anticipate it, monitor for it, log it, and record everything, and ideally they can identify a failure before it happens. They're going to have predictive analytics that will say, "All right, this thing is going to go bad based on what we've seen before." So, SRE is responsible for mitigating some of those failures through monitoring, logging, and preemptive work. SRE is also going to lead all of your post-failure incident management. They're going to get you through the incident to begin with, and then they're going to hot wash it. When it's done, you have to get Dev online, because these are the folks who are going to solve the core problem; some RCAs might be solved by SRE internally.
Then the SRE team will integrate the fix into their monitoring and logging efforts to make sure that we don't end up in another RCA for the same kind of problem. There are different skill sets here. Core development DevOps folks are the ones who really love writing software. SRE has a little bit more of an investigative mindset: you have to be willing to go and do the analysis, figure out what has gone wrong, and automate everything. But there's a lot that they have in common. Everyone should be writing automation; everyone should get rid of toil as much as possible, because we just don't have the time to do manual tasks. Computers are not great at thinking on their own, but if you need the same thing done repeatedly, you can't beat computing for that. So automation is key, just with a slightly different mindset on each side: DevOps is going to automate deployment, tasks, and features; SRE will automate redundancy, turning manual tasks into programmatic ones to keep the stack up.
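To make the error budget idea above concrete, here is a small worked example; the 99.9% availability SLO and the 30-day window are assumptions for illustration, not figures from this article:

```java
public class ErrorBudget {
    // Downtime an availability SLO allows over a given window, in minutes.
    public static double allowedDowntimeMinutes(double slo, double windowMinutes) {
        return (1.0 - slo) * windowMinutes;
    }

    public static void main(String[] args) {
        double windowMinutes = 30 * 24 * 60; // a 30-day window = 43,200 minutes
        // A 99.9% target leaves roughly 43.2 minutes of budget to "spend"
        // on failures before the SLO is breached.
        System.out.printf("Budget: %.1f minutes%n",
                allowedDowntimeMinutes(0.999, windowMinutes));
    }
}
```

Once the budget is spent, the usual practice is to slow releases and prioritize reliability work until it recovers.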
If you know anything about St. Louis, it is likely that it is the home of the Gateway Arch, the Cardinals, and St. Louis-style BBQ. But it is also home to a DevOps event that featured some fresh perspectives on scaling, migrating legacy apps to the cloud, and how to think about value when it comes to your applications and environments: DevOps Midwest. The quality of the conversations was notable, as this event drew experts and attendees who were working on interesting enterprise problems of scale, availability, and security. The speakers covered a wide range of DevOps topics, but throughout the day, a couple of themes kept showing up: DevSecOps and secrets management. Here are just a few highlights from this amazing event.

Lessons Learned From Helping Agencies Adopt DevSecOps

In his session "Getting from 'DevSecOps' to DevSecOps: What Has Worked and What Hasn't - Yet," Gene Gotimer shared some of the stories from his life helping multiple US government agencies understand and adopt DevOps.

Show Value From the Start, Don't Wait for Them to 'Get It'

While he worked for DISA, the Defense Information Systems Agency, he helped them evaluate the path from manual releases to a more automated one. The challenge he faced was getting them to see the value of CI/CD. He focused on adding automated testing earlier in their process, which is at the heart of the DevSecOps 'shift left' strategy. He met unexpected resistance as the teams failed to 'just get it' and see the long-term benefits of the new approach, mainly due to their lack of familiarity with testing best practices, which resulted in implementation decisions that caused overall longer testing cycles. His biggest takeaway from the experience was realizing the established team was not going to have an 'ah ha' moment where they collectively see the value. Value needs to be clearly defined in the goals of a project.
Ultimately, you need to show that the new path is not just 'good' but overall better, stated in values the existing team understands.

Never Let a Crisis Go to Waste: Be Ready

In another engagement, this time with the TSA, the Transportation Security Administration, he built a highly secure DevOps pipeline based on Chef that was capable of automatically updating dependencies on hundreds of systems. After a year of work, he was limited to only showing demos of his tools, and he was restricted to a sandbox. The fear of a new approach and the reluctance to change meant he was able to roll out the new tooling only when it was a last resort. An emergency, where doing updates the old way could not meet a deadline driven by a new vulnerability, finally gave his new tooling a chance. After successfully updating all the systems in a few short hours, the department lead was joyous that the 'last year of work' meant they could update so fast. But the reality is that it took them a year to hit a crisis point that forced the change; the tool had been ready for months. It was only after the whole team saw the new approach in action that resistance disappeared. The larger lesson he took away was that being ready paid off. If they had come to him and he had not been ready to meet the moment, the team would never have experienced the benefits of an automated approach.

Carrots, Not Sticks

The final story Gene shared was about his time working for ICE, U.S. Immigration and Customs Enforcement. While there, he worked to improve their security while working in AWS GovCloud. While the team was practicing DevOps, they had over 150 security issues in their build process, which took over 20 minutes to complete. He and his team worked to lower those security issues to fewer than twenty overall while tuning the whole process to take around six minutes. Unfortunately, the new system was not approved or adopted for many months.
Gene's security team had been trying to sell the security benefits, which never took priority for the DevOps team. While there was administrative buy-in, the new process was not adopted until, finally, a different, smaller team realized the new process was 3x faster than their approach; they saw security as a side benefit. In the end, as other teams started adopting the faster process, the overall security improved. Gene stressed that they could have gone to the CIO and demanded that the new approach be used for security reasons, but he knew that would mean even more resistance overall. What ended up working was showing that the new way was ultimately better, easier, and faster. Gene ended his talk by reminding us to always continue building interesting things and to keep learning and innovating. He left us with a quote from Andrew Clay Shafer, a pioneer in DevOps: "You're either building a learning organization, or you are losing to someone who is."

Secrets Management in DevOps

Three different talks at DevOps Midwest dealt explicitly with secrets security in DevOps. Two discussed security in the context of migrating applications to the cloud, and one covered the problems of secrets in code and how git can help keep you safe.

Picking a (Safe) Cloud Migration Strategy

In his talk "You're Not Just the Apps Guy Anymore: Embracing Cloud DevOps," John Richards of Paladin Cloud covered why moving to the cloud matters, as well as the challenges and unique opportunities that migrating to the cloud brings. He laid out three migration strategies:

1. Lift and shift
2. Re-architecture
3. Rebuild with cloud-native tools

In "lift and shift," you simply take the existing application and drop it into a cloud environment. This can bring the ability to scale on demand, but it also means you are not reaping the full benefits of the cloud. While this is the fastest and least costly method, you still need to spend time figuring out how to "connect the plumbing."
Part of that plumbing is figuring out how to call for secrets in the cloud. Most likely, while the application lived on an on-prem server, the secrets were stored on the same machine. Setting and leveraging the built-in environment variables in the cloud is a good short-term step for teams crunched for time; he laid out better secrets management approaches in the other migration paths. In a re-architecture, you start with a 'lift and shift' migration and slowly build onto it, changing the application over time to take advantage of the scale and performance gains the cloud offers. This is a flexible path that requires a higher overall investment, but if done correctly, the team can maximize value while building for the future. It is a good time for more robust secrets management to be adopted, especially as more third-party services need authentication and authorization. Tools like Vault or the built-in cloud services can be rolled out as the application evolves. The third path is completely rebuilding the application with cloud-native tools. This is the most expensive migration path but brings the greatest benefits. It allows you to innovate, taking advantage of edge computing and technology that was simply not available when the legacy application was first created. It also means adopting new secrets management tools immediately and across the whole team at once, so this approach definitely requires the highest level of buy-in from all teams involved. John also talked about shared cloud responsibility. For teams used to controlling and locking down on-premises servers, it is going to be an adjustment to partner with the cloud providers to defend your applications. Living in a world of dynamic attack surfaces makes defense-in-depth a necessity, not a nice-to-have; secrets detection and vulnerability scanning are mandatory parts of this approach.
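As a minimal illustration of the short-term environment-variable approach, a lifted-and-shifted app can read its credentials from the platform-injected environment rather than a file on the machine; the variable name and fallback here are hypothetical, not from John's talk:

```java
public class SecretLookup {
    // Read a credential from the environment, falling back to a supplied
    // default when the variable is absent or empty (e.g., in local dev).
    public static String getSecret(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // Hypothetical variable name; in the cloud this would be injected
        // by the platform instead of living on the server's disk.
        String dbPassword = getSecret("PAYMENTS_DB_PASSWORD", "local-dev-only");
        System.out.println(dbPassword.equals("local-dev-only")
                ? "Using local development fallback"
                : "Using injected credential");
    }
}
```

The same lookup function later becomes the seam where a call to a secrets manager such as Vault can replace `System.getenv` during a re-architecture.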
Your cloud provider can only protect you so much… misconfiguration or leaving your keys out in the open will lead to bad things.

How To Migrate to the Cloud Safely

While John's talk took a high-level approach to possible migration paths, Andrew Kirkpatrick from StackAdapt gave us a very granular view of how to actually perform a migration in his session "Containerizing Legacy Apps with Dynamic File-Based Configurations and Secrets." Andrew walked us through the process of taking an old PHP-based BBS running on a single legacy server and moving it to containers, making it highly scalable and highly available in the process. He also managed to make it more secure along the way. He argued that every company has some legacy code that is still running in production and that someone has to maintain it, but nobody wants to touch it. The older the code, the higher the likelihood that patches will introduce new bugs. Andrew said that the sooner you move it to containers and the cloud, the better off everyone is going to be and the more value you can extract from that application. While the lift and shift approach might not seem like the best use of advanced tools like Docker Swarm or Helm, in all reality, "you can use fancy new tech to run terrible, ancient software, and the tech doesn't care." He warned that most tutorials out there make some assumptions you have to take into account. While they might get you to a containerized app, most tutorials do not factor in scale or security concerns. For example, if a tutorial just says to download an image, it does not tell you to make sure there are no open issues with the image on Docker Hub. If you downloaded the Alpine Linux Docker image in the three years that the unlocked root account issue went unsolved, your tutorial likely did not account for that. Once he got the BBS software running in a new container, he addressed the need to connect it to the legacy DB.
He laid out a few paths for managing the needed credentials, but the safest by far is to use a secrets manager like HashiCorp Vault or Doppler. He also suggested a novel approach for leveraging these types of tools: storing configuration values. While secrets managers are designed to safely store credentials and give you a way to retrieve them programmatically, to the tool those keys are all just arbitrary strings. There is no reason you could not store settings values alongside your secrets and programmatically call them when you are building a container. Leveraging Git to Keep Your Secrets Secret The final talk that mentioned keeping your secrets out of source code was presented by me, the author of this article. I was extremely happy to be part of the event and present an extended version of my talk, "Stop Committing Your Secrets - Git Hooks To The Rescue!" In this session, I laid out some of the findings of the State of Secrets Sprawl report: 10 million secrets were found exposed in public GitHub repositories in 2022, a more than 67% increase over the six million found in 2021. On average, 5.5 commits out of 1,000 exposed at least one secret, a more than 50% increase compared to 2021. At the heart of this research is git, the most widely used version control system on earth and the de facto transportation mechanism for modern DevOps. In a perfect world, we would all be using secrets managers throughout our work, as well as built-in tools like `.gitignore`, to keep our credentials out of our tracked code histories. But even in organizations where those tools and workflows are in place, human error still happens. What we need is some sort of automation to stop us from making a git commit if a secret has been left in our code. Fortunately, git gives us a way to do this on every attempt to commit: git hooks.
Any script stored in the git hooks folder that is named exactly the same as one of the 17 available git hooks will get fired off by git when that git event is triggered. For example, a script called `pre-commit` will execute when `git commit` is called from the terminal. GitGuardian makes it very easy to enable the pre-commit check you want, thanks to ggshield. In just three quick commands, you can install the tool, authenticate, and set up the needed git hook at a global level, meaning all your local repositories will run the same pre-commit check.
Shell
$ pip install ggshield
$ ggshield auth login
$ ggshield install --mode global
This free CLI tool can also be used to scan your repositories whenever you want; no need to wait for your next commit. After setting up your pre-commit hook, each time you run `git commit`, GitGuardian will scan the index and, if a secret is found, stop the commit, tell you exactly where the secret is and what kind of secret is involved, and give you some helpful tips on remediation. DevSecOps Is a Global Community: Including the Midwest While many of the participants at DevOps Midwest were, predictably, from the St. Louis area, everyone at the event is part of a larger global community, one that is not defined by geographic boundaries but united by a common vision. DevOps believes that we can make a better world by embracing continuous feedback loops to improve collaboration between teams and users. We believe that if repetitive and time-consuming tasks can be automated, they should be automated. We believe that high availability and scalability go hand-in-hand with improved security. No matter what approach you take to migrate to the cloud or what specific platforms and tools you end up selecting, keeping your secrets safe should be a priority.
A service like GitGuardian can help you understand the state of your own secrets sprawl in legacy applications through historical scanning as you prepare to move, and keep you safe as the application runs in its new home thanks to real-time scanning. And with ggshield, you can keep those secrets that do slip into your work out of your shared repos.
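To illustrate the pre-commit hook mechanism described above, here is a minimal, hypothetical secret check in Python. A real setup should use a dedicated scanner like ggshield; the two patterns below are illustrative toys, not real detectors:

```python
import re

# Toy patterns for demonstration only; production scanners
# use hundreds of tuned detectors.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*\S+"),  # hardcoded password assignment
]

def find_secrets(staged_text: str) -> list:
    """Return any suspicious matches found in the staged changes."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(staged_text))
    return hits

# Example: a staged diff containing a hardcoded password.
sample_diff = "+ password = hunter2"
matches = find_secrets(sample_diff)
if matches:
    print(f"Possible secrets found: {matches}")
    # A real pre-commit hook would read `git diff --cached` and
    # exit non-zero here, which is what aborts the commit.
```

Saved as an executable `pre-commit` script, git would run a check like this on every commit attempt, which is exactly the safety net the talk argues for.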
Retesting is a software testing technique that involves executing test cases again for a software application or system after defects have been fixed or changes have been made, to ensure that the defects have been resolved and that the changes have not introduced new defects. The purpose of retesting is to verify that the previous defects have been fixed and that the application or system is working as expected. It is an important part of the software testing process, as it helps to ensure that the application or system functions correctly and meets the specified requirements. Retesting can be performed manually or through automation, depending on the complexity and scope of the testing, and it is typically carried out during the regression testing phase of the software testing life cycle. Retesting Example Here’s an example of retesting: Suppose you are testing an e-commerce website where users can place orders for various products. During initial testing, you find a defect where the shipping address is not saved correctly. You report the defect, and the development team fixes it by making changes to the code. After the defect is fixed, the next step is to perform retesting. You would execute the same test case that initially revealed the defect to ensure that the issue has been resolved: you would verify that the shipping address is now saved correctly and that the user can proceed with placing the order without any issues. If the test passes, the defect is considered fixed, and the issue is closed. However, if the test fails again, you would report the issue, and the development team would need to investigate further to determine what went wrong and how to fix it. Importance of Retesting Software Retesting software is crucial to ensure that the application or system is working as expected after changes have been made or defects have been fixed.
Here are some key reasons why retesting is important: Verify That Defects Have Been Fixed When a defect is reported and fixed, it is important to ensure that the fix has actually addressed the issue. Retesting the same test case that initially revealed the defect helps to verify that the issue has been resolved. Detect Regression Issues When changes are made to the application or system, there is a risk of introducing new defects or regression issues. Retesting helps to identify these issues and prevent them from going unnoticed. Ensure Application Quality Retesting helps to ensure that the application or system is of high quality and meets the specified requirements. In addition, it helps to detect and fix any issues that may impact the user experience, such as incorrect data or functionality. Save Time and Costs Detecting and fixing defects early in the development process can save significant time and costs. In addition, retesting helps to catch issues early before they become more complex and costly to fix. Overall, retesting is an essential part of the software testing process that helps to ensure that the application or system is functioning correctly and meets the specified requirements. Pros and Cons of Retesting Retesting is an essential part of the software testing process, but like any testing technique, it has its advantages and disadvantages. Here are some pros and cons of retesting: Pros Verification of Fixes: Retesting helps to ensure that the defects identified in earlier testing phases have been fixed correctly. Regression Testing: Retesting also helps to detect regression issues and ensure that changes made to the system do not impact the existing functionality. Quality Assurance: Retesting helps to ensure that the system meets the required quality standards and delivers the expected functionality to end-users. Cost-Effective: Retesting is cost-effective as it helps to identify and fix defects in the early stages of development. 
Cons Time-Consuming: Retesting can be time-consuming, especially if there are a large number of test cases to be executed. Dependency on Initial Test Cases: The effectiveness of retesting depends on the quality of the initial test cases that are executed. Human Error: Retesting may be subject to human error, and the results may not always be accurate. Scope of Testing: Retesting only verifies the specific test cases that were initially executed. Therefore, it may not uncover issues that were not tested or not considered during the initial testing phase. In summary, while retesting is an essential technique for software testing, it is important to consider its pros and cons to determine the appropriate testing approach for a specific software application or system.
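The shipping-address scenario from the example above might look like this as an automated retest. The `save_shipping_address` function and its behavior are hypothetical stand-ins for the real application code:

```python
from typing import Optional

# Hypothetical application code after the developers' fix.
_saved_addresses = {}

def save_shipping_address(user_id: int, address: str) -> None:
    # Before the fix, imagine this truncated the address or stored it
    # under the wrong key; after the fix it stores it correctly.
    _saved_addresses[user_id] = address

def get_shipping_address(user_id: int) -> Optional[str]:
    return _saved_addresses.get(user_id)

def test_shipping_address_is_saved_correctly():
    # The exact test case that originally revealed the defect,
    # re-executed after the fix: this is the retest.
    save_shipping_address(42, "221B Baker Street, London")
    assert get_shipping_address(42) == "221B Baker Street, London"

test_shipping_address_is_saved_correctly()
print("retest passed: defect can be closed")
```

If the assertion fails, the retest has failed and the defect goes back to the development team, exactly as described above.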
A system is a collection of interconnected components that work together to perform a defined function or set of functions. The components can be hardware, software, firmware, or a combination. In software, a system can refer to a collection of software modules, libraries, and frameworks that work together to achieve a specific goal. What Is System Testing? System testing is a type of software testing that involves testing the entire system as a whole to ensure that it meets the specified requirements and functions correctly. It is a critical phase of software development because it confirms the system functions as expected and meets those requirements. System testing can be conducted in various ways, including manual testing, automated testing, or a combination of both. It involves testing the system at the integration and end-to-end levels to ensure that all the system components work together seamlessly. The main goal of system testing is to detect defects, errors, and inconsistencies in the system, including hardware, software, and other components. The following are some best practices for system testing: Define Clear and Comprehensive Test Cases Ensure you understand the requirements and use cases for the system, and develop comprehensive test cases that cover all aspects of the system's functionality. Test cases should be well-defined and detailed and include all possible scenarios. Identify the requirements: The first step in defining clear and comprehensive test cases is to identify the requirements of the system or software being tested. These requirements should be documented and agreed upon by all stakeholders. Define the scope: Once the requirements are identified, the scope of testing should be defined. This includes what functionalities will be tested, what data will be used, and what types of tests will be performed. Write test cases: Based on the requirements and scope, you can start writing test cases.
Test cases should be written in clear, concise, and easy-to-understand language. Each test case should have a unique identifier, a summary of the test case, and steps to execute the test case. Include expected results: In addition to the steps to execute the test case, you should also include the expected results for each test case. This helps ensure that the test cases are comprehensive and cover all scenarios. Review and revise: Once the test cases are written, they should be reviewed and revised by a team of testers and stakeholders to ensure they are clear, comprehensive, and cover all requirements. Execute the test cases: Finally, the test cases should be executed, and the results should be documented. Any defects found during testing should be reported and tracked until they are resolved. Use Automated Testing Automated testing tools can be used to save time and reduce the potential for human error. Automation testing is a software testing technique that uses specialized tools to execute test cases automatically, without manual intervention. It is used to verify that the software meets its intended functionality, performance, and quality requirements. Here are some situations where automated testing can be helpful: Repetitive tests: Automated testing is ideal for tests that need to be executed repeatedly, such as regression tests, to save time and effort compared to manual testing. Large and complex systems: When a system is large and complex, manual testing can become impractical. Automated testing helps ensure that all parts of the system are working correctly. Performance testing: Automated testing tools can simulate multiple users to test the system's performance under various loads. Time-critical testing: Automated testing can run faster and provide immediate feedback, which is critical in time-sensitive projects.
Regression testing: Automated testing is beneficial for regression testing, which involves verifying that new changes to the software have not affected existing functionality. Continuous integration/continuous delivery (CI/CD) pipelines: Automated testing is a crucial part of CI/CD pipelines, which aim to automate software building, testing, and release. Perform Tests Early and Often Begin testing as early as possible in the development cycle and continue testing throughout development. This approach helps identify defects early on, reducing the cost and time required to fix them. Utilize a Test Environment A dedicated test environment is necessary to simulate the production environment, including hardware, software, and data. Testing in a separate environment helps to minimize the impact of errors and prevents interference with production systems. Conduct Thorough Performance Testing Performance testing is critical to ensure the system can handle the expected load and usage. Tests should be conducted to measure the system's response times, resource utilization, and scalability under different loads. Ensure Compatibility Test the system's compatibility with different operating systems, hardware configurations, and other software that may interact with the system. Conduct Security Testing It is essential to ensure that the system is secure and that confidential data is protected. Security testing should include vulnerability scanning, penetration testing, hardware security checks if the system is embedded, and other security measures. Document Test Results Documenting test results, including issues found, helps track progress and ensure all defects are resolved. This documentation is helpful for future reference and can help identify trends and areas for improvement. Involve Stakeholders Stakeholders should be involved in the testing process, including end-users, developers, and management.
This approach can help ensure that the system meets the expectations and requirements of all stakeholders. Final Verdict System testing is an essential process in the software development life cycle that ensures the system is ready for deployment and meets the end users' requirements. By ensuring these best practices are followed, you can make the testing of your system effective and efficient, leading to a successful project outcome.
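As a minimal sketch of the test-case structure described in the best practices above (unique identifier, summary, execution steps, expected result); the field names and the sample case are illustrative, not from any specific standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # The elements a well-defined test case should carry.
    case_id: str                               # unique identifier
    summary: str                               # one-line description
    steps: list = field(default_factory=list)  # steps to execute
    expected_result: str = ""                  # what a passing run shows

login_case = TestCase(
    case_id="TC-001",
    summary="Valid user can log in",
    steps=[
        "Open the login page",
        "Enter a valid username and password",
        "Click 'Sign in'",
    ],
    expected_result="User lands on the dashboard",
)

print(login_case.case_id, "-", login_case.summary)
```

Keeping test cases in a structured form like this also makes it easier to review them with stakeholders and to track execution results per identifier.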
Responsive website testing ensures that users have the best experience with your site, regardless of their device. The goal of testing responsive websites is to ensure a seamless experience across different digital devices. We live in a world where technology has enabled convenience, and we now depend on our devices to function. Because of the growing market for mobile devices, businesses are developing strategies to create user-friendly websites. They use mobile-first design, progressive web apps, single-page applications, and more. However, for a unified user experience across devices and platforms, we need to consider screen resolutions and device capabilities. To create highly responsive websites, you must understand the importance of testing your websites' responsiveness and develop a strategy to implement responsive website testing. This guide will teach you how to test responsive websites and understand the significance of responsive testing, its best practices, and more. What Is Responsive Testing? Responsive website testing is a process that ensures your website works well on multiple devices, typically by exercising the CSS media queries that adapt the layout to the user's device. In simpler terms, responsive testing enables you to check how well a website works on various types of devices, including desktops and smartphones. A website that responds well to all screen sizes and resolutions gives your business a competitive edge over other companies. Responsive design incorporates many elements, including media queries, flexible grids, and responsive typography, and it makes it easy to build websites that adjust automatically to any screen size. While responsive design may seem simple, incorporating it into ongoing projects is tricky; it is best to follow its principles from the start of a new project. Responsive website testing is part of the final stage of responsive web design.
It can be performed using the same toolset as cross-browser testing, which is responsible for improving a website's UI/UX. Responsive testing ensures that your website is not only cross-browser compatible but also adjusts to screen resolution changes. What Is Responsive Design? Responsive design aims to create websites and web applications that provide an optimal viewing experience across various devices and screen sizes. A site that is designed responsively will adjust its layout and content to fit the specific device and screen size on which it is being viewed, providing an easy-to-use and seamless experience for the user. To create a responsive design, designers and developers use a combination of HTML, CSS, and JavaScript. Responsive design often involves flexible layouts, grids, and images, and it uses media queries in CSS to apply different styles based on the screen size. Why Is Responsive Website Testing Important? Responsive testing of web apps is crucial at every stage of development to ensure that end-user requirements are met. Here are some reasons highlighting the importance of testing responsive websites: Plethora of Devices, OS, and Browsers: To ensure that your site's content is available to all visitors, the content needs to be verified on mobile devices with different screen sizes, operating systems, and browsers. While a site designed in one browser may appear as intended in another browser, it should not be assumed that this is necessarily the case. Need for Robustness: It is crucial to ensure that the website loads at the same speed on different devices and browsers so that users do not become frustrated by lagging or timed-out content. If a website loads slowly or doesn't display correctly on mobile devices, users will have a poor experience. Therefore, testing a website's performance is essential for ensuring users have a positive experience on mobile-responsive websites.
Website Navigation: When testing a mobile website, one of the most common defects found is that pages don't load as expected when navigating among the site's links. It also happens that links are missing, images fail to load, or pages time out during navigation. Multiple Images and Videos: When creating a responsive website, it is essential to test whether all types of images and videos are displayed as expected on different phones, browsers, etc. Some videos play well on Android but don't even load on iOS, and some images appear broken on certain versions of a mobile operating system while they are perfect on others. Such issues give a terrible impression if testing is not done correctly. Advantages of Responsive Website Testing Responsive web testing is an integral part of delivering high-quality products. There are several advantages to performing responsive website testing: Improved User Experience: A fully responsive website ensures that all users, regardless of their device, have a positive and seamless experience when interacting with it. Increased Accessibility: A responsive website can be accessed and used by a broader range of devices and screen sizes, which helps to expand its reach and accessibility. Enhanced Search Engine Optimization: Google's search algorithms give higher rankings to mobile-friendly websites, so having a responsive website can help to improve a website's search engine ranking. Cost Saving: Developing and maintaining a separate mobile website can be time-consuming and costly. A responsive website saves time and money by eliminating the need to create and maintain a different mobile version. Improved Conversion Rates: A responsive website can improve conversion rates by providing a consistent user experience across all devices, which helps to build trust and credibility with users.
Types of Responsive Website Testing There are several types of responsive testing that can be performed to ensure that a website is responsive and functions correctly on a variety of devices and screen sizes: Visual Regression Testing: Visual regression testing is a part of regression testing that involves taking screenshots of a website on different devices and comparing them to ensure that the layout and design are consistent across all screens. Visual Layout Testing: Visual layout testing tools allow users to check that the website's layout adjusts correctly to different screen sizes and orientations and that all content is displayed correctly and is easily readable and navigable. Cross-Browser Testing: Cross-browser compatibility testing is the most significant kind of front-end testing. Testers can determine if a website functions as intended when viewed using various browser/device/OS combinations. In addition, cross-browser testing makes it possible for people to have the same experience across multiple browsers. Functional Testing: This involves testing the website's functionality on different devices to ensure that all features and interactions work as expected. Functional testing evaluates the various functions of the application: the user interface, database, APIs, client/server communication, security, and other components. Performance Testing: Performance testing assesses a product's quality and capability under varying workloads, ensuring that the system performs adequately, reliably, and with stability. This involves testing the website's performance on different devices and networks to ensure that it loads quickly and runs smoothly. Usability Testing: Usability testing is a technique for evaluating the user experience of a web product or service by testing it with users. This involves testing the website's usability on different devices to ensure it is easy for users to navigate and use.
Common Use Cases of Website Responsive Testing Responsive website testing is performed to ensure that a website or application is fully functional and provides a good user experience on a wide range of devices, including desktop computers, laptops, tablets, and smartphones. Here are some common use cases for responsive website testing: Verifying that a website or application is easy to use and navigate on devices with different screen sizes and resolutions. Ensuring that the layout of a website or application is visually appealing and easy to read on all devices. Testing that all features and functionality of a website or application are fully operational on all devices. Verifying that a website or application loads quickly and performs well on all devices. Testing that a website or application is accessible and user-friendly for users with disabilities on all devices. Responsive website testing tools such as LambdaTest help you test the responsiveness of your websites and web apps across 3000+ real browsers, devices, and OS combinations. You can utilize test automation frameworks like Selenium, Cypress, TestCafe, and Appium, and LambdaTest also provides mobile app testing capabilities with cloud-based Android Emulators and iOS Simulators. Learn how to perform website responsiveness testing on the LambdaTest platform. Different Types of Responsive Testing Tools There are many different types of responsive testing tools available for testing a website or web application: Device farms and emulators: Device farms and emulators can be useful tools for responsive testing because they allow you to test your site on a variety of different devices and screen sizes without physically accessing the devices. Online device farms such as LambdaTest provide you with access to real devices and emulators that replicate the experience of using actual mobile phones on your site.
This can be particularly useful for testing on devices you do not have access to or that are unavailable in your location. Browser extensions: Several browser extensions, such as Window Resizer for Chrome and Responsive Design Mode for Firefox, allow you to test your site on different screen sizes within your browser. These extensions can be particularly useful for quickly testing small changes or for testing on a large number of screen sizes without having to physically access multiple devices. Responsive design frameworks: Responsive design frameworks provide a set of standardized styles and layout elements to help you create a responsive website. These frameworks often include tools for testing and debugging responsive layouts. Debugging tools: There are also several debugging tools available, such as the dev-friendly LT Browser 2.0, that can help you identify and fix issues with your site's responsive design across a plethora of device viewports. It is essential to use a variety of tools and techniques to ensure that your site is thoroughly tested and functions properly on all devices. How To Do Responsive Website Testing? With the proliferation of different devices and screen sizes, responsive website testing is essential so that websites and applications can adapt to different environments and provide a seamless user experience. To successfully perform responsive testing of your website, LambdaTest has just the right solution for you: the all-new Chromium-based, free-to-use LT Browser 2.0. LT Browser 2.0 is a responsive website testing tool designed to help developers and testers interact with the mobile view of their websites across multiple sizes of mobiles and tablets. It enables you to perform live interactive testing of your responsive designs and check whether they work as they should across various devices. You can record videos, capture screenshots, debug responsive bugs with developer tools, and much more.
With this browser for developers, you'll be able to test your responsive website across 50+ mobile screen resolutions. It even allows you to add custom devices with various screen resolutions and to test your locally hosted site as per your requirements. Moreover, it lets you compare your site's mobile versions across multiple devices with a side-by-side view, with mirrored interactions that replicate your actions on the other devices so you can test how the site behaves on each of them. The all-new LT Browser 2.0 is based on the native Chromium rendering engine (Blink), offering faster performance and many other features catering to all your testing needs. Check them out: Make use of Chrome settings and APIs. You can also install your preferred Chrome extensions to ease your responsive testing process. LT Browser 2.0 allows faster debugging by offering separate dev tools for each device viewport simultaneously. You can generate and share multiple bug reports through your favorite project management tool. Interact, test, and develop with more than four device viewports simultaneously. A clear and intuitive UI with the option to switch to dark mode. Record your responsive tests with the entire screen or a particular tab to later review elusive bugs with your team. You can also view your test history and clear cookies from the settings. Along with this, LT Browser 2.0 is constantly being updated with new features, allowing developers to test responsiveness more effectively. Best Practices for Responsive Website Testing We've looked at why responsive testing is important and its advantages. Now let's check out some best practices for performing responsive website testing effectively. Test on various devices: It is essential to test on a wide range of devices, including different models of smartphones, tablets, and desktop computers.
This will help ensure that your website or application is fully functional and user-friendly on all devices. Test on real devices: It is essential to test on real devices rather than relying only on emulators or simulators, because emulators and simulators may not accurately reflect the performance and behavior of a website or application on a real device. Test on multiple browsers: It is essential to test on multiple browsers, as different browsers may render websites and applications differently. Popular browsers to test include Google Chrome, Mozilla Firefox, Safari, and Microsoft Edge. Test with different screen sizes and resolutions: It is essential to test with different screen sizes and resolutions to ensure that the layout and design of your website or application are visually appealing and easy to read on all devices. Test with different network speeds: It is essential to test with varying network speeds to ensure that your website or application performs well and loads quickly on all devices. Test for accessibility: It is essential to test for accessibility to ensure that your website or application is user-friendly for users with disabilities. This includes testing for keyboard accessibility, screen reader compatibility, and the use of alternative text for images. Use automation: Automation testing can help perform repetitive tasks, such as testing the same functionality on multiple devices. However, it is also essential to perform manual testing to ensure that all aspects of the user experience are thoroughly tested. Wrapping Up The web is a vast, ever-changing environment. To keep pace with it and ensure your website provides a seamless user experience, it's essential to test the responsiveness of your website on all popular browsers. Using automated responsiveness testing tools can save you time and resources and give you a competitive edge over similar websites.
Responsive web testing involves testing a website or web application to ensure that it performs well and looks good on various devices with different screen sizes and resolutions. Responsive testing aims to provide users with a positive experience regardless of the device they use to access the site.
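As a toy model of the breakpoint checks a responsive test automates, here is a sketch in Python. The breakpoint widths below are common conventions, not taken from any specific framework, and a real test would drive an actual browser at these widths rather than a function:

```python
def layout_for_width(viewport_width: int) -> str:
    """Mimic CSS media-query breakpoints: map a viewport width
    to the layout a responsive site would serve."""
    if viewport_width < 768:
        return "mobile"    # single-column layout
    elif viewport_width < 1024:
        return "tablet"    # two-column layout
    return "desktop"       # full multi-column layout

# A responsive test sweeps representative device widths and checks
# that each one gets the expected layout.
expected = {360: "mobile", 768: "tablet", 1440: "desktop"}
for width, layout in expected.items():
    assert layout_for_width(width) == layout
print("all breakpoints behave as expected")
```

In practice, tools like Selenium or LT Browser resize a real viewport to each of these widths and verify the rendered layout, but the pass/fail logic per breakpoint follows this same shape.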
Development practices are constantly changing, and as testers, we must embrace change. One of the changes we can experience is the move from monthly or quarterly releases to continuous delivery or continuous deployment. This move offers testers the chance to learn new skills. A project that makes monthly or quarterly releases has a familiar rhythm, and the team builds toward the release date. The testers must test all the cards and conduct manual regression testing. Every test in the manual scripts needs to be executed, and possibly a report needs to be made on the results of these tests. Once the release has been made, there may be bugs in the release that need to be fixed. The testers will also need to start running the same manual regression tests on the next release and report on them again. Testing for monthly or quarterly releases is a repetitive process. It has been compared to the Greek myth of Sisyphus, who had to roll a stone to the top of a hill, and when the stone rolled back down, he had to roll it to the top again. Continuous delivery can be defined as “when all developers are working in small batches on the trunk, ….and when the trunk is kept in a releasable state, and when we can release at the push of a button.” A team that I worked with moved from making monthly releases to continuous delivery. The team kept the main branch in a state in which it could be deployed if needed, and the team made weekly releases. Continuous deployment can be defined as when, in addition to the practices that support continuous delivery, “we are deploying good builds into production on a regular basis through self-service (being deployed by Dev or by Ops).” A team practicing continuous deployment deploys to production each time code is merged to the main branch.
Teams that practice continuous delivery or continuous deployment use small batch sizes; that is, the “batch” of code that is deployed is small. “The theoretical lower limit for batch size is single piece flow, where each unit is performed one at a time”; this is what happens in continuous deployment, where each merge to the main branch is deployed to production. Teams that practice continuous delivery and continuous deployment try to create a flow of work at a sustainable pace and so should “enable and inject learning into daily work.” Teams that make monthly releases, on the other hand, build toward the release and so cannot create this constant flow of work at a sustainable pace.

When a team moves from monthly releases to continuous delivery or continuous deployment, one change is that there is no release candidate. Testing is done not on a release candidate but on feature branches taken from the main branch; when they are merged back into the main branch, they must be ready for release to production, because the main branch is kept in a state where it can be released. For the tester, this means that testing on a feature branch has a different pattern from testing a release candidate: you need to be confident that the new feature or fix in the branch does what it is supposed to do and that it has not caused any regression. When testing a monthly release, there is time to execute manual regression tests, but if the main branch is to be kept in a deployable state, this is not possible. Regression testing on monthly releases often means running a large number of manual tests; with continuous delivery or continuous deployment, regression testing must be automated, and it usually runs as part of continuous integration.
Continuous integration (CI) “means that every single time somebody commits any change” to the code, the change is integrated into the codebase. This entails running automated tests before and after merging the code into the main branch, and it gives the tester an opportunity to learn how CI works. The tester must understand CI, including which tests run as part of it. There will always be gaps in the tests that run on CI; if the tester knows what those gaps are, they can work out how to automate tests to fill them and execute manual tests to cover them when required. Making mind maps of the automated tests can help identify the gaps. Testers can also get involved in automating regression tests themselves, and in this way help to prevent bugs rather than just find them. There are great free resources, such as Test Automation University, LambdaTest Certifications, and Exercism, which can help testers gain the skills they need to automate tests, and there are also many resources for learning how to use JavaScript to aid testing.

Because regression testing is automated, the tester gains time that they can spend on exploratory testing. Exploratory testing is a powerful way to uncover issues, so it will help the projects the tester is working on, and the extra time will also help the tester develop their exploratory testing skills. Projects that use continuous delivery and continuous deployment also tend to have a microservices architecture. Microservices are services that are tested and deployed independently, and each service is simple. The tester has the opportunity to learn about the microservices; ways to do this include talking to the developers, studying any architecture diagrams that exist, reading the readme files for each service in GitHub, and attending the developers’ meetings.
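As a concrete illustration, an automated regression test pins down current behavior so that CI can catch regressions on every commit. This is only a minimal sketch; the `apply_discount` function and its rules are hypothetical, purely for illustration.

```python
# A hypothetical unit under test: a pricing rule the team already ships.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; reject percentages outside 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_regression_discount():
    # Pin down current behavior so a future change cannot silently break it.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0


test_regression_discount()
print("regression tests passed")
```

Run as part of the CI pipeline, a suite of such tests replaces a manual regression script: every merge to the main branch re-executes them automatically.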
Testers can build their relationships with developers by helping them test their code. The tester can also share their knowledge of testing techniques, such as boundary value analysis, with developers, as this will help them test and produce better quality software.

Release processes for monthly releases can involve a certain amount of pain for testers. Sometimes we are asked to take responsibility for the release decision even though the tester is often a junior member of the team; other times, testers have to take part in large committee meetings of stakeholders that decide whether the software can be released. Release processes for continuous delivery and continuous deployment should be automated, which means that making releases does not put pressure on the tester. The tester’s input to the release is their testing, so we can focus on our testing and learn new testing skills.

Development teams are not autonomous; they are open systems whose work affects other teams and is affected by other teams. This is systems thinking, and systems thinking contributed to continuous delivery and continuous deployment. Testers can learn to use systems thinking to enhance their testing and support their team: it helps the tester think beyond their role to understand which other systems are affected by their team’s work and which systems affect their team’s work. One of the lessons of systems thinking is that everyone shares responsibility, so no one person should be blamed when something goes wrong. This view should be at the heart of every agile and lean development team that implements continuous delivery or continuous deployment, and it is something testers should take to heart: when there is a failure, we need to learn from it, not blame someone.
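Boundary value analysis, mentioned above, is easy to demonstrate. Here is a minimal sketch, assuming a hypothetical form field that accepts ages from 18 to 65: the technique tests just below, on, and just above each boundary, where off-by-one bugs tend to live.

```python
# Hypothetical validator for an age field whose valid range is 18-65.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65


# Boundary value analysis: exercise values around each edge of the range.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, f"failed at boundary {value}"
print("all boundary cases pass")
```

Sharing a table like `boundary_cases` with a developer is often enough to explain the technique: six values cover both edges of the range.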
Teams that make monthly releases can find that after each release there is a flurry of activity fixing regression bugs that were deployed with it. This does not happen with teams that practice continuous delivery or continuous deployment. With continuous deployment, releases are deployed multiple times a day; with continuous delivery, software is deployed regularly. These frequent releases give the development team the ability to recover from bugs and incidents quickly, as fixes can be deployed quickly. Once a fix has been deployed, the tester can offer to take the lead in root cause analysis of the bug or incident, learning to use techniques such as the Five Whys and Ishikawa diagrams.

Testers can also support their teams by producing metrics that help measure quality improvement. The DORA metrics were identified in DORA’s Accelerate State of DevOps reports. They are designed to help teams that practice continuous delivery and continuous deployment find areas for improvement and know how they are doing. They differ from the metrics typically used for monthly releases: rather than counting how many bugs reached production, they measure, among other things, how quickly a team recovers from a failure. Continuous delivery and continuous deployment offer testers opportunities to grow and learn new skills, so testers should embrace the opportunity when their teams make the move.
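As a rough illustration of what such metrics look like in practice, the sketch below computes two DORA metrics, change failure rate and mean time to restore, from an invented deployment log. The log format and values are made up for illustration.

```python
from datetime import datetime, timedelta

# Invented deployment log: one deployment caused an incident that was
# restored an hour later.
deployments = [
    {"when": datetime(2023, 5, 1, 9), "caused_incident": False},
    {"when": datetime(2023, 5, 1, 15), "caused_incident": True,
     "restored": datetime(2023, 5, 1, 16)},
    {"when": datetime(2023, 5, 2, 11), "caused_incident": False},
]

# Change failure rate: fraction of deployments that caused an incident.
change_failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)

# Mean time to restore: average time from a failing deployment to recovery.
restore_times = [d["restored"] - d["when"] for d in deployments if d["caused_incident"]]
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(f"change failure rate: {change_failure_rate:.0%}")   # prints "change failure rate: 33%"
print(f"mean time to restore: {mean_time_to_restore}")     # prints "mean time to restore: 1:00:00"
```

A tester who already tracks incidents often has this data to hand; the value is in reporting the trend over time, not any single number.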
Grey box testing is a technique for debugging and evaluating the vulnerabilities of software applications in which the testers have only limited knowledge of the internal structure or components of the application under test.

In the Software Development Life Cycle (SDLC), testing is the crucial phase that verifies the quality of the software application. Different software testing techniques, such as black box and white box testing, are used to evaluate the performance and functionality of applications and ensure their quality. Black box testing involves validating the software without knowledge of its internal workings; white box testing, in contrast, involves validating software with a full understanding of its internal workings. In some cases, however, having full knowledge of the internal workings of the software is impractical or impossible. This is where grey box testing comes in: it combines elements of white and black box testing to help debug and verify the functionality and behavior of software applications, covering areas such as code execution, database access, and server configuration. It allows testers to identify defects that may not be visible in black box testing while still providing an external perspective on the application under test.

What Is Grey Box Testing?

Grey box testing involves verifying the functionality of the software application and identifying vulnerabilities. The tester is partially aware of the application's internal workings and components, performing the grey box test based on their access to design documents, internal code, the database, and information about the requirements. Grey box testing is used to evaluate web applications, perform integration and penetration testing, test business domains, conduct security assessments, and more. QA creates test cases and executes them efficiently using partial information about the data structures and algorithms.
This makes it possible to find context-specific errors in web applications, because all the steps are laid out before the test begins. Such errors occur only under specific conditions or in a particular context. For example, an error might occur when an end user submits a form with a specific input type but not with other input types.

Grey box testing can be implemented in many ways, including penetration and integration testing. Such tests are non-obtrusive, meaning they do not disrupt the application's normal functioning while testing takes place. Their primary focus is testing the interactions between different application modules and finding any vulnerabilities attackers could exploit. For example, if a tester finds a defect in the application and code modifications are made to fix it, they can retest in real time. In this process, all levels of the application are tested: the user interface or display layer, the business logic layer, the data storage layer, and the core code structure. By testing all levels of a complicated application, grey box testing enhances test coverage.

Why Grey Box Testing?

Grey box testing is an integral part of the Software Testing Life Cycle (STLC) and is performed during the testing phase. When the team executes test cases, the grey box technique is needed to understand the internal workings and verify the functionality and performance of the application. It is also performed to find errors that are not easily identified by black box testing alone. Grey box testing helps to identify the component or part of the application with major performance issues, and because the testing is done in real-world scenarios, it gives accurate information about the behavior under test.
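A context-specific error of the kind described above can be illustrated with a small sketch. The phone-number validator below is hypothetical; the point is that a grey box tester who can read the validation code knows exactly which context to probe.

```python
import re

# Hypothetical form validator: accepts digits with an optional dash,
# e.g. "555-0199". The pattern is invented for illustration.
def validate_phone(value: str) -> bool:
    return bool(re.fullmatch(r"\d{3}-?\d{4}", value))


# Most inputs behave as expected...
assert validate_phone("5550199") is True
assert validate_phone("555-0199") is True
assert validate_phone("hello") is False

# ...but a tester who has seen the regex can target the specific context
# where it breaks: an international prefix the pattern never anticipated.
assert validate_phone("+1 555-0199") is False  # rejected despite being a plausible number
print("context-specific gap located")
```

A pure black box tester might never try the failing input type; partial knowledge of the code makes the gap visible.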
You can simulate real-world scenarios, such as data interaction or network connectivity, with a grey box test. Grey box tests also identify security vulnerabilities within the application, such as SQL injection attacks. This can be done by analyzing the code (the application's source code) and the logs (records generated by the application that describe its behavior and operations). Grey box testing is therefore needed in the testing phase to ensure the application's quality and lower the risk of bugs after release.

Objectives of Grey Box Testing

The main purpose of grey box testing is to find and verify defects caused by incorrect code structure and its use in the application. Some of its other objectives are:

- To improve the quality of the software application, covering both functional and non-functional testing.
- To save the time and effort required to test the software application.
- To test the software application from the end user's perspective.

Grey Box Testing Example

To understand grey box testing, it helps to see its practical use; this will help you analyze where and when to implement it in the Software Development Life Cycle. Here are some examples of grey box tests.

Example 1: Suppose you are testing a website, and an error occurs when you click on a link. The grey box tester makes changes to the HTML code to fix the error. In such a scenario, you are performing both white and black box testing: the white box test happens while changing the code, because the tester has access to the internal workings of the website and is making changes based on that knowledge, while the black box test happens when QA validates the change at the front end, testing the website's behavior without reference to its internals. The combination of the two amounts to grey box testing.
The tester has only partial knowledge of the internal workings of the website under test.

Example 2: Grey box testers analyze the error codes found in software applications. Because they have access to the error code tables, they can provide the cause of an error code. For example, if a website under test returns the error code "500 Internal Server Error" and the table gives the cause as a "server error," the tester uses this information to analyze the issue further. This gives the developer details to work with rather than just a raised issue.

Example 3: Grey box testers also evaluate the log files that record events within a software application, such as errors, exceptions, and warnings. This helps them find the root cause of errors and gain insight into the application's behavior.

Advantages of Grey Box Testing

Grey box testing helps verify the quality of the software application in many different ways. Here are some of its advantages:

- It is possible to review the application's code, making it easier to detect hidden bugs, while also evaluating the application's functionality from the end user's perspective.
- Bugs can be detected early in the development process, so developers can fix them quickly. This helps avoid situations where the application becomes complicated and fixing bugs becomes costly.
- Optimizing the application's performance becomes easier: you can identify performance-related issues and inefficiencies by analyzing the application's logs and internal data.
- Grey box tests validate applications at both the user interface and the backend.
You can test the software application from the user's perspective while having some knowledge of its internal workings. This helps you test the application more comprehensively and find defects that may not be detectable from the user interface alone. Compared with white box testing, grey box testing is also a more cost-effective method, because it does not require deep, specialized knowledge from the testers: since grey box testers do not need to know the application's source code in depth, an organization can hire testers with a broader range of skills and lower experience levels.

Disadvantages of Grey Box Testing

Grey box testing can be very useful for testing the functionality of software applications, but it also has some drawbacks:

- In distributed applications with components at different locations, grey box testing may not be able to trace a defect back to its source, because testers do not have complete knowledge of the internal workings of the components. For example, a defect in one component could cause defects in other components; without access to the first component's internals, finding the root cause of the issue may be difficult.
- Grey box testers have limited access to the internal structure of the application, which makes it difficult for them to traverse all possible code paths, so certain defects may go undetected.
- It is unsuitable for algorithm testing; its focus is functional testing.
- Creating grey box test cases is complex, because testers have limited knowledge of how the application's components interact.

White Box vs. Black Box vs. Grey Box Testing

Grey box testing differs from both white and black box testing. In white box testing, testers have end-to-end knowledge of the internal structure of the application; in black box testing, testers know nothing about the application's internal structure and components. In grey box testing, testers typically know some of the application's internal components but not their interactions and associated functions. Grey box tests are thus a hybrid of black and white box testing, also called translucent box testing. The table below summarizes the differences between white box, black box, and grey box testing.

| White Box Testing | Black Box Testing | Grey Box Testing |
| --- | --- | --- |
| Testers fully know the internal structure, design, and components of the software application and their interactions. | Testers do not know the internal structure, design, or components of the software application or their interactions. | Testers partially know the internal structure, design, and components of the software application and their interactions. |
| Testers should have good programming skills. | Testers do not require high-level programming skills. | Testers should have basic programming skills. |
| Its execution is highly time-consuming. | Its execution is comparatively less time-consuming. | Its execution is less time-consuming than white box testing. |
| It covers three techniques: Statement Coverage, Decision/Branch Coverage, and Condition Coverage. | It covers six techniques: Equivalence Class Partitioning, Boundary Value Analysis, Decision Table, State Transition, Use Case Testing, and Cause-Effect Graph (graph-based) Testing. | It covers four techniques: Matrix, Regression, Orthogonal Array, and Pattern Testing. |
| It has a high level of granularity. | It has a low level of granularity. | It has a medium level of granularity. |
| It is suitable for algorithm testing. | It is not ideal for algorithm testing. | It is not suitable for algorithm testing. |

When To Perform Grey Box Testing?

The use of grey box testing varies depending on the software application's requirements, testing goals, level, scope, objectives, and tools.
For example, it can be performed at stages such as acceptance testing and system testing. Here are some scenarios where you can run grey box tests:

- When testing the interaction between the different components of the software application; this requires only limited knowledge of how the components interact.
- Whenever the software application undergoes specific changes in features or updates, to verify that its functionality corresponds to the changes.
- When testing the security of the software application, by simulating attacks from hackers with limited knowledge of the application's internal workings.
- To validate the functionality of the database by verifying the data schema, data flow, and constraints.
- In API testing, to check the functionality of the APIs and their interaction with the software applications that use them.

What To Test in Grey Box Testing?

When running grey box tests, you should know about the different aspects of the software application. This will help you understand the key areas that need a grey box test and give you a clear path to start with. Here are some important aspects to analyze:

The flow of specific inputs through the software application: Grey box testers identify and test the different inputs, such as user inputs, system inputs, or external inputs, and their sources in the application. The path of each input is traced from the point where it enters the application to the point where it is processed and stored. For example, if a grey box tester is testing a web application that lets users enter information in forms, they will identify inputs such as name and email address, and then trace the path of each input through the application, evaluating how it is processed and stored.
During this process, the tester will detect any defects that could cause unexpected behavior in the web application.

Potential vulnerabilities within the application's security system: Grey box tests validate restricted actions, including accessing sensitive data, manipulating URL and input parameters, injecting code, and brute force attacks.

Expected output for a given set of inputs: Grey box testing validates the output of the application to ensure its alignment with the Software Requirements Specification. In other words, the tester compares the application's actual output with the expected output to find any errors or defects. This helps ensure the application produces correct results, meets end-user requirements, and functions without bugs.

Poorly structured and broken code paths: QA tests the code paths intended to handle errors. To do this, they create test cases that trigger errors, ensuring that the error-handling code paths function as expected. Similarly, the tester checks complex conditional statements in the application and creates test cases that exercise all possible combinations of inputs, ensuring that the code path works in every test scenario.

The behavior of conditional loops: A conditional loop is a programming construct where a block of code is executed repeatedly until a specified condition becomes true. In grey box testing, you can test the behavior of a conditional loop by examining the code to verify the condition that controls the loop and the actions taken within it.

Access validation: Verify that only users with the correct access can perform a given action or use a given function of the application. For example, grey box testers test an endpoint to ensure that only admin users can make inventory changes.
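Two of the checks above, probing for code injection and validating access, can be sketched in a few lines. The schema, user roles, and endpoint rule below are all invented for illustration; a real grey box test would exercise the application's actual login and inventory endpoints.

```python
import sqlite3

# Invented schema and data standing in for the application's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret', 'admin')")
conn.execute("INSERT INTO users VALUES ('bob', 'hunter2', 'viewer')")


def login(name: str, password: str):
    # Parameterized query: an injection payload is treated as data, not SQL.
    row = conn.execute(
        "SELECT role FROM users WHERE name = ? AND password = ?",
        (name, password),
    ).fetchone()
    return row[0] if row else None


def update_inventory(role) -> str:
    # Hypothetical endpoint rule: only admins may change inventory.
    return "200 OK" if role == "admin" else "403 Forbidden"


# Injection probe: the classic payload must not bypass authentication.
assert login("alice", "' OR '1'='1") is None

# Access validation: a viewer's credentials must not unlock admin actions.
assert update_inventory(login("bob", "hunter2")) == "403 Forbidden"
assert update_inventory(login("alice", "s3cret")) == "200 OK"
print("injection rejected; access rules hold")
```

The grey box element is the partial knowledge driving the test design: knowing the query hits a SQL database suggests the injection probe, and knowing the role column exists suggests the access check.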
Techniques of Grey Box Testing

Grey box testing is performed during the testing phase of software application development. There are different techniques you can use to ensure all defects are addressed. Some of them are as follows:

Matrix Testing

This involves creating a risk matrix to prioritize test cases according to their severity and consequence. A score is assigned to each element, and to each possible combination of inputs and parameters, of the software application being tested. Here, "element" refers to a specific feature, component, or aspect of the application: for example, a particular function or module, or a specific user interface element such as a menu option. A "combination" is a specific set of values for the inputs or parameters of the application: for example, if the application has a login function, the username and password fields are input parameters, and different combinations of values can be entered for them. Each element and each possible combination is evaluated for its severity and consequence and given a score in the risk matrix. The matrix is laid out as a table, with each row showing a unique combination of input values or elements and each column showing a different parameter or element. The score indicates the level of risk associated with each element or combination and allows testers to prioritize their efforts, emphasizing the items with the highest scores. In this way, you can address the most critical issues first and lower the overall risk to the application. The matrix also shows the possible combinations and elements alongside the test cases designed for each, ensuring that all test scenarios are covered.

Regression Testing

This technique verifies that changes or updates made to the software application do not break its existing functionality.
It is used in grey box tests because it allows testers to identify potential defects introduced by changes to part of the application. You can use the following strategies to perform regression testing in a grey box test:

- Retest all
- Retest risky use cases
- Retest by profile
- Retest changed segment
- Retest within firewall

Regression testing ensures the stability and reliability of the application after feature changes or updates, and it makes early bug detection possible in a grey box test, before problems cause challenges later in development. You can run regression tests in automation frameworks such as Selenium for your web applications; Selenium supports many browsers and platforms for web testing.

Orthogonal Array Testing

This is a statistical testing technique for selecting a small number of test cases that cover a large number of combinations of the input parameters and variables of the application. It combines statistical and exhaustive testing to give comprehensive coverage with a minimum number of test cases. An orthogonal array is used to identify the variables and input parameters that have the greatest impact on the application's functionality, and test cases are then created to cover the relevant combinations of those variables. An orthogonal array is a set of columns, each holding a set of values for one variable or input parameter; the chosen values should be statistically independent of one another. With this approach, you can ensure that each pairwise combination is covered at least once, which lowers the number of test cases required, and design test cases that consider all variables and input parameters without running a large number of tests.
Thus, this maximizes test coverage and lowers the time and effort needed to run tests.

Pattern Testing

This can be performed on software applications that are built with a programming pattern or structure similar to previous projects. Examples of the patterns this test focuses on include loops, conditionals, function calls, and data structures. The application is likely to have some of the same defects that occurred in the previous applications, and with pattern testing you can quickly identify such flaws in the current application. The technique is used in grey box tests to improve test efficiency: you use knowledge of the application's internal workings to design test cases that focus on specific patterns or structures in the code, which helps quickly identify defects related to those patterns, such as logical inconsistencies and coding errors.

How To Perform Grey Box Testing?

There are specific steps to follow when running grey box tests. Let us explore them:

1. Make a list of all inputs from the black and white box testing methods: Identify all the inputs obtained from white box and black box testing. Inputs are the commands or data given to the application under test; they can come from different sources, such as user inputs, network inputs, or automated test scripts.

2. Make a list of all outputs from these inputs: Once you have identified the inputs, identify all the expected outputs for them. For example, if the input is a user request to execute an action, the expected outputs could include completion of the action, an error message if the action cannot be performed, or an abnormal result if a bug is present. This gives you the expected behavior of the application based on its architecture, design, and specifications, and provides a baseline for the testing process.

3. Make a list of all the key routes: Identify the key paths or routes the application will take during the test. You can determine these paths by analyzing the application's architecture and design or its expected functions. A key route may involve interaction between different components, such as the user interface, database, and application logic.

4. Identify sub-functions for deep-level testing: Identify the application's sub-functions (specific parts or components) that must be tested in detail. Break the application down into smaller components and test them individually to ensure they function properly. You can prioritize these sub-functions based on their importance and criticality to the application's overall functionality.

5. Make a list of inputs for each sub-function: Having identified the sub-functions, list all the inputs that can be used to test each of them. These inputs can come from black and white box testing.

6. Make a list of all expected outputs from each sub-function: For each input to a sub-function, determine its expected output. This establishes the expected behavior of each sub-function.

7. Run the sub-function test cases: Create test cases for each sub-function and test them individually. Each test case should be run with the specified input values, verifying that the actual output matches the expected output.

8. Repeat steps 5-7 for each additional sub-function identified in step 4.

9. Check the reliability of the sub-function results: After executing the test cases, analyze the results to check whether each sub-function is working accurately or whether there are defects that need to be addressed.

10. Repeat steps 7 and 8 for the remaining sub-functions to ensure that they meet the expected results.

Grey Box Testing Tools

Grey box testing can be performed with both manual and automated approaches. In the manual approach, testers use their partial knowledge of the application's design and architecture to find issues and perform manual tests to overcome the challenges arising from them. Although it is time-consuming, it is an effective way to identify hard-to-find bugs. To reduce the time and effort of grey box tests, automated testing is crucial for repetitive tasks and for components of applications accessible through APIs, SDKs, and other documented interfaces; it uses automation testing frameworks and tools that make testing more efficient than manual testing. Here are some tools for grey box tests:

Selenium: Selenium is an open-source automation testing framework for testing web applications in browsers. You can use Selenium for cross-browser testing to check an application's behavior across various browsers and versions.

Appium: Appium is an open-source test automation framework for native, hybrid, and mobile web applications. You can run automated tests with Appium on Android, iOS, and Windows platforms.

Rational Functional Tester: An automated functional testing tool that helps create, manage, and execute grey box tests for software applications. It also supports other testing needs, such as regression and data-driven testing.

Cucumber: An open-source testing tool supporting Behavior-Driven Development (BDD). You can write tests in natural language and use a plain-text format called Gherkin to define tests.
This can be done in different programming languages, such as Java, Ruby, Python, and JavaScript.

Grey Box Testing on the Cloud

The tools mentioned above run grey box tests effectively and help ensure the quality of the software application. However, cloud-based testing platforms can unlock the full capability of such automation testing frameworks and tools. They integrate frameworks like Selenium, Cypress, Playwright, Cucumber, and Appium, enabling organizations to deploy and manage software applications and their test infrastructure in the cloud. For automation testing, Selenium is one of the most widely used frameworks for grey box testing. If you don't want to waste time and effort setting up an in-house test infrastructure, consider a cloud-based digital experience testing platform.

Challenges of Grey Box Testing

Grey box testing is an important part of developing software applications and ensuring their functionality. However, it has some challenges that testers and developers should know how to overcome.

- When an application component under test fails due to errors or bugs, the ongoing test run has to be terminated.
- Grey box tests are performed with only limited knowledge of the internal workings of the component under test, which can make it difficult to find all potential defects.
- Creating and executing grey box test cases is challenging because testers need to understand the interaction between the application's backend and front-end components.
- Ongoing maintenance effort is often required: changes in the application's architecture or design directly impact the test cases and their execution.
- Grey box tests require access to the source code or specialized testing tools, and such access may not be available to all testers.
Best Practices of Grey Box Testing

To overcome the challenges of grey box testing and optimize its execution in software application development, it is important to follow these best practices:

- Strive to gain a deep understanding of the software application under test, including its data flow, architecture, and backend and front-end components. Such knowledge helps you create effective test cases and identify potential defects.
- Leverage automation tools when performing grey box tests. Automating repetitive test cases saves testing time.
- Combine testing techniques such as functional, regression, and exploratory testing. This ensures comprehensive coverage of the software application and early detection of bugs.

Conclusion

Grey box testing is one of the most beneficial software testing types, combining white box and black box testing. With it, testers can access a software application's internal workings without becoming overly biased or subjective in their testing approach. In a nutshell, grey box testing is one of the most effective techniques for ensuring the quality and reliability of software applications. It gives a complete view of an application's functionality and performance, allowing teams to identify and fix potential issues before releasing the application to the market.
While researching loggers, I came across the library LogT. This frontend tool differs from other loggers with its unique feature of colorful labels. In coding, a log statement is used to record events or messages during the execution of a program, usually for debugging. LogT is a colorful logger made for browsers, and you can use it in your front-end projects.

Features of LogT

- Colorful labels: This library distinguishes its logs by making their labels visually unique.
- Small size: The library is only 1.46 KB gzipped, which makes it tiny.
- Log levels: Only logs at or above the configured logLevel are shown; messages below it are hidden by the logger. The behavior at each log level is shown below:

```javascript
const logger = new LogT(0);

logger.warn("TAG", "warning message"); // Will not print anything to console
logger.info("TAG", "info message"); // Will not print anything to console
logger.debug("TAG", "debug message"); // Will not print anything to console
logger.silly("TAG", "silly message"); // Will not print anything to console

logger.showHidden(1); // Will print the warning message
logger.showHidden(2); // Will print the info message
logger.showHidden(5); // Will print the debug as well as silly messages
```

As the code block above shows, the logger prints only the messages its log level allows, and showHidden() reveals hidden messages up to the given level.

Hiding less important log messages with the log level: To use the custom logger everywhere on a web page, you have to override the default console methods. This is where readConsole() comes in: it replaces the default console.error, console.warn, console.log, and console.info with the LogT logger.
Example:

```javascript
const logger = new LogT(0);
logger.readConsole();

console.error(new Error("test error")); // same as logger.error('console', new Error('test error'));
console.warn("warn message"); // same as logger.warn('console', 'warn message');
console.info("info message"); // same as logger.info('console', 'info message');
console.log("log message"); // same as logger.debug('console', 'log message');
```

The code block above shows how the custom LogT logger overrides the default console. The library is built with TypeScript, so it ships detailed type information and supports autocompletion.

Installation

To install this library, run the following command:

```shell
$ npm i logt -S
```

When the installation finishes, check your package.json. Depending on the version, it will contain an entry like:

```json
"logt": "^1.4.5",
```

Usage

This logger can be used in front-end projects either as an ES6 module or directly in HTML.

As an ES6 Module

ES6 modules are a feature of the ECMAScript 6 (ES6) language specification that provides a way to organize and share JavaScript code in a modular way. Create a file and name it logger.ts:

```javascript
import LogT from "logt";

const LOG_TAG = "sample tag";
let logger;

if (process.env.NODE_ENV === "production") {
  logger = new LogT("error"); // or logger = new LogT("none");
} else {
  logger = new LogT("silly");
}

// See the documentation for readConsole() for usage.
// logger.readConsole();

logger.error(LOG_TAG, new Error("example error"));
```

Usage in HTML

LogT can also be used directly in HTML, as shown below:

```html
<script src="https://cdn.jsdelivr.net/gh/sidhantpanda/logt/dist/logt.min.js"></script>
<script>
  var LOG_TAG = 'sample tag';
  var logger = createLogger('error');
  // See the documentation for readConsole() for usage.
  // logger.readConsole();
  logger.error(LOG_TAG, new Error('example error'));
</script>
```

Documentation

Logger initialization: In computer programming, logger initialization refers to the process of setting up and configuring a logging module in your software application. The purpose of a logger is to record messages from the application, such as errors, warnings, and informational messages, which can be used for debugging, performance analysis, and auditing. During logger initialization, you typically specify the logging level, the output destination (such as a file, the console, or a remote server), and the format of the log messages. Logger initialization in LogT is shown in the code block below:

```javascript
import LogT from "logt";

// Available log levels:
// -1 | 0 | 1 | 2 | 3 | 4 | 5
// 'none' | 'error' | 'warn' | 'info' | 'verbose' | 'debug' | 'silly'

// noneLogger will print nothing
let noneLogger = new LogT(-1); // or new LogT("none");
// if included via HTML script
noneLogger = createLogger(-1); // or createLogger("none");

// errorLogger will print only error messages
let errorLogger = new LogT(0); // or new LogT("error");
// if included via HTML script
errorLogger = createLogger(0); // or createLogger("error");

// sillyLogger will print all messages
let sillyLogger = new LogT(5); // or new LogT("silly");
// if included via HTML script
sillyLogger = createLogger(5); // or createLogger("silly");
```

If any other value is supplied to the constructor, a default value of none is used.

Conclusion

If you want a colorful logger for your front-end project, LogT is an excellent library to reach for.
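The console-override technique behind readConsole() can be illustrated generically. The sketch below is a hypothetical re-implementation of the idea (save the original console methods, swap in wrappers that tag and record messages, then restore them), not LogT's actual source code:

```javascript
// Keep references to the original console methods so they can be restored.
const originalLog = console.log;
const originalWarn = console.warn;

const captured = [];

// Replace console.log and console.warn with wrappers that record each
// message with a level tag before delegating to the original method.
console.log = (...args) => {
  captured.push({ level: "debug", args });
  originalLog("[console]", ...args);
};
console.warn = (...args) => {
  captured.push({ level: "warn", args });
  originalWarn("[console]", ...args);
};

console.log("log message");
console.warn("warn message");

// Restore the default console behavior.
console.log = originalLog;
console.warn = originalWarn;

console.log("captured levels:", captured.map((c) => c.level).join(","));
```

Because the wrappers delegate to the saved originals, nothing is lost from the normal console output; the logger simply gets a chance to label and filter each message first.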