The final step in the SDLC, and arguably the most crucial, is the testing, deployment, and maintenance of development environments and applications. DZone's category for these SDLC stages serves as the culmination of application planning, design, and coding. The Zones in this category offer invaluable insights to help developers test, observe, deliver, deploy, and maintain their development and production environments.
In the SDLC, deployment is the final lever that must be pulled to make an application or system ready for use. Whether it's a bug fix or new release, the deployment phase is the culminating event to see how something works in production. This Zone covers resources on all developers’ deployment necessities, including configuration management, pull requests, version control, package managers, and more.
The cultural movement that is DevOps — which, in short, encourages close collaboration among developers, IT operations, and system admins — also encompasses a set of tools, techniques, and practices. As part of DevOps, the CI/CD process incorporates automation into the SDLC, allowing teams to integrate and deliver incremental changes iteratively and at a quicker pace. Together, these human- and technology-oriented elements enable smooth, fast, and quality software releases. This Zone is your go-to source on all things DevOps and CI/CD (end to end!).
A developer's work is never truly finished once a feature or change is deployed. There is always a need for constant maintenance to ensure that a product or application continues to run as it should and is configured to scale. This Zone focuses on all your maintenance must-haves — from ensuring that your infrastructure is set up to manage various loads and improving software and data quality to tackling incident management, quality assurance, and more.
Modern systems span numerous architectures and technologies and are becoming exponentially more modular, dynamic, and distributed in nature. These complexities also pose new challenges for developers and SRE teams that are charged with ensuring the availability, reliability, and successful performance of their systems and infrastructure. Here, you will find resources about the tools, skills, and practices to implement for a strategic, holistic approach to system-wide observability and application monitoring.
The Testing, Tools, and Frameworks Zone encapsulates one of the final stages of the SDLC as it ensures that your application and/or environment is ready for deployment. From walking you through the tools and frameworks tailored to your specific development needs to leveraging testing practices to evaluate and verify that your product or application does what it is required to do, this Zone covers everything you need to set yourself up for success.
Microservices and Containerization
According to our 2022 Microservices survey, 93% of our developer respondents work for an organization that runs microservices. This number is up from 74% when we asked this question in our 2021 Containers survey. With most organizations running microservices and leveraging containers, we no longer have to discuss the need to adopt these practices, but rather how to scale them to benefit organizations and development teams. So where do adoption and scaling practices of microservices and containers go from here? In DZone's 2022 Trend Report, Microservices and Containerization, our research and expert contributors dive into various cloud architecture practices, microservice orchestration techniques, security, and advice on design principles. The goal of this Trend Report is to explore the current state of microservices and containerized environments to help developers face the challenges of complex architectural patterns.
Getting Started With OpenTelemetry
DevOps has become extremely popular in recent years. As a result, companies are projected to spend nearly $58 billion on DevOps technology by 2030. Unfortunately, some companies have difficulty effectively managing their DevOps strategy because they lack the storage space to host the requisite tools. One of the best things you can do is store your DevOps applications in the cloud. This will make it easier to ensure you have the storage space needed to handle the DevOps process more seamlessly. This includes using CloudSecOps and DevOps as a Service. While cloud computing offers a number of phenomenal advantages to DevOps developers, they often run into issues since they fail to utilize the right cloud solutions. Fortunately, choosing the right cloud services is easy if you have done your due diligence. Choose the Right Cloud Hosting Services as a DevOps Developer Choosing the right cloud storage provider is essential for any DevOps team to ensure safe and reliable data storage and easy access to all the necessary files. However, since so many cloud storage providers are available, choosing the best can be overwhelming. Google Cloud has a number of excellent services for DevOps teams. Google Cloud actually has a cloud-build service specifically for DevOps engineers. As a result, DevOps teams can seamlessly manage their workflows with continuous integration and continuous deployment pipelines. Developers don’t even need to add additional third-party applications to make this possible. However, some DevOps engineers prefer to host their applications on their own servers because this provides greater autonomy. However, choosing a cloud hosting platform for a DevOps project is easier said than done. One of the first things to consider is how much space you need. Different providers offer varying amounts of space depending on their plans, so it is important to determine how much space will be required before deciding. DevOps as a service project require extensive storage space, so teams need to be realistic about how much space is necessary. Additionally, look at the cost and features offered by each provider. Next, check out what security measures they have in place. While DevOps code is inherently secure, it is still important to ensure the space the project is hosted on is also secure. An insecure workspace exponentially increases the risk of a data breach. When choosing the best cloud storage providers, security should always be a priority. Ensure they have strong encryption protocols and authentication processes to protect your data from unauthorized access or malicious attacks. Additionally, all DevOps projects should be encrypted to minimize the risk of a data breach. Finally, read reviews from other users who have used the service before making your final decision – this will give you an idea of how reliable and user-friendly each provider is. With these tips in mind, you should be able to easily compare the top cloud storage providers and choose one that meets all of your requirements without breaking the bank! Are There Any Features or Services That Make One Provider Stand Out From the Rest? There are certain features and services that can make a cloud storage provider splendid and stand out from the rest. For example, some providers offer 24/7 customer support, while others offer more competitive pricing. Additionally, some providers may offer additional services. 
Therefore, it is imperative to do thorough research before making a decision so you can find the one that best meets your needs and budget. Also, look for reviews from other customers to get an idea of how reliable and helpful the provider is. This will help you make an informed decision when selecting a service provider. How to Evaluate the Security Measures of Cloud Storage Providers When evaluating the security measures of cloud storage providers, it’s important to consider a few key factors. First, look into the provider’s data encryption methods to protect your DevOps applications. Ensure all data is encrypted both in transit and at rest, using strong algorithms such as AES-256 or higher. You can also check if additional security is given, such as two-factor authentication. This will help protect your account from unauthorized access even if someone manages to guess your password. DevOps projects should always be backed. Therefore, it’s also important to make sure that the provider has a reliable backup system in place, so you can easily recover any lost or corrupted files. Finally, research their track record when it comes to security breaches and other incidents – this will give you an idea of how secure their systems really are. Identifying User-Friendly Interfaces Among Different Cloud Storage Platforms There are some considerations to be made when it comes to identifying user-friendly options. One of them is the interface; it should definitely be intuitive and easy to navigate. Also, it needs a clean design that is visually appealing and not overly cluttered with unnecessary features. Also, the platform should offer helpful tutorials or guides for newbies. Furthermore, it should provide flawless and professional customer support in case of any nasty technical issues or questions that arise. Finally, it’s important to ensure the platform offers secure data encryption and other security measures to protect user data from potential threats. Considering all these factors when evaluating different cloud storage platforms, you can easily identify which offers user-friendly interfaces that meet your needs. Understanding Compliance Requirements When Selecting a Cloud Storage Provider Compliance requirements matter a lot when it comes to the selection of a cloud storage provider. Depending on the type of data you are storing, there may be regulations and standards that need to be followed. For example, suppose you are storing sensitive customer information such as credit card numbers or health records. In that case, you will need to ensure that your cloud storage provider meets all applicable security and privacy laws. Moreover, some industries have their own set of compliance requirements that must be met in order for a cloud storage provider to be used. Therefore, it is important to research these requirements before selecting a provider to ensure they meet all necessary criteria. Furthermore, it is also important to consider the cost associated with meeting these compliance requirements when selecting a cloud storage provider. For example, some providers may charge extra fees for meeting certain standards or providing additional security measures. Make sure you factor this into your decision-making process when choosing a cloud storage provider so that you can select one that fits within your budget while still meeting all necessary compliance requirements. 
Choose the Right Cloud Solutions for Your DevOps Projects
A growing number of DevOps teams are hosting their applications on the cloud. While you can utilize third-party services such as Google Cloud, it may be better to host your applications on a private cloud server instead. It is important to consider the features and pricing of each cloud storage provider to determine which one is best suited for your unique needs. When deciding, consider security, reliability, scalability, and customer support.
The evolution of software engineering over the last decade has led to the emergence of numerous job roles. So, how different are a software engineer, a DevOps engineer, a site reliability engineer, and a cloud engineer from each other? In this article, we drill down and compare the differences between these roles and their functions.
Introduction
As the IT field has evolved over the years, different job roles have emerged, leading to confusion over the differences between site reliability engineer vs. software engineer vs. cloud engineer vs. DevOps engineer. For some people, they all seem similar, but in reality, they are somewhat different. The main idea behind all these terms is to bridge the gap between the development and operations teams. Even though these roles are correlated, what makes them different is the scope of the role.
What Is Software Engineering?
The traditional role of a software engineer is to apply the principles of engineering to software development. This includes the use of programming languages to create, analyze, and modify existing software and to design and test the user application. A person doing the job of a software engineer usually has a bachelor's degree in science or software engineering and has experience in computer systems and languages (like Python, C, Go, JavaScript, etc.). This is what the typical day of a software engineer looks like:
Analyze the user requirements.
Write code based on the user requirements.
Perform maintenance tasks and integrate the application with the existing system.
Do a Proof of Concept (POC) on new technology before implementing it.
Execute and develop the project plan.
So, at a high level, a software engineer's role is to architect applications, develop code, and have processes in place to create solutions for customers. Now you understand what a software engineer is and what their role is. In the next section, let's try to understand the difference between software and DevOps engineers.
Software Engineer vs. DevOps
Back in the day, software engineers and operations had a lot of contention. Software engineers passed their code to the system admin, and it was the system admin's responsibility to keep that code running in production. The software engineer had little knowledge of operations practices, and the system admin had little knowledge of the codebase. Software engineers were concerned with shipping code, and the system admin was concerned about reliability. On the one hand, software engineers want to move faster to get their features out more quickly. On the other hand, system admins want to move more slowly to keep things reliable. This kind of misalignment often caused tension within the organization. Enter DevOps, a set of practices and a culture designed to break down these barriers between software engineers, system admins, and other parts of the organization. DevOps is made up of the two words Dev and Ops, namely development and operations, and it's the practice of allowing a single team to manage the entire application development lifecycle, that is, development, testing, deployment, monitoring, and operation. They achieve that by frequently releasing small changes using continuous integration and continuous deployment (CI/CD). DevOps is broken down into five main areas:
Reduce organization silos: By breaking down barriers across teams, we can increase collaboration and throughput.
Accept failure as normal: Computers are inherently unreliable, so we can't expect perfection, and when we introduce humans into the system, we can expect more imperfection.
Implement gradual changes: Not only are small incremental changes easier to review, but if a gradual change introduces a bug in production, it allows us to reduce the mean time to recover and makes it simple to roll back.
Leverage tooling and automation: Reduce manual work by automating as much as possible.
Measure everything: Measurement is a critical gauge for success, and without a way to measure whether our first four pillars were successful, we would have no way of knowing if they were.
DevOps vs. SRE
If we think of DevOps as a philosophy, Site Reliability Engineering (SRE) is a prescriptive way of accomplishing that philosophy. So if DevOps were an interface in a programming language, then SRE would be a concrete class that implements DevOps. In DevOps, when we talk about eliminating organization silos, SRE shares ownership of production with developers. SRE uses the same tools as DevOps to ensure everyone has the same view and the same approach to working in production. SRE practices blameless postmortems when accepting incidents and failures, which ensures that a failure that happens in production doesn't repeat itself the same way more than once. SRE accepts failure as normal by encoding the concept of an error budget, which defines how much the system is allowed to go out of spec. SRE follows the philosophy of canary releases in terms of gradual changes, where a release reaches only a small percentage of the fleet before it's rolled out to all users. In terms of tooling and automation, the main idea is to eliminate manual work as much as possible. For measuring everything, SRE measures the health and reliability of the system (a small measurement sketch appears at the end of this article). As an SRE, you must have a strong background in coding, and you should have the basics covered in Linux, kernels, networking, and computer science. To sum up, SRE and DevOps are not two competing methods, but close friends designed to break down organizational barriers to deliver better and faster software. Both of them intend to keep the application up and running so that the user is not impacted. On the one hand, SRE is more applicable to production environments (as it's the combination of software engineering plus system administration). In contrast, DevOps is more for non-production environments (and sometimes production). Their main task is to keep the environment up and running and to automate as much as possible.
What Skills Does a DevOps Engineer or SRE Need?
These are some of the technical skills companies are looking for when hiring a DevOps engineer or SRE:
Operating system fundamentals: This mainly means Linux, as most of the server market is dominated by Linux (only a handful of companies use Windows as a server in the production environment).
Programming skills: This is one of the must-have skills, as you want to automate as much as possible, and the only way to achieve that is with a programming language. Most engineers use Python or shell scripts for automation, but where speed is key, Go is the ultimate choice.
Networking knowledge: As most companies migrate to the cloud and most of the heavy lifting is done by the cloud provider, you should have basic networking knowledge.
Cloud knowledge: As mentioned earlier, as most companies migrate to the cloud, you should be familiar with at least one cloud provider like AWS, GCP, or Azure.
Standard tools: This is job-specific, but with the current industry trend, you should be familiar with the modern DevOps tools like Git, Jenkins, Docker, Kubernetes, Terraform, and so on. As mentioned earlier, this is job-specific and depends upon the current project requirements and scope.
So in the modern context, an SRE/DevOps engineer is a software engineer whose focus area is infrastructure and operations. They take care of the operational tasks and automate them; in the past, these tasks were handled by the operations team, often manually.
Cloud Engineer
SRE and DevOps are standard practices, whereas the cloud engineer role is specific to a cloud platform, e.g., AWS, Google Cloud, Azure, etc. The cloud engineer role covers the delivery and optimization of IT services and workloads running in the cloud. The advantages of using the cloud in your organization are:
Cost: As the number of public cloud providers increases and competition becomes cutthroat, the organization benefits, as all cloud providers try to slash their prices to compete.
Maintenance: Companies using the cloud need not worry about maintaining an expensive onsite network or system architecture. Instead, they can collaborate with the cloud service provider to get support for all server and networking needs.
Scalability: Using the cloud has other advantages, like getting virtually unlimited storage and processing power; but obviously, it incurs costs.
Cloud engineer roles can be specific to architecting (designing cloud solutions), administration (making sure the system is up and running all the time), or development (coding to automate cloud resources). Some of the responsibilities of a cloud engineer are as follows:
Migrate on-premises applications to the cloud.
Configure resources and components like security, databases, servers, etc.
Deploy the application in the cloud.
Monitor the application in the cloud.
Types of Cloud Engineers
There are three main types of cloud engineers:
Solution architect: The solution architect is responsible for migrating organization applications to the cloud. They are responsible for the design and deployment of cloud applications and for cost optimization.
Cloud developer: A cloud developer is responsible for developing cloud-native applications. They are responsible for developing, deploying, and debugging applications in the cloud.
SysOps engineer: A SysOps engineer role is similar to the system administrator, and they are responsible for deploying and maintaining the application in the cloud.
In an ideal situation, a cloud engineer combines the skills of an SRE, a DevOps engineer, and a software engineer while specializing in cloud services. But in reality, there is still a skill shortage in the cloud field. Cloud engineers tend to specialize in one area: either they are good developers, or they know cloud services well. Due to this hindrance and skill shortage, some companies resist moving to the cloud and still run their workloads in on-premises data centers. The only way to fill this gap is for companies to train their employees in all aspects. Cloud engineers need to grasp programming skills, and vice versa.
Wrapping Up
Whatever practice you are following in your organization, the main idea is to break down silos and increase collaboration and transparency. Any practice you follow needs to find innovative ways to develop better, more reliable software. As the IT field progresses, these practices will continue to evolve, and new roles will be born.
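The "error budget" and "measure everything" ideas above can be made concrete with a handful of metrics. Below is a minimal sketch, assuming a Prometheus setup and a hypothetical http_requests_total counter labeled with HTTP status codes; the job name and the 99.9% availability target are illustrative, not prescriptive.
YAML
groups:
  - name: checkout-availability-slo
    rules:
      # Fraction of requests that failed over the last 30 days
      # (http_requests_total and the "checkout" job label are assumed names).
      - record: job:http_error_ratio:30d
        expr: |
          sum(rate(http_requests_total{job="checkout", code=~"5.."}[30d]))
          /
          sum(rate(http_requests_total{job="checkout"}[30d]))
      # Remaining error budget for a 99.9% availability SLO:
      # 1 means the full budget is left, 0 means it is exhausted.
      - record: job:error_budget_remaining:30d
        expr: 1 - (job:http_error_ratio:30d / 0.001)
With rules like these, "accepting failure as normal" becomes a number the team can watch: a 99.9% target leaves roughly 43 minutes of downtime per 30 days, and releases slow down as the remaining budget approaches zero.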
Companies are using DevOps to respond quickly to changing market dynamics and customer requirements. In any case, the biggest bottleneck in implementing a successful DevOps framework is testing. Many QA organizations leverage DevOps frameworks but still prefer to test their software manually. Unfortunately, this means less visibility and project backlogs, eventually leading to project delays and cost overruns. Smaller budgets and the desire for faster delivery have fueled the need for better approaches to development and testing. With the right testing principles, DevOps can help shorten the software development lifecycle (SDLC) without costly mistakes. Many organizations are adapting their traditional sequential approach to software development to be better equipped to test earlier and at all stages.
Everyday Test Automation Challenges
Development Time – Many companies think about developing their test automation frameworks in-house, but this is usually not a good idea because it is time-consuming and would cost significant capital to develop from scratch.
Learning Curve – Companies that use code-based open-source tools like Selenium rely on tech-savvy people to manage their test automation framework. This is a big problem because non-technical business users may find it difficult and time-consuming to learn the tools. In addition, technical users and teams have more important tasks to perform than testing.
Maintenance Costs – Most test automation tools use static scripts, which means they cannot quickly adapt to changes that occur due to UI changes in the form of new screens, buttons, user flows, or user input.
What Is the Shift-Left Strategy?
It is part of an organizational pattern known as DevSecOps (a collaboration between development, security, and operations) that ensures application security at the earliest stages of the development lifecycle. The term "shift left" refers to moving a process left on the traditional linear depiction of the software development lifecycle (SDLC). In DevOps, security and testing are two of the most commonly discussed topics for shifting left.
Shift-Left Testing
Testing applications was traditionally done at the end of development before they were sent to security teams. Applications that did not meet quality standards, did not function properly, or otherwise did not meet requirements would be sent back into development for additional changes. This resulted in significant bottlenecks during the SDLC and was incompatible with DevOps methodologies, which emphasize development velocity. As a result of shift-left testing, defects can be identified and fixed much earlier in the software development process. This streamlines the development cycle, dramatically improves quality, and enables faster progression to security analysis and deployment in later stages.
Shift-Left Security
In recent years, it has been standard practice for security testing to follow application testing in the development cycle. At that point, various types of analysis and security testing would be conducted by security teams. The security testing results would determine whether the application could be deployed into production or whether it had to be rejected and returned to developers for remediation. This caused long delays in development or increased the risk of releasing software without the necessary security measures. Shifting security left means incorporating security measures throughout the development lifecycle rather than at the end.
By shifting security left, the software is designed with security best practices integrated. Potential security issues and vulnerabilities are identified and fixed as early as possible in the development process, making addressing security issues easier, faster, and more affordable. It is no secret that IT has shifted left over the last two decades. It is possible to operate development infrastructure on a self-service basis today because it is fully automated:
With AWS, GCP, or Azure, developers can easily provision resources without involving IT or operations.
CI/CD processes automatically create, stage, and deploy test, staging, and production environments in the cloud or on-premises and tear them down when they are no longer required.
CloudFormation and Terraform are widely used to deploy environments declaratively using Infrastructure as Code (IaC).
With Kubernetes, organizations can provision containerized workloads dynamically using adaptive, automated processes.
As a result of this shift, development productivity and velocity have increased tremendously, raising serious security concerns. The fast-paced environment leaves hardly any time for post-development security reviews or analysis of cloud infrastructure configurations. As a result, problems are often discovered too late to be fixed before the next development sprint.
What Is the Shift-Left Testing Principle?
When developers test early in the development cycle, they can catch problems early and address them before they reach the production environment. By discovering issues earlier, developers don't waste time applying workarounds to flawed implementations, and operations teams don't have to maintain faulty applications. To improve the quality of an application, developers can identify the root cause of issues and modify the architecture or underlying components. The shift-left approach to testing pushes testing to the left, or the earlier stages of the pipeline. By doing this, teams can find and fix bugs as soon as possible during the development process (a minimal CI pipeline sketch illustrating this idea appears at the end of this article). In addition to increasing collaboration between testers and developers, shift-left testing makes it a whole lot easier to identify the key aspects that need testing early in development. A major benefit of shifting testing left is that testers are involved in the whole cycle, including the planning phase. Testing becomes part of the developer's day-to-day activities as they become competent in automated testing technologies. Software is designed from the ground up with quality in mind when testing is part of the organization's DNA.
Benefits of Implementing a Shift-Left Strategy
A key benefit of "shift-left" testing is that it reduces overall development time. However, two key DevOps practices must be implemented to shift left: continuous testing and continuous deployment.
Increased Speed of Delivery
It's not rocket science that the sooner you start, the sooner you finish. Identifying critical bugs early in the software development cycle allows you to fix them sooner and more efficiently. The result is a significant decrease in the amount of time between releases and a faster delivery time.
Improved Test Coverage
By starting test execution right at the start of the development process, all software features, functionality, and performance can be evaluated quickly. Test coverage percentages increase naturally when shift-left testing is performed. The overall quality of the software is significantly enhanced by increased test coverage.
Efficient Workflow
Ultimately, shifting left is worth the effort and time it takes to implement. It allows the QA team to go deeper into the product and implement innovative testing solutions. Furthermore, it allows the testing team to become more comfortable with the tools and techniques involved. In addition, shift-left testing simplifies several aspects of software development.
Lower Development and Testing Cost
Debugging is one of the most difficult aspects of software development. Usually, the cost of fixing a bug increases significantly as the software progresses through the SDLC. Therefore, the earlier you find your bugs, the easier they are to fix. Take the example of a payment app that discovers a security vulnerability only after the release of its latest app version. Sure, it would still have cost something if the team had found the vulnerability earlier in development. But now, the company will have to spend significantly more time, effort, and money to fix the problem. In addition, the complexity of implementing changes in a production environment makes it difficult to do anything after the fact, not to mention the associated total cost of late maintenance. Gartner estimates the cost of network outages at $5,600 per minute – a total of over $300,000 per hour.
Improves Product Quality
The shift-left testing approach positively impacts overall code quality with rigorous and frequent code quality checks. In addition, it facilitates timely correspondence between stakeholders, developers, and testers and ensures timely feedback, which helps improve code quality. This means that your customers receive a stable and high-quality end product. Also, you can listen to Siddharth Kaushal, who shared how shift-left testing and automation tools can make the process easily consumable by agile teams.
Conclusion
Once you mix shift left with leading DevOps practices – continuous testing and continuous deployment – you lay the foundation for shift left to win. Moreover, shift left is essential in a DevOps environment because:
Teams discover and report bugs quickly.
Features are released quickly.
The quality of the software is outstanding.
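To ground the shift-left discussion above in something concrete, here is a minimal sketch of a CI workflow that runs unit tests and a dependency vulnerability scan on every push and pull request, long before anything reaches a staging environment. It assumes a Java/Maven project built on GitHub Actions; the branch name, Java version, and choice of scanner (the OWASP dependency-check Maven plugin) are illustrative and would be adapted to your own stack.
YAML
name: shift-left-checks
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: '17'
      # Shift-left testing: unit tests run on every change, not at the end of the cycle.
      - name: Run unit tests
        run: mvn -B test
      # Shift-left security: scan third-party dependencies for known vulnerabilities
      # in the same pipeline, so issues surface while they are still cheap to fix.
      - name: Dependency vulnerability scan
        run: mvn -B org.owasp:dependency-check-maven:check
Because both steps gate the merge, a failing test or a vulnerable dependency is caught while the change is still small, which is exactly the cost argument made above.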
Since organizations began migrating from building monolith applications to microservices, containerization technology has been on the rise. With applications running on hundreds and thousands of containerized environments, an effective tool to manage and orchestrate those containers became essential. Kubernetes (K8s)—an open-source container orchestration tool from Google—became popular with features that improved the deployment process for companies. With its high flexibility and scalability features, Kubernetes has emerged as the leading container orchestration tool, and over 60% of companies have already adopted Kubernetes in 2022. With more and more companies adopting containerization technology and Kubernetes clusters for deployment, it makes sense to implement CI/CD pipelines for delivering Kubernetes in an automated fashion. So in this article, we’ll cover the following: What is a CI/CD pipeline? Why should you use CI/CD for Kubernetes? Various stages of Kubernetes app delivery. Automating CI/CD process using open source Devtron platform. What Is a CI/CD Pipeline? Continuous Integration and Continuous Deployment (CI/CD) pipeline represent an automatic workflow that continuously integrates code developed by software developers and deploys them into a target environment with less human intervention. Before CI/CD pipelines, developers were manually taking the code, building it into an application, and then deploying it into testing servers. Then, on the approval from testers, developers would throw their code off the wall for the Ops team to deploy the code into production. The idea somewhat worked fine with monolithic applications when deployment frequency was once in a couple of months. But with the advent of microservices, developers started building smaller use cases faster and deployed them frequently. The process of manually handling the application after the code commit was repetitive, frustrating, and prone to errors. This is when agile methodologies and DevOps principles flourished with CI/CD at its core. The idea is to build and ship incremental changes into production faster and more frequently. A CI/CD pipeline made the entire process automatic, and high-quality codes were shipped to production quickly and efficiently. The Two Primary Stages of a CI/CD Pipeline 1. Continuous Integration or CI Pipeline The central idea in this stage is to automatically build the software whenever new software is developed by developers. If developments happen every day, there should be a mechanism to build and test it every day. This is sometimes referred to as the build pipeline. The final application or artifact is then pushed to a repository after multiple tests. 2. Continuous Deployment or CD Pipeline The continuous deployment stage refers to pulling the artifact from the repository and deploying it frequently and safely. A CD pipeline is used to automate the deployment of applications into the test, staging, and production environments with less human intervention. Note: Another term people use interchangeably when referring to Continuous Deployment is Continuous Delivery, but they’re not the same. As per the book Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation by David Farley and Jez Humble, it’s the process of releasing changes of all types—including new features, configuration changes, bug fixes, and experiments—into production, or into the hands of users, safely and quickly in a sustainable way. 
Continuous Delivery comprises the entire software delivery process, i.e., planning, building, testing, and releasing software to the market continuously. Continuous integration and continuous deployment can be seen as two parts of it. Now, let us discuss why to use CI/CD pipelines for Kubernetes applications.
Benefits of Using a CI/CD Pipeline for Kubernetes
By using a CI/CD pipeline for Kubernetes, one can reap several crucial benefits:
Reliable and Cheaper Deployments
A CI/CD pipeline deploying on Kubernetes facilitates controlled release of the software, as DevOps engineers can set up staged releases, like blue-green and canary deployments. This helps in achieving zero downtime during a release and reduces the risk of releasing the application to all users at once. Automating the entire SDLC (software development lifecycle) using a CI/CD pipeline helps lower costs by cutting many fixed costs associated with the release process.
Faster Releases
Release cycles that used to take weeks and months to complete have come down to days by implementing CI/CD workflows. In fact, some organizations even deploy multiple times a day. Developers and the Ops team can work together with a CI/CD pipeline and quickly resolve bottlenecks in the release process, including the rework that used to delay releases. With automated workflows, the team can release apps frequently and quickly without getting burnt out.
High-Quality Products
One of the major benefits of implementing a CI/CD pipeline is that it helps to integrate testing continuously throughout the SDLC. You can configure the pipelines to stop proceeding to the next stage if certain conditions are not met, such as failing the deployment pipeline if the artifact has not passed functional or security scanning tests. Due to this, issues are detected early, and the chances of having bugs in the production environment become slim. This ensures that quality is built into the products from the beginning, and the end users get better products. Figure C illustrates the benefits of implementing CI/CD pipelines.
Now, let's dive deeper into the various stages of Kubernetes app delivery, which can be made part of the CI/CD pipeline.
Stages of Kubernetes App Delivery
Below are the different stages involved in a CI/CD pipeline for Kubernetes application delivery and the tools developers and DevOps teams use at each stage. Figure D represents the stages of Kubernetes app delivery.
Code
Process: Coding is the stage where developers write code for applications. Once new code is written, it's pushed to a remote repository, the central storage where application code and configurations are kept. It's a shared repository among developers, and they continuously integrate code changes into the repository, mostly daily. These changes in the code repository trigger the CI pipeline.
Tools: GitHub, GitLab, Bitbucket
Build
Process: Once changes are made in the application code repository, the code is packaged into a single executable file called an artifact. This allows flexibility in moving the file around until it's deployed. The process of packaging the application and creating an artifact is called building. The built artifact is then made into a container image that will be deployed on Kubernetes clusters.
Tools: Maven, Gradle, Jenkins, Dockerfile, Buildpacks
Test
Process: Once the container image is built, the DevOps team will ensure it undergoes multiple functional tests such as unit tests, integration tests, and smoke tests. Unit tests ensure small pieces of code (units), like functions, are working properly. Integration tests check how different components of the code, like different modules, hold up together as a group. Finally, smoke tests check if the build is stable enough to proceed. After the functional tests are done, there will be another sub-stage for testing and verifying security vulnerabilities. DevSecOps would execute two types of security tests, i.e., static application security testing (SAST) and dynamic application security testing (DAST), to detect problems such as container images containing vulnerable packages. After passing all the functional and security tests, the image is then stored in a container image repository.
Tools: JUnit, Selenium, Clair, SonarQube
All the above steps make up the CI or build pipeline.
Deploy
Process: In the deployment stage, a container image is pulled from the registry and deployed into a Kubernetes cluster running in testing, pre-production, or production environments. Deploying the image into production is also called a release, and the application will then be available to the end users. Unlike VM-based monolithic apps, deployment is the most challenging part of Kubernetes because of the following reasons:
Developers and DevOps engineers have to handle many Kubernetes resources for a successful deployment.
As there are various ways of deployment, such as using declarative manifest files and Helm charts, enterprises rarely follow a standardized way to deploy their applications.
Multiple deployments of large distributed systems every day can be really frustrating work.
Tools: kubectl, Helm charts
For interested users, we'll show the steps involved in the simple process of deploying an NGINX image into K8s. (Feel free to skip the working example.)
Deploying Nginx in a Kubernetes Cluster (Working Example)
Before deploying, you have to create resource files in K8s so that your application will run in containers. There are many resources, such as Deployments, ReplicaSets, StatefulSets, DaemonSets, Services, ConfigMaps, and many other custom resources. We'll look at how to deploy into Kubernetes with the bare minimum resources or manifest files: a Deployment and a Service. A Deployment workload resource provides declarative updates and describes the desired state, like replication and scaling of Pods. A Service in Kubernetes uses the IP addresses of the Pods to load balance traffic to the Pod replicas. For testing this, you should have a K8s cluster running on a server or locally using Minikube or kubeadm. Now, let's deploy Nginx to a Kubernetes cluster. The Deployment YAML for Nginx, let's name it nginx-deployment.yaml, would look like this:
YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
The Deployment file specifies the container image (which is nginx), declares the desired number of Pods (10), and sets the container port to 80.
Now, create a Service file with the name nginx-service.yaml and paste in the code below:
YAML
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
  type: ClusterIP
Once you have the manifest files, deploy them using the following commands:
# kubectl apply -f nginx-deployment.yaml
# kubectl apply -f nginx-service.yaml
These commands deploy the Nginx server and expose it through a Service inside the cluster (with type: ClusterIP, the Service is reachable only from within the cluster or via kubectl port-forward). You can see the Pods by running the following command:
# kubectl get pods
Also, you can run the following command to get the Service's cluster IP address and use it to reach Nginx from inside the cluster:
# kubectl get svc
If you have noticed, there are multiple steps and configurations one needs to perform while deploying an application. Just imagine the drudgery of the DevOps team when they are tasked with deploying multiple applications into multiple clusters every day.
Monitoring, Health-Check, and Troubleshooting
Process: After the deployment, it's extremely important to monitor the health of new Pods in a K8s cluster. The Ops team may manually log into a cluster and use commands such as kubectl get pods or kubectl describe deployment to determine the health of newly deployed Pods in a cluster or a namespace. In addition, Ops teams may use monitoring and logging tools to understand the performance and behavior of Pods and clusters. To troubleshoot and find issues, mature Kubernetes users will use advanced mechanisms like probes. There are three kinds of probes: liveness probes to ensure an application is running, readiness probes to ensure the application is ready to accept traffic, and startup probes to ensure an application or a service running on a Pod has completely started (a minimal probe configuration sketch appears at the end of this article). Although there are many commands and ways, it can be very difficult for developers or Ops teams to troubleshoot and detect an error because of poor visibility into the many components inside a cluster (node, Pod, controller, security and deployment objects, etc.).
Tools: AppDynamics, Dynatrace, Prometheus, Splunk, kubectl
Progressive Delivery and Rollback
Process: People use advanced deployment strategies like blue-green and canary to roll out their applications gradually and avoid degrading the customer experience. This is also known as progressive delivery. The idea is to allow a small portion of traffic to reach the newly deployed Pods and perform quality and performance regression checks. If the newly deployed application is healthy, the DevOps team gradually rolls it forward. But in case there's an issue with performance or quality, the Ops or SRE team instantly rolls the application back to its older version to ensure a zero-downtime release process.
Tools: kubectl, Kayenta, etc.
Feedback and Notification
Process: Feedback is the heart of any CI/CD process because everybody should know what's happening in the software delivery and deployment process. The best way to ensure effective feedback is to measure the efficacy of the CI/CD process and notify all the stakeholders in real time. In case of failures, it helps DevOps and SREs quickly create incidents in service management tools for further resolution. For example, project managers and business owners would be interested to know if a new feature has been successfully rolled out to the market. Similarly, DevOps would like to know the status of new deployments, clusters, and Pods, and SREs would like to be informed about the health and performance of a new application deployed into production.
Tools: Slack, Discord, MS Teams, Jira, and ServiceNow
Note: All the stages—Monitoring, Progressive Delivery, and Feedback and Notification—fall under Continuous Deployment. If you're a large or mid-size enterprise with tens or hundreds of Kubernetes-based microservices, then you need to serialize your delivery (CI/CD) process using pipelines.
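As a concrete illustration of the probes mentioned in the monitoring stage above, here is a minimal sketch of the three probe types on a standalone Pod; in practice they would sit on the Deployment's Pod template. The Nginx image and the paths, ports, and timings are placeholders to be tuned per application.
YAML
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-probes
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      # Startup probe: give the container time to finish starting
      # before the liveness and readiness probes kick in.
      startupProbe:
        httpGet:
          path: /
          port: 80
        failureThreshold: 30
        periodSeconds: 5
      # Liveness probe: restart the container if it stops responding.
      livenessProbe:
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
      # Readiness probe: only route Service traffic to Pods that report ready.
      readinessProbe:
        httpGet:
          path: /
          port: 80
        periodSeconds: 5
Running kubectl describe pod nginx-with-probes will show probe failures as events, which makes the health of a new rollout much easier to see than scanning raw Pod lists.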
AI-powered test automation tools are the next frontier for testers familiar with traditional methods. With a range of features and benefits, these solutions help you break new ground (and save time) when it comes to test automation. Let's dive into this chapter to discover what they are and how they will help your team be more efficient, accurate, and transparent!
Perfecto Scriptless
Perfecto Scriptless is a solution that enhances Selenium automation, and Selenium is open source. It promises to be a quick onboarding tool with no coding required that can be used for integration, usability, and performance testing. Perfecto can work with Jenkins, TeamCity, Jira, and GitLab, among others. The key selling point of Perfecto Scriptless is the low barrier to entry. Boasting a no-code approach, this is designed to be one of the most straightforward tools for implementing AI in automation testing. It is made to be even easier than regular test automation with Selenium. The Perfecto Scriptless regression testing tool solves many of the problems that testers have with their existing tools. While some testers may find the learning curve too steep to use this tool effectively, others will appreciate the functionality and ease of use.
Testim
Testim is a powerful test automation solution underpinned by AI. Born from Microsoft's accelerator program and acquired by Tricentis in 2022, Testim has continued to innovate and achieve success since its launch. Testim is an automated testing tool backed by Selenium and thus provides the option of sticking with or maintaining your existing automation framework. It aims to aid experienced engineers in writing automated tests and has over 15 integrations with issue-tracking solutions and continuous integration software. This online platform offers AI-powered testing similar to aqua. Testim has a 4.5-star rating on review websites, and its users seem happy overall, but there were reports of a non-intuitive user interface and limited mobile testing. Luckily, Testim also has a free trial, which makes it easy to test out before deciding if you want to commit.
ACCELQ
ACCELQ is a no-code test automation platform with robust algorithms and built-in knowledge bases that quickly enable businesses to build high-quality automation frameworks. Since its launch in 2014, it has been adopted by Intel, Pfizer, and United Airlines, among others. The broad scope is the first thing that differentiates ACCELQ from its competitors. The tool provides dedicated solutions for web, mobile, and API testing. The native integration with issue lifecycle management tools makes it easy to create tests that cover all major frameworks, including React and AngularJS. The solution has some minor performance complaints, a couple of reports of lacking documentation, and a note about high upfront investment. Still, overall it's been given a score of 4.5 (out of 5) stars by its users.
Applitools
Applitools is an AI-powered test automation platform that provides solutions for UI testing. Since its launch in 2015, it has collaborated with Microsoft, Bank of America, and Adidas. Applitools offers visual testing solutions to improve the quality of your software. Its cross-browser, cross-device grid allows you to test web and native mobile applications. It integrates with issue-tracking solutions and even competitor solutions, including Testim.
While it scores in the 4.5 range for reviews, offering outstanding reliability and usability, it has some significant execution issues. For example, just running a simple test will take 80–100 seconds because of all the processes happening in the background. You'll also need to invest in creating your own custom visualizations and integrations. Still, as a result of speeding up how quickly tests can be created and executed, this could save you time overall.
aqua ALM
aqua ALM is a cloud-based solution that has been in business for over 20 years. The AI-supported software test management solution, available in paid and free versions, grew out of the need for a more efficient and secure tracking tool for testing tasks. It offers an intuitive UX, streamlined workflows, and relevant data analytics that make the ALM process more accessible and transparent to all project participants. The key AI functionality is generating entire test cases from requirements. aqua uses a large-scale natural language processing algorithm to turn plain text into a test, and about 40% of the AI-generated test cases don't need human tweaks before being added to the test suite. QA specialists can create test steps themselves and have the AI fill out the test case description, or vice versa. aqua is a powerful test management tool designed for agile teams. It has a rating of 4.5 stars on G2 and Capterra, with Google and the German government among its clients. The software boasts native integration options and can send/receive data to and from any other third-party tool via REST API.
Final Thoughts
As AI technology becomes more widely used in various industries, including software testing, there will be more and better tools to help make the process easier. The field will only grow as more AI algorithm licensing becomes available, leading to even better solutions down the road. If there are any other AI-powered test automation tools that I haven't mentioned, drop them below.
The AWS console allows the user to create and update cloud infrastructure resources in a user-friendly manner. Despite all the advantages that such a high-level tool might have, using it is repetitive and error-prone. For example, each time we create a Lambda function using the AWS console, we need to repeat the same operations again and again, and, even if these operations are intuitive and as easy as manipulating graphical widgets, the whole process is time-consuming and laborious. This working mode is convenient for rapid prototyping, but as soon as we have to work on a real project with a relatively large scope and duration, it doesn't meet the team's goals and wishes anymore. In such a case, the preferred solution is IaC (Infrastructure as Code). IaC essentially consists of using a declarative notation in order to specify infrastructure resources. In the case of AWS, this notation, expressed as a formalism based on JSON or YAML syntax, is captured in configuration files and submitted to the CloudFormation IaC service. CloudFormation is a vast topic that couldn't be detailed in a blog post. The important point to retain here is that this service is able to process input configuration files and guarantee the creation and the update of the associated AWS cloud infrastructure resources. While the benefits of the CloudFormation IaC approach are obvious, this tool has a reputation for being verbose, unwieldy, and inflexible. Fortunately, AWS Lambda developers have the choice of using SAM, a superset of CloudFormation which includes some special commands and shortcuts aimed at easing the development, testing, and deployment of Java serverless code.
Installing SAM
Installing SAM is very simple: one only has to follow the guide. For example, installing the companion AWS CLI on Ubuntu 22.04 LTS is as simple as shown below:
Shell
$ sudo apt-get update
...
$ sudo apt-get install awscli
...
$ aws --version
aws-cli/2.9.12 Python/3.9.11 Linux/5.15.0-57-generic exe/x86_64.ubuntu.22 prompt/off
Creating AWS Lambda Functions in Java With SAM
Now that SAM is installed on your workstation, you can write and deploy your first Java serverless function. Of course, we assume here that your AWS account has been created and that your environment is configured such that you can run AWS CLI commands. Like CloudFormation, SAM is based on the notion of a template, which is a YAML-formatted text file that describes an AWS infrastructure. This template file, named template.yaml by default, has to be authored manually so that it aligns with the SAM template anatomy (complete specifications can be found here). But writing a template.yaml file from scratch is difficult; hence, the idea of automatically generating it. Enter CookieCutter. CookieCutter is an open-source project allowing automatic code generation. It is widely used in the Python world, but here we'll use it in the Java world. Its modus operandi is very similar to that of Maven archetypes, in the sense that it is able to automatically generate full Java projects, including but not limited to packages, classes, configuration files, etc. The generation process is highly customizable and is able to replace string occurrences, flagged by placeholders expressed in a dedicated syntax, with values defined in an external JSON-formatted file. This GitHub repository provides such a CookieCutter-based generator able to generate a simple but complete Java project, ready to be deployed as an AWS Lambda serverless function.
The listing below shows how:
Shell
$ sam init --location https://github.com/nicolasduminil/sam-template
You've downloaded /home/nicolas/.cookiecutters/sam-template before. Is it okay to delete and re-download it? [yes]:
project_name [my-project-name]: aws-lambda-simple
aws_lambda_resource_name [my-aws-lambda-resource-name]: AwsLambdaSimple
java_package_name [fr.simplex_software.aws.lambda.functions]:
java_class_name [MyAwsLambdaClassName]: AwsLambdaSimple
java_handler_method_name [handleRequest]:
maven_group_id [fr.simplex-software.aws.lambda]:
maven_artifact_id [my-aws-function]: aws-lambda-simple
maven_version [1.0.0-SNAPSHOT]:
function_name [AwsLambdaTestFunction]: AwsLambdaSimpleFunction
Select architecture:
1 - arm64
2 - x86_64
Choose from 1, 2 [1]:
timeout [10]:
Select tracing:
1 - Active
2 - Passthrough
Choose from 1, 2 [1]:
The command sam init above mentions the location of the CookieCutter-based template used to generate a new Java project. This generation process takes the form of a dialog where the utility is asking questions and accepting answers. Each question has a default response and, in order to accept it, the user just needs to type Enter. Everything starts by asking about the project name, and we chose aws-lambda-simple. Further information to be entered is:
AWS resource name
Maven GAV (GroupId, ArtifactId, Version)
Java package name
Java class name
Processor architecture
Timeout value
Tracing profile
As soon as the command terminates, you may open the new project in your preferred IDE and inspect the generated code. Once finished, you may proceed with a first build, as follows:
Shell
$ cd aws-lambda-simple/
nicolas@nicolas-XPS-13-9360:~/sam-test/aws-lambda-simple$ mvn package
[INFO] Scanning for projects...
[INFO]
[INFO] ----------< fr.simplex-software.aws.lambda:aws-lambda-simple >----------
[INFO] Building aws-lambda-simple 1.0.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ aws-lambda-simple ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/nicolas/sam-test/aws-lambda-simple/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ aws-lambda-simple ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/nicolas/sam-test/aws-lambda-simple/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ aws-lambda-simple ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/nicolas/sam-test/aws-lambda-simple/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ aws-lambda-simple ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ aws-lambda-simple ---
[INFO] No tests to run.
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ aws-lambda-simple ---
[INFO] Building jar: /home/nicolas/sam-test/aws-lambda-simple/target/aws-lambda-simple-1.0.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-shade-plugin:3.2.1:shade (default) @ aws-lambda-simple ---
[INFO] Replacing /home/nicolas/sam-test/aws-lambda-simple/target/aws-lambda-simple.jar with /home/nicolas/sam-test/aws-lambda-simple/target/aws-lambda-simple-1.0.0-SNAPSHOT-shaded.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.604 s
[INFO] Finished at: 2023-01-12T19:09:23+01:00
[INFO] ------------------------------------------------------------------------
Our new Java project has been built and packaged as a JAR (Java ARchive). The generated template.yaml file defines the required AWS cloud infrastructure, as shown below:
YAML
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: aws-lambda-simple
Resources:
  AwsLambdaSimple:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: AwsLambdaSimpleFunction
      Architectures:
        - arm64
      Runtime: java11
      MemorySize: 128
      Handler: fr.simplex_software.aws.lambda.functions.AwsLambdaSimple::handleRequest
      CodeUri: target/aws-lambda-simple.jar
      Timeout: 10
      Tracing: Active
This file has been created based on the values entered during the generation process. Things like the AWS template version and the transformation version are constants and should be used as such. All the other elements are known as they mirror the input data. Special consideration has to be given to the CodeUri element, which specifies the location of the JAR to be deployed as the Lambda function. It contains the class AwsLambdaSimple below:
Java
public class AwsLambdaSimple
{
    private static Logger log = Logger.getLogger(AwsLambdaSimple.class.getName());

    public String handleRequest(Map<String, String> event)
    {
        log.info("*** AwsLambdaSimple.handleRequest: Have received: " + event);
        return event.entrySet().stream()
            .map(e -> e.getKey() + "->" + e.getValue())
            .collect(Collectors.joining(","));
    }
}
A Lambda function in Java can be run in the following two modes:
A synchronous or RequestResponse mode in which the caller waits for whatever response the Lambda function returns
An asynchronous or Event mode in which the Lambda platform itself responds to the caller without waiting, while the function proceeds with the request processing, without returning any further response
In both cases, the method handleRequest() above processes the request, as its name implies. This request is an event implemented as a Map<String, String>. All right! Now our new Java project is generated and, while the class AwsLambdaSimple presented above (which will ultimately be deployed as an AWS Lambda function) doesn't do much, it is complete enough to demonstrate our use case. So let's deploy our cloud infrastructure. But first, we need to create an AWS S3 bucket in which to store our Lambda function. The simplest way to do that is shown below:
Shell
$ aws s3 mb s3://bucket-$$
make_bucket: bucket-18468
Here we just created an S3 bucket with the name bucket-18468. AWS S3 bucket names are constrained to be unique across regions and accounts. Since it's difficult to guarantee the uniqueness of a name, we use here the shell's $$ parameter, which expands to the current process ID and gives us a reasonably unique numeric suffix.
Shell sam deploy --s3-bucket bucket-18468 --stack-name simple-lambda-stack --capabilities CAPABILITY_IAM Uploading to 44774b9ed09001e1bb31a3c5d11fa9bb 4031 / 4031 (100.00%) Deploying with following values =============================== Stack name : simple-lambda-stack Region : eu-west-3 Confirm changeset : False Disable rollback : False Deployment s3 bucket : bucket-18468 Capabilities : ["CAPABILITY_IAM"] Parameter overrides : {} Signing Profiles : {} Initiating deployment ===================== Uploading to 3af7fb4a847b2fea07d606a80de2616f.template 555 / 555 (100.00%) Waiting for changeset to be created.. CloudFormation stack changeset ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Operation LogicalResourceId ResourceType Replacement ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + Add AwsLambdaSimpleRole AWS::IAM::Role N/A + Add AwsLambdaSimple AWS::Lambda::Function N/A ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Changeset created successfully. arn:aws:cloudformation:eu-west-3:495913029085:changeSet/samcli-deploy1673620369/0495184e-58ca-409c-9554-ee60810fec08 2023-01-13 15:33:00 - Waiting for stack create/update to complete CloudFormation events from stack operations (refresh every 0.5 seconds) ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ResourceStatus ResourceType LogicalResourceId ResourceStatusReason ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CREATE_IN_PROGRESS AWS::IAM::Role AwsLambdaSimpleRole - CREATE_IN_PROGRESS AWS::IAM::Role AwsLambdaSimpleRole Resource creation Initiated CREATE_COMPLETE AWS::IAM::Role AwsLambdaSimpleRole - CREATE_IN_PROGRESS AWS::Lambda::Function AwsLambdaSimple - CREATE_IN_PROGRESS AWS::Lambda::Function AwsLambdaSimple Resource creation Initiated CREATE_COMPLETE AWS::Lambda::Function AwsLambdaSimple - CREATE_COMPLETE AWS::CloudFormation::Stack simple-lambda-stack - ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Successfully created/updated stack - simple-lambda-stack in eu-west-3 Our Java class has been successfully deployed as an AWS Lambda function. Let's test it using the two invocation methods presented above. Shell $ aws lambda invoke --function-name AwsLambdaSimpleFunction --payload $(echo "{\"Hello\":\"Dude\"}" | base64) outputfile.txt { "StatusCode": 200, "ExecutedVersion": "$LATEST" } $ cat outputfile.txt "Hello->Dude" The listing above demonstrates the synchronous or RequestResponse invocation. 
We pass a JSON-formatted payload as the input event and, since the AWS CLI expects the payload to be Base64-encoded by default, we need to encode it first. Since the invocation is synchronous, the caller waits for the response, which is captured in the file outputfile.txt. The returned status code is HTTP 200, as expected, meaning that the request has been correctly processed. Let's see the asynchronous or Event invocation.

Shell

$ aws lambda invoke --function-name AwsLambdaSimpleFunction --payload $(echo "{\"Hello\":\"Dude\"}" | base64) --invocation-type Event outputfile.txt
{
    "StatusCode": 202
}

This time the --invocation-type is Event and, consequently, the returned status code is HTTP 202, meaning that the request has been accepted but not yet processed. The file outputfile.txt is empty, as there is no result. This concludes our use case showing how to deploy AWS Lambda functions in Java via the SAM tool. Don't forget to clean up your environment before leaving by running:

Shell

$ aws s3 rm --recursive s3://bucket-18468
$ aws s3 rb --force s3://bucket-18468
$ aws cloudformation delete-stack --stack-name simple-lambda-stack

Enjoy!
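As a side note, the same two invocation modes can also be driven from Java code rather than from the AWS CLI. The following is a minimal sketch, assuming the AWS SDK for Java v2 Lambda module is on the classpath and that credentials and the eu-west-3 region are configured; the class name is invented for illustration:

Java

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.InvocationType;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;

public class InvokeAwsLambdaSimple
{
  public static void main(String[] args)
  {
    try (LambdaClient lambda = LambdaClient.builder().region(Region.EU_WEST_3).build())
    {
      // Synchronous (RequestResponse) invocation: the call blocks until the function returns its result.
      InvokeRequest syncRequest = InvokeRequest.builder()
        .functionName("AwsLambdaSimpleFunction")
        .invocationType(InvocationType.REQUEST_RESPONSE)
        .payload(SdkBytes.fromUtf8String("{\"Hello\":\"Dude\"}"))
        .build();
      InvokeResponse syncResponse = lambda.invoke(syncRequest);
      System.out.println(syncResponse.statusCode() + " " + syncResponse.payload().asUtf8String());

      // Asynchronous (Event) invocation: returns HTTP 202 immediately, without a function result.
      InvokeRequest asyncRequest = syncRequest.toBuilder().invocationType(InvocationType.EVENT).build();
      System.out.println(lambda.invoke(asyncRequest).statusCode());
    }
  }
}

Note that, unlike the CLI call above, the SDK accepts the raw JSON payload directly, so no explicit Base64 step is needed.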
This week, we’ll discuss one of the harder problems in programming: threading. In many cases, threading issues aren’t that difficult to debug, at least not at the higher abstractions. Asynchronous programming is supposed to simplify the threading model, but oftentimes it makes a bad situation worse by detaching us from the core context. We discuss why that is and how debuggers can solve that problem. We also explain how you can create custom asynchronous APIs that are almost as easy to debug as synchronous applications!

Transcript

Welcome back to the seventh part of debugging at scale, where we don’t treat debugging like taking out the garbage.

Concurrency and parallelism are some of the hardest problems in computer science, but debugging them doesn’t have to be so hard. In this section, we’ll review some of the IDE capabilities related to threading, as well as various tricks and asynchronous code features.

Thread Views

Let’s start by discussing some of the elements we can enable in terms of the thread view. We can look at all the current threads in the combo box above the stack frame. We can toggle the currently selected thread and see the stack for that thread and the thread status. Notice that, here, we chose to suspend all threads on this breakpoint. If the threads were running, we wouldn’t be able to see their stack as it’s constantly changing.

We can enable the threads view in the right-hand side pull-down menu to see more… As you can see, viewing the stack is more convenient in this state when we’re working with many threads. Furthermore, we can customize this view even more by going into the customize thread view and enabling additional options. The thread groups option is probably the most obvious change, as it arranges all the threads based on their groups and provides a pretty deep view of the hierarchy. Since most frameworks arrange their threads in convenient groups based on categories, this is often very useful when debugging many threads. Other than that, we can show additional information such as the file name, line number, class name, and argument types. I personally like showing everything, but this does create a somewhat noisy view that might not be as helpful.

Now that we’ve switched on the grouping, we can see the hierarchy of the threads. This mode is a bit of a double-edged sword: you might miss out on an important thread in this case, but, if you have a lot of threads in a specific group, it might be the only way you can possibly work. I think we’ll see more features like this as Project Loom becomes the standard and the thread count increases exponentially. I’m sure this section will see a lot of innovation moving forward.

Debugging a Race Condition

Next, we’ll discuss debugging race conditions. The first step of debugging a race condition is a method breakpoint. I know what I said about them, but in this case we need one. Notice that the return statement in this method includes a lot of code. If I place a breakpoint on the last line, it will trigger before that code executes and my coverage won’t include that part. So, let’s open the breakpoint dialog and expand it to the fully customizable dialog. Now we need to define the method breakpoint. I type the message and then get the thread name. I only use the method breakpoint for the exit portion because, if I used it for both, I’d have no way to distinguish between exit and enter events. I make this a tracepoint by unchecking the suspend option.
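To make the scenario more concrete, the method being instrumented could be something like the following hypothetical unsynchronized counter (an illustration, not the exact code from the video): two threads can interleave between the read and the write, so increments can be lost.

Java

public class Counter
{
  private int counter;

  public int increment()
  {
    int current = counter;   // the entry tracepoint on the first line would log the thread name here
    counter = current + 1;   // another thread may have updated counter in the meantime
    return counter;          // the exit (method) tracepoint would log the thread name here
  }
}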
So now, we have a tracepoint that prints the name of the thread that just exited the method. I now do the exact same thing for a line breakpoint on the first line in the method. A line breakpoint is fine since entry to the method makes sense here. I change the label and also make it into a tracepoint instead of a breakpoint. Now we look at the console. I copy the name of the thread from the first printout in the console and add a condition to reduce the noise. If there’s a race condition, there must be at least one other thread, right? So, let’s remove one thread to be sure. Going down the list, it’s obvious that multiple threads enter the code. That means there’s a risk of a race condition. Now, it means I need to read the logs and see if an enter for one thread happened before the exit of another thread. This is a bit of work, but it is doable.

Debugging a Deadlock

Next, let’s discuss deadlocks. Here we have two threads, each waiting on a monitor held by the other thread. This is a trivial deadlock, but debugging is simple even for more complex cases. Notice the bottom two threads have a MONITOR status. This means they’re waiting on a lock and can’t continue until it’s released. Typically, you’d see this in Java when a thread is waiting on a synchronized block. You can expand these threads and see what’s going on and which monitor is held by each thread. If you’re able to reproduce a deadlock or a race in the debugger, they are both simple to fix.

Asynchronous Stack Traces

Stack traces are amazing in synchronous code, but what do we do when we have asynchronous callbacks? Here we have a standard async example from JetBrains that uses a list of tasks and just sends them to the executor to perform on a separate thread. Each task sleeps and prints a random number. Nothing to write home about. As far as demos go, this is pretty trivial.

Here’s where things get interesting. As you can see, there’s a line that separates the async stack from the current stack on the top. The IDE detected the invocation of a separate thread and kept the stack trace on the side. Then, when it needed the information, it took the stack trace from before and glued it to the bottom. The lower part of the stack trace is from the main thread and the top portion is on the executor thread. Notice that this works seamlessly with Swing, executors, Spring’s Async annotation, etc. Very cool!

Asynchronous Annotations

That’s pretty cool, but there’s still a big problem. How does that work, and what if I have custom code? It works by saving the stack trace in places where we know an asynchronous operation is happening and then placing it later when needed. How does it connect the right traces? It uses variable values. In this demo, I created a simple listener interface. You’ll notice it has no asynchronous elements in the stack trace. By adding the async schedule and async execute annotations, I can determine the point where async code might launch, which is the schedule marker. I can place it on a variable to indicate the variable I want to use to look up the right stack trace. I do the same thing with execute and get custom async stack traces. I can put the annotations on a method and the current object will be used instead.

Final Word

In the next video, we’ll discuss memory debugging. This goes beyond what the profiler provides; the debugger can be a complementary surgical tool you can use to pinpoint a specific problem and find out the root cause. If you have any questions, please use the comments section. Thank you!
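As an addendum to the asynchronous annotations discussed above: they come from the org.jetbrains.annotations library (the Async.Schedule and Async.Execute annotations). The sketch below is a hypothetical custom API, with invented class and method names, showing where the two markers would typically go:

Java

import org.jetbrains.annotations.Async;

import java.util.ArrayDeque;
import java.util.Queue;

// Tasks are scheduled on one thread and executed later on another. @Async.Schedule tells the
// debugger where to capture the stack trace and @Async.Execute where to splice it back in;
// the annotated value (here, the task object) is used as the lookup key.
public class TaskQueue
{
  private final Queue<Runnable> tasks = new ArrayDeque<>();

  public synchronized void schedule(@Async.Schedule Runnable task)
  {
    tasks.add(task);   // the stack trace of the scheduling thread is stored at this point
  }

  public synchronized Runnable poll()
  {
    return tasks.poll();
  }

  public void execute(@Async.Execute Runnable task)
  {
    task.run();        // the stored stack trace is glued below the worker thread's stack
  }
}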
The go-to software framework for any web developer looking for a free, open-source test automation tool is Selenium. It is used with various programming languages, including Java, Python, PHP, Perl, and C#. Selenium can also be used as a web-scraping tool or to create a human-replica bot to automate social media or test PDF files. In this Python Selenium screenshot tutorial, we are going to explore different ways of taking screenshots using Selenium’s Python bindings. Before we hop into capturing Python Selenium screenshots, let’s acquaint ourselves with Selenium Python bindings.

What Are Selenium Python Bindings?

Selenium has different components: Selenium WebDriver, Selenium IDE, and Selenium Grid. The Selenium Python bindings are an API for using Python with Selenium WebDriver to write functional/acceptance tests. We shall be using these Python bindings for Selenium to capture full-page screenshots and HTML element-specific screenshots and save them in our desired location.

Installing Dependencies

Before we learn how to use Selenium Python for taking screenshots, we need to install some dependencies. Below is a list of all that you need on your machine:

- Python
- pip
- Selenium Python bindings
- GeckoDriver
- ChromeDriver

To learn how to use Selenium Python, you must have Python and pip installed on your system or server. Python comes pre-installed with Linux and Mac systems. For Windows, you may download the Python installer from Python’s official website.

Note: Python 2 has reached end of life. So, if your Linux or Mac system has the older version, you may consider updating it to the latest stable version.

You would also need pip installed on your system. pip is a package manager tool for Python and comes pre-installed with the latest versions (as you can see in the image above). You can see if it exists on your system by running the following command in the command prompt:

pip help

If you get a response like the one below from pip, you are good to go. If it instead displays an error, you have to download the get-pip.py file to any location in your system, but remember the path to the file. You only have to do this if pip is not installed on your system. Next, run this command to install pip:

python get-pip.py

If you aren’t in the same directory as the downloaded file, replace the file name in the command given above with the full path to the file location. Now, try the pip command again; you should see the screen we shared earlier.

Next, we will install the Selenium Python bindings using pip by running the following command:

pip install selenium

This installs Selenium’s Python bindings on your system. Alternatively, if you don’t like this installation mechanism, you can download the Selenium-Python source distribution from PyPI and unarchive it. Once you do this, run the following command to install the bindings:

python setup.py install

Again, remember you only need this if you don’t want to install using pip. Also, if you are not in the same folder where you have unarchived the downloaded Selenium Python bindings, replace setup.py with the full path to setup.py.

Next, we need a driver to proceed with taking Python Selenium screenshots of webpages. You can choose any browser of your choice and download the drivers from the following links:

- Chrome: “ChromeDriver—WebDriver for Chrome”
- Firefox
- Edge: “Microsoft Edge WebDriver”
- Internet Explorer: “Selenium Release Storage Google APIs”

Now, let’s make a trial file named check_setup.py.
Write the following code in it:

from selenium import webdriver

browser = webdriver.Firefox()
browser.get("https://www.lambdatest.com")

This should fire up a Firefox instance and automatically load the LambdaTest homepage. If this works for you, we’re all set to capture Python Selenium screenshots of websites.

Capturing Screenshots Using Python and Selenium

I hope that now you have all the dependencies installed and know how to use Selenium Python. It is time to get to the good part. In this section, we will see how to take a Python Selenium screenshot of any web page. We will see instances for both GeckoDriver and ChromeDriver. First, let’s see how to use Selenium Python with GeckoDriver, or Selenium FirefoxDriver.

Using get_screenshot_as_file() With GeckoDriver for Python Selenium Screenshots

from selenium import webdriver
from time import sleep

browser = webdriver.Firefox()
browser.get("https://www.lambdatest.com/")
sleep(1)
browser.get_screenshot_as_file("LambdaTestVisibleScreen.png")
browser.quit()

If you would like to store these images in a specific location other than the project directory, please specify the full path as the argument to get_screenshot_as_file.

Code Walkthrough

Let’s understand what we are doing here:

from selenium import webdriver: This line imports the WebDriver, which we use to fire up a browser instance and use APIs to interact with web elements.

from time import sleep: This line imports the sleep function from Python’s ‘time’ module. It accepts an integer argument equal to the number of seconds the script waits before executing the next line of code.

browser = webdriver.Firefox(): This line creates a Firefox WebDriver instance and assigns it to the variable ‘browser’, so that ‘browser’ can be used wherever you would use webdriver.Firefox().

browser.get("https://www.lambdatest.com"): This navigates the Firefox instance controlled by the Selenium driver to the URL specified as an argument to the get() function.

sleep(1): This halts the script from running for one second. This step is often required when there are animations on the page or when you explicitly want to wait for a while so that some actions can be performed or pages can load fully.

Note: Selenium WebDriver, by default, waits for the page to load completely before executing the next line of script or operation. But on some advanced JavaScript-rendered websites, we may need to use ‘sleep’ to manually pause the script for a while so that the animations and the page itself are fully loaded.

browser.get_screenshot_as_file("LambdaTestVisibleScreen.png"): This finally captures the visible section of the webpage in the launched Firefox instance and saves the screenshot with the specified name and extension.

browser.quit(): The browser needs to be closed, and this line does exactly that.

Using save_screenshot() With GeckoDriver for Python Selenium Screenshots

This is the easiest way to save the screenshot. Just replace the get_screenshot_as_file command with save_screenshot, as displayed below:

browser.get_screenshot_as_file("LambdaTestVisibleScreen.png")

becomes:

browser.save_screenshot("your_desired_filename.png")

Next, we will see how to use Selenium Python to capture screenshots with the help of Selenium ChromeDriver.

Using screenshot() With ChromeDriver for Python Selenium Screenshots

The save_screenshot function works with ChromeDriver, but to propose an alternative solution, we will also show you how to use the screenshot function to take a full-page screenshot.
Here is the script:

from selenium import webdriver
from time import sleep
from selenium.webdriver import ChromeOptions

options = ChromeOptions()
options.headless = True
browser = webdriver.Chrome(chrome_options=options)
URI = "https://www.lambdatest.com"
browser.get(URI)
sleep(1)
S = lambda X: browser.execute_script('return document.body.parentNode.scroll'+X)
browser.set_window_size(S('width'), S('height'))
browser.find_element_by_tag_name('body').screenshot('LambdaTestFullPage.png')
browser.quit()

Code Walkthrough

Let’s understand what we are doing here. First of all, in this example, we are using ChromeDriver. Earlier, we used GeckoDriver to drive Firefox as the browser. More or less, the other functionalities are the same.

from selenium.webdriver import ChromeOptions

We import ChromeOptions to set the browser as headless so that it runs in the background. We could have directly used webdriver.ChromeOptions, but to make it more understandable, we split it into a separate line of code:

options = ChromeOptions()
options.headless = True
browser = webdriver.Chrome(chrome_options=options)
URI = "https://www.lambdatest.com"
browser.get(URI)

Here, we use the newly set ChromeOptions and pass it as a parameter to the webdriver’s Chrome function. Observe that previously we used Firefox(). browser.get() then fetches the URL in the launched instance.

S = lambda X: browser.execute_script('return document.body.parentNode.scroll'+X)
browser.set_window_size(S('width'), S('height'))

The first line is a lambda function that returns the page’s scroll dimension for the given “X” (width or height), obtained by executing a JavaScript call against the DOM. The second line resizes the window to those dimensions:

browser.find_element_by_tag_name('body').screenshot('LambdaTestFullPage.png')
browser.quit()

Finally, we track down the body element of the webpage by using the driver function find_element_by_tag_name and pass “body” as a parameter. You could also use find_element_by_id or find_element_by_xpath to locate the element. We chained the screenshot() function on the same line using the ‘.’ operator to capture the full-page screenshot. Lastly, we terminate the Chrome instance using browser.quit().

Capturing Python Selenium Screenshots of a Particular Element

We now demonstrate how we can use the save_screenshot() function to capture any element on the page, say a button, image, or form. We shall use Python’s PIL library, which lets us perform image operations. We shall capture a feature “section” element on the LambdaTest website with the following XPath: //section[contains(string(),'START SCREENSHOT TESTING')]. The final script would be:

from selenium import webdriver
from time import sleep
from PIL import Image

browser = webdriver.Chrome()
browser.get("https://www.lambdatest.com/feature")
sleep(1)
featureElement = browser.find_element_by_xpath("//section[contains(string(),'START SCREENSHOT TESTING')]")
location = featureElement.location
size = featureElement.size
browser.save_screenshot("fullPageScreenshot.png")
x = location['x']
y = location['y']
w = x + size['width']
h = y + size['height']
fullImg = Image.open("fullPageScreenshot.png")
cropImg = fullImg.crop((x, y, w, h))
cropImg.save('cropImage.png')
browser.quit()

This script, when executed, would save the cropped feature element from the LambdaTest website as cropImage.png. Note that Image.crop() takes its coordinates as a single (left, upper, right, lower) tuple, so the four values are passed as one tuple.

Code Walkthrough

from PIL import Image

This line imports the Image module from the PIL library of Python.
featureElement = browser.find_element_by_xpath("//section[contains(string(),'START SCREENSHOT TESTING')]")

This line locates one of the feature sections on the LambdaTest website using XPath (as you can see below).

location = featureElement.location
size = featureElement.size

The first line fetches the vertical and horizontal start location of the feature element. The second line gets the width and height of the element. We then store these values in the ‘x’, ‘y’, ‘w’, and ‘h’ variables, respectively.

fullImg = Image.open("fullPageScreenshot.png")
cropImg = fullImg.crop((x, y, w, h))
cropImg.save('cropImage.png')

We first open the image and store its bytes in the “fullImg” variable. Next, we crop it using the x, y, w, and h values we calculated, passed to crop() as a single tuple. Lastly, we save the cropped image. This should be the output that you will see after the successful execution of the code:

What Role Do Screenshots Play in Test Automation?

Automated screenshots can help in the easy identification of bugs and are faster than doing it all manually. Most importantly, they can be as scalable as the application you are testing without requiring extra testers. A direct implication of the above is that automated screenshots are a cost-effective and time-efficient process.

Other Options To Take Python Selenium Screenshots

If you would rather use other ways to capture Python Selenium screenshots, you can also use the “Selenium-Screenshot 2.0.0” library to take screenshots. To install it, execute the following command:

pip install Selenium-Screenshot

Example to Capture Full-Page Screenshot

from Screenshot import Screenshot_Clipping
from selenium import webdriver

ob = Screenshot_Clipping.Screenshot()
driver = webdriver.Chrome()
url = "https://www.google.com"
driver.get(url)
img_url = ob.full_Screenshot(driver, save_path=r'.', image_name='google.png')
driver.quit()

Why Selenium and Python Are Well-Suited for Capturing Screenshots

Selenium and Python are the go-to choices when it comes to Selenium test automation. And this is not just limited to capturing screenshots; there’s a lot more you can do using this awesome combination. Let’s find out why:

- The learning curve is very small for the Selenium Python bindings, as the language itself is very easy and intuitive to start with.
- We can use it with multiple browsers, including the popular ones.
- The number of lines of code you need to write in Python is far less than in other languages.
- Strong community support.
- Fast and efficient execution.

Conclusion

In this tutorial, we learned about using Selenium and Python to capture screenshots of web pages. This is essentially the best way to catch bugs efficiently and save your team a lot of time. The best way to perform cross-browser testing is to compare how the web pages are rendered across multiple browsers or devices. You can use a cloud-based platform like LambdaTest to capture screenshots of your website or a particular web page without going through the trouble of writing the code. You can do this on a selection of 3000+ browsers and operating systems over a Selenium Grid cloud. We hope these tools will come in handy when you are not in the mood to write all that code but just want to hunt some bugs. If you have any issues or questions, don’t hesitate to reach out via the comment section. Happy testing!
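The tutorial above uses Selenium's Python bindings, but the same capture can be done from Selenium's Java bindings as well. Here is a minimal sketch, assuming the selenium-java dependency and a matching ChromeDriver are installed; the class name and file name are placeholders:

Java

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class VisibleAreaScreenshot
{
  public static void main(String[] args) throws Exception
  {
    WebDriver browser = new ChromeDriver();
    try
    {
      browser.get("https://www.lambdatest.com");
      // Captures the currently visible viewport, like get_screenshot_as_file() in Python.
      File screenshot = ((TakesScreenshot) browser).getScreenshotAs(OutputType.FILE);
      Files.copy(screenshot.toPath(), Path.of("LambdaTestVisibleScreen.png"),
                 StandardCopyOption.REPLACE_EXISTING);
    }
    finally
    {
      browser.quit();
    }
  }
}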
As Web3 engineering grows in complexity, there is an increasing need for DevOps practices and philosophies that have proven proficiency with Web2 apps at scale, sometimes to billions of users. In this article, we’ll explore how DevOps—an engineering philosophy that facilitates fast and efficient collaboration, maintenance, and release of software—works in Web3. We’ll walk through tools that you can use as-is from traditional software engineering, and we’ll look at offerings that are specifically intended for end-to-end blockchain application development.

What Is DevOps?

DevOps is a software-development philosophy that emphasizes collaboration, communication, and integration between various engineering stakeholders. Used well, DevOps can drastically increase the speed and quality of software development as well as the reliability and resilience of the systems being developed. DevOps creates a unified pipeline that facilitates efficient coding and testing, continuous delivery, robust releases, monitoring, and planning.

Given the many complex operations DevOps brings together, it tends to require separate tools for each component. As a result, there is an extremely mature ecosystem of DevOps tools available for Web3 engineers. There are, however, nuances associated with Web3 that don't translate from Web2. One such case is the use of a public, decentralized ledger as a backend as opposed to private, centralized databases. Fortunately, Web3-specific DevOps tooling is growing rapidly, thanks to companies like ConsenSys, which provides the essential tools for every stage of development to streamline Web3 engineering.

DevOps Tooling for Web3

As stated earlier, Web3 DevOps is a mixture of traditional Web2 engineering tools and offerings that cater to Web3’s unique needs. In this section, we’ll survey the tools available for each component of the DevOps pipeline. Let’s walk through some of the DevOps stages and see how they look in Web3:

- Build
- Test
- CI/CD
- Monitor
- Planning and Feedback

Build

Perhaps the most obvious and visible component, the build or development team is responsible for creating new features, improving existing features, fixing bugs, and documenting code. Blockchain engineering usually entails the creation of special programs called smart contracts that run on top of a blockchain. As a result, a unique set of tools is required for building such apps. There are many development environments available, such as Remix IDE, that facilitate this. However, the most popular and robust option available is the Truffle Suite. The Truffle Suite allows developers to work seamlessly with Web3 components such as wallets, chains, and node providers. Some of the things you can do with Truffle include:

- Code smart contracts using Solidity
- Write scripts that deploy smart contracts to a chain of your choice
- Allow for seamless integration of node providers like Infura
- Integrate other useful third-party libraries like Ethlint

Other options include Brownie and Hardhat for EVM smart contracts, Anchor for Solana, and the Playground GUI for Flow.

Testing

Testing is a critical component of Web3 DevOps, as it helps ensure the quality and reliability of the software. This is even more critical with Web3, where you often deal with contracts that are immutable and extraordinarily expensive to deploy and maintain. There are several types of testing that may be used in a DevOps environment, including unit, integration, functional, and performance testing.
In the case of Web3 (and more specifically blockchain), unit testing and integration testing are perhaps the most important. The former involves testing individual units or components of the software to ensure that they are working correctly. The latter is more focused on how different components of the software work together. Developers/Ops engineers are often advised to set up a local instance of a blockchain for testing before deploying it to a public network. One of the most popular tools that allows us to do this is Ganache. Since Ganache is maintained by the same development team, it seamlessly integrates into the Truffle suite. Tests are usually written the same way as in web2, using libraries like Mocha and Chai. Truffle also has built-in support for tests. A traditional Truffle test and a test using Mocha By their very nature, smart contracts tend to be immutable and unforgiving of critical security flaws. As a result, the degree of Web3 security testing is much higher than what is expected out of a traditional Web2 project. If the contract is expected to handle a large amount of money and/or assets, it is strongly suggested that external security audits be conducted in addition to the testing mentioned above. Continuous Integration and Continuous Delivery Continuous Integration and Continuous Delivery, often abbreviated as CI/CD, involves automating the build, test, and deployment process for software updates and new features. Once the development team publishes new code to the repository, the CI component integrates the code into the codebase to ensure there are no breaking changes. Once the changes are deemed satisfactory, a new build is triggered (by the CD tool) and deployed to the various environments (typically staging and production). Although certain tests and testing environments may be different when working with Web3 dapps, Truffle works exceptionally well with traditional CI/CD tools such as Gradle, Jenkins, and CircleCI. Some of the things these tools offer out of the box include: Auto-deployment on successfully passing tests and creating a new build Live build requests Integration with a number of tools including GitHub and Assembla Audit logs As mentioned above, the installation and setup of your chosen tool is almost identical to any other software engineering project, with enterprise support available when needed. Monitoring Monitoring, like its sister components, also ensures the performance and reliability of systems. It does this by tracking the health and performance of systems, and by identifying and resolving issues in a timely manner. Key aspects of monitoring in DevOps include collecting and storing log data from systems, configuring systems to send alerts when certain conditions or thresholds are met, and dashboards that display real-time performance and health of systems. When dealing with dApps, we are interested in monitoring both the overall health of the blockchain as well as activity and incidents that are specific to our smart contracts. Leading node providers like Infura present both these macro and micro assessments via readily accessible APIs and dashboards. In the case of general blockchain, Infura provides APIs you can use to access any information relevant to that particular blockchain. 
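To illustrate, a minimal sketch using the web3j library against an Infura endpoint could query such chain-level data as follows; the project ID and the address are placeholders, and the exact metrics you read will depend on your dApp:

Java

import java.math.BigInteger;

import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.http.HttpService;

public class ChainMonitor
{
  public static void main(String[] args) throws Exception
  {
    // Connect to an Ethereum node exposed by Infura (placeholder project ID).
    Web3j web3 = Web3j.build(new HttpService("https://mainnet.infura.io/v3/<YOUR_PROJECT_ID>"));

    BigInteger gasPrice = web3.ethGasPrice().send().getGasPrice();            // current gas price
    BigInteger blockNumber = web3.ethBlockNumber().send().getBlockNumber();   // latest block number
    BigInteger balance = web3.ethGetBalance("0x0000000000000000000000000000000000000000",
        DefaultBlockParameterName.LATEST).send().getBalance();                // balance of an address

    System.out.printf("gas price: %s, block: %s, balance: %s%n", gasPrice, blockNumber, balance);
    web3.shutdown();
  }
}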
Some of the Web3 information you might want to monitor includes:

- Current gas price
- Block number
- Balance of a particular address
- Compiled smart contract code at a particular address
- Gas fee history

If you’re using Infura’s nodes, they will also give you detailed information on the health of your smart contract/dApp, how much traffic Infura is witnessing, how many users are interacting with it, etc.

Planning and Feedback

Planning and feedback ensure that the entire development process is aligned with business goals and priorities, and that timely progress is being made. This includes Agile planning using methodologies like Scrum and Kanban, mechanisms that facilitate continuous feedback, and metrics that track progress and identify areas for improvement. Most of the tools used for planning and feedback in traditional software engineering, such as Jira, Trello, Slack, and Datadog, can be used in Web3 development as well. For blockchain-specific metrics, you can continue to use the monitoring tools highlighted in the previous section (for example, Infura).

Conclusion

As Web3 and blockchain engineering grow in complexity, they require large teams to collaboratively deliver results quickly and efficiently. Therefore, the DevOps philosophy has become as integral to Web3 engineering teams as it has been for traditional software. Web3 engineers will find that existing DevOps tools work very well for their use cases, and the ecosystem of tools coming up to serve the nuances of Web3 is also growing all the time.
Twitter has been my “source of truth” for a long time. I learned a lot from the many technical people who share their work and knowledge on it. It got me in contact with Foojay.io and allowed me to share my Java writing, which eventually led to my job at Azul! I also wasted too many hours scrolling, but I still believe most of them were worth it. What is happening at Twitter HQ since Musk took it over is a turning point. The way the people who made it great are being treated is not how a company should be led. I have already had a Mastodon account for many years. Mastodon is not a full Twitter alternative, but a micro-blogging system that bears a lot of resemblance to Twitter. But as Twitter was still a good place, I didn’t use that account very actively.

What Is Mastodon

To be clear: it's not a Twitter replacement! It doesn't have all the same features or work entirely the same. But...that's actually a good thing! How Mastodon explains what they are: "Do you have an email? If you do, you already understand how Mastodon works."

Main facts about Mastodon:

- It's an open-source GitHub project created by Eugen “Gargron” Rochko, a German programmer. No company controls or owns it.
- It's federated, meaning there is no single central service. Anyone can host it for their own use or open it to others to join.
- There are no advertisements.
- Your timeline is not controlled by an algorithm; it's just the people you follow.
- When creating a Mastodon account, you must select a server to join. Some of these are suffering from growing pains and have become slower in the last few days as they need to scale up to be able to handle the new members.
- You can move from one server to another (or your own) anytime.
- The owner/administrator of a server can read all your messages. Please consider all your posts public and readable.
- You can edit your posts! Yes, really!

Want to learn more? Jeroen Baert has written this lovely overview.

A Mastodon Server for the Java Community

When I saw many people start to move to Mastodon and the network became overloaded, it was clear we needed an instance for the Java community. I’m very thankful that Foojay provided that possibility. This happened in three steps.

Is There a Need for a Java-Oriented Mastodon Server? (11/14/22)

If we start a Java-specific Mastodon service, who can join it? As there is an enormous interest in Mastodon at this moment, and all these new users are looking for free and fast services, we probably need to limit the number of people joining in order to guarantee stable performance and a reasonable cost. Keeping the content safe and friendly might also require some moderators. As you understand, no decision has been made yet. How do you think we should proceed?

Gathering the Reactions (11/15/22)

- Yes: many likes and +1s, so most of the reactions think this would be a good idea.
- No: not everyone is convinced Twitter will disappear and that an alternative is needed.
- Nobody volunteered (yet) to assist with (pay for?) the server and to moderate it.

Let’s Do This! (11/16/22)

After thinking about it and collecting the feedback, there was only one direction forward: get a Foojay Mastodon service up and running! And thanks to toot.io, we don't need to worry about maintenance, and the bill has been covered.

Update After One Month

On 12/16/22, we published an update one month after foojay.social was announced.
At that time, we had 118 active users who had interacted more than 4,000 times. We started with a small instance to host foojay.social via toot.io, which provides us with 200GB of storage space and should be able to handle 250 active users. This means that, within our initial setup, we can welcome even more Java/JVM/OpenJDK members! And because we work via a specialized hosting provider, we can quickly scale up. So... you are still welcome to join!

Introduction Video

In this video, I give a short introduction to Mastodon and how you can start using it.