DZone Spotlight
Build a Serverless App Fast With Zipper: Write TypeScript, Offload Everything Else

By John Vester
I remember the first time I saw a demonstration of Ruby on Rails. With very little effort, the demonstrators created a full-stack web application that could be used for real business purposes. I was impressed – especially when I thought about how much time it took me to deliver similar solutions using the Seam and Struts frameworks. Ruby was created in 1993 to be an easy-to-use scripting language that also included object-oriented features. Ruby on Rails took things to the next level in the mid-2000s – arriving at the right time to become the tech of choice for the initial startup efforts of Twitter, Shopify, GitHub, and Airbnb.

I began to ask the question, "Is it possible to have a product, like Ruby on Rails, without needing to worry about the infrastructure or underlying data tier?" That's when I discovered the Zipper platform.

About Zipper

Zipper is a platform for building web services using simple TypeScript functions. You use Zipper to create applets (not related to Java, though they share the same name), which are then built and deployed on Zipper's platform. The coolest thing about Zipper is that it lets you focus on coding your solution using TypeScript, and you don't need to worry about anything else. Zipper takes care of:

  • User interface
  • Infrastructure to host your solution
  • Persistence layer
  • APIs to interact with your applet
  • Authentication

Although the platform is currently in beta, it's open for consumers to use. At the time I wrote this article, there were four templates in place to help new adopters get started:

  • Hello World – a basic applet to get you started
  • CRUD Template – offers a ToDo list where items can be created, viewed, updated, and deleted
  • Slack App Template – provides an example of how to interact with the Slack service
  • AI-Generated Code – lets you express your solution in human language and have AI create an applet for you

There is also a gallery on the Zipper platform that provides applets that can be forked in the same manner as Git-based repositories. I thought I would put the Zipper platform to the test and create a ballot applet.

HOA Ballot Use Case

The homeowners association (HOA) concept started to gain momentum in the United States back in the 20th century. Subdivisions formed HOAs to handle things like the care of common areas and the establishment of rules and guidelines for residents. Their goal is to maintain the subdivision's quality of living as a whole, long after the home builder has finished development. HOAs often hold elections to allow homeowners to vote on the candidate they feel best matches their views and perspectives. In fact, last year I published an article on how an HOA ballot could be created using Web3 technologies. For this article, I wanted to take the same approach using Zipper.

Ballot Requirements

The requirements for the ballot applet are:

  • As a ballot owner, I need the ability to create a list of candidates for the ballot.
  • As a ballot owner, I need the ability to create a list of registered voters.
  • As a voter, I need the ability to view the list of candidates.
  • As a voter, I need the ability to cast one vote for a single candidate.
  • As a voter, I need the ability to see a current tally of votes that have been cast for each candidate.

Additionally, I thought some stretch goals would be nice too:

  • As a ballot owner, I need the ability to clear all candidates.
  • As a ballot owner, I need the ability to clear all voters.
  • As a ballot owner, I need the ability to set a title for the ballot.
  • As a ballot owner, I need the ability to set a subtitle for the ballot.

Designing the Ballot Applet

To start working on the Zipper platform, I navigated to Zipper's website, clicked the Sign In button, and selected an authentication source. Once logged in, I used the Create Applet button from the dashboard to create a new applet. A unique name is generated, but it can be changed to better identify your use case. For now, I left all the defaults the same and pushed the Next button – which allowed me to select from four different templates for applet creation. I started with the CRUD template because it provides a solid example of how the common create, view, update, and delete flows work on the Zipper platform. Once the code was created, I had a fully functional applet in place and could update it to meet the HOA ballot requirements.

Establish Core Elements

For the ballot applet, the first thing I wanted to do was update the types.ts file as shown below:

```typescript
export type Candidate = {
  id: string;
  name: string;
  votes: number;
};

export type Voter = {
  email: string;
  name: string;
  voted: boolean;
};
```

I wanted to establish constant values for the ballot title and subtitle within a new file called constants.ts:

```typescript
export class Constants {
  static readonly BALLOT_TITLE = "Sample Ballot";
  static readonly BALLOT_SUBTITLE = "Sample Ballot Subtitle";
};
```

To allow only the ballot owner to make changes to the ballot, I used the Secrets tab for the applet to create an owner secret with the value of my email address. Then I introduced a common.ts file containing a validateRequest() function:

```typescript
export function validateRequest(context: Zipper.HandlerContext) {
  if (context.userInfo?.email !== Deno.env.get('owner')) {
    return (
      <>
        <Markdown>
          {`### Error: You are not authorized to perform this action`}
        </Markdown>
      </>
    );
  }
};
```

This way, I could pass the context into this function to make sure only the value in the owner secret would be allowed to make changes to the ballot and voters.

Establishing Candidates

After understanding how the ToDo item was created in the original CRUD applet, I was able to introduce the create-candidate.ts file as shown below:

```typescript
import { Candidate } from "./types.ts";
import { validateRequest } from "./common.ts";

type Input = {
  name: string;
};

export async function handler({ name }: Input, context: Zipper.HandlerContext) {
  validateRequest(context);

  const candidates = (await Zipper.storage.get<Candidate[]>("candidates")) || [];

  const newCandidate: Candidate = {
    id: crypto.randomUUID(),
    name: name,
    votes: 0,
  };

  candidates.push(newCandidate);
  await Zipper.storage.set("candidates", candidates);

  return newCandidate;
}
```

For this use case, we just need to provide a candidate name, but the Candidate object also contains a unique ID and the number of votes received. While here, I went ahead and wrote the delete-all-candidates.ts file, which removes all candidates from the key/value data store:

```typescript
import { validateRequest } from "./common.ts";

type Input = {
  force: boolean;
};

export async function handler(
  { force }: Input,
  context: Zipper.HandlerContext
) {
  validateRequest(context);

  if (force) {
    await Zipper.storage.set("candidates", []);
  }
}
```

At this point, I used the Preview functionality to create Candidate A, Candidate B, and Candidate C.

Registering Voters

With the ballot ready, I needed the ability to register voters for the ballot.
So I added a create-voter.ts file with the following content:

```typescript
import { Voter } from "./types.ts";
import { validateRequest } from "./common.ts";

type Input = {
  email: string;
  name: string;
};

export async function handler(
  { email, name }: Input,
  context: Zipper.HandlerContext
) {
  validateRequest(context);

  const voters = (await Zipper.storage.get<Voter[]>("voters")) || [];

  const newVoter: Voter = {
    email: email,
    name: name,
    voted: false,
  };

  voters.push(newVoter);
  await Zipper.storage.set("voters", voters);

  return newVoter;
}
```

To register a voter, I decided to provide inputs for an email address and a name. There is also a boolean property called voted, which will be used to enforce the vote-only-once rule. Like before, I went ahead and created the delete-all-voters.ts file:

```typescript
import { validateRequest } from "./common.ts";

type Input = {
  force: boolean;
};

export async function handler(
  { force }: Input,
  context: Zipper.HandlerContext
) {
  validateRequest(context);

  if (force) {
    await Zipper.storage.set("voters", []);
  }
}
```

Now that we were ready to register some voters, I registered myself as a voter for the ballot.

Creating the Ballot

The last thing I needed to do was establish the ballot. This involved updating main.ts as shown below:

```typescript
import { Constants } from "./constants.ts";
import { Candidate, Voter } from "./types.ts";

type Input = {
  email: string;
};

export async function handler({ email }: Input) {
  const voters = (await Zipper.storage.get<Voter[]>("voters")) || [];
  const voter = voters.find((v) => v.email == email);

  const candidates = (await Zipper.storage.get<Candidate[]>("candidates")) || [];

  if (email && voter && candidates.length > 0) {
    return {
      candidates: candidates.map((candidate) => {
        return {
          Candidate: candidate.name,
          Votes: candidate.votes,
          actions: [
            Zipper.Action.create({
              actionType: "button",
              showAs: "refresh",
              path: "vote",
              text: `Vote for ${candidate.name}`,
              isDisabled: voter.voted,
              inputs: {
                candidateId: candidate.id,
                voterId: voter.email,
              },
            }),
          ],
        };
      }),
    };
  } else if (!email) {
    // Note: the original listing was missing this return statement.
    return (
      <>
        <h4>Error:</h4>
        <p>
          You must provide a valid email address in order to vote for this
          ballot.
        </p>
      </>
    );
  } else if (!voter) {
    return (
      <>
        <h4>Invalid Email Address:</h4>
        <p>
          The email address provided ({email}) is not authorized to vote for
          this ballot.
        </p>
      </>
    );
  } else {
    return (
      <>
        <h4>Ballot Not Ready:</h4>
        <p>No candidates have been configured for this ballot.</p>
        <p>Please try again later.</p>
      </>
    );
  }
}

export const config: Zipper.HandlerConfig = {
  description: {
    title: Constants.BALLOT_TITLE,
    subtitle: Constants.BALLOT_SUBTITLE,
  },
};
```

I added the following validations as part of the processing logic:

  • The email property must be included, or else a "You must provide a valid email address in order to vote for this ballot" message will be displayed.
  • The email value provided must match a registered voter, or else a "The email address provided is not authorized to vote for this ballot" message will be displayed.
  • There must be at least one candidate to vote on, or else a "No candidates have been configured for this ballot" message will be displayed.
  • If the registered voter has already voted, the voting buttons will be disabled for all candidates on the ballot.
The main.ts file contains a button for each candidate, all of which call the vote.ts file, displayed below:

```typescript
import { Candidate, Voter } from "./types.ts";

type Input = {
  candidateId: string;
  voterId: string;
};

export async function handler({ candidateId, voterId }: Input) {
  const candidates = (await Zipper.storage.get<Candidate[]>("candidates")) || [];
  const candidate = candidates.find((c) => c.id == candidateId);
  const candidateIndex = candidates.findIndex((c) => c.id == candidateId);

  const voters = (await Zipper.storage.get<Voter[]>("voters")) || [];
  const voter = voters.find((v) => v.email == voterId);
  const voterIndex = voters.findIndex((v) => v.email == voterId);

  if (candidate && voter) {
    candidate.votes++;
    candidates[candidateIndex] = candidate;

    voter.voted = true;
    voters[voterIndex] = voter;

    await Zipper.storage.set("candidates", candidates);
    await Zipper.storage.set("voters", voters);

    return `${voter.name} successfully voted for ${candidate.name}`;
  }

  return `Could not vote. candidate=${candidate}, voter=${voter}`;
}
```

At this point, the ballot applet was ready for use.

HOA Ballot In Action

For each registered voter, I would send them an email with a link similar to what is listed below:

https://squeeking-echoing-cricket.zipper.run/run/main.ts?email=some.email@example.com

The link would be customized to provide the appropriate email address for the email query parameter. Clicking the link runs the main.ts file and passes in the email parameter, avoiding the need for the registered voter to type in their email address. I decided to cast my vote for Candidate B. Once I pushed the button, the ballot was updated: the number of votes for Candidate B increased by one, and all of the voting buttons were disabled. Success!

Conclusion

Looking back on the requirements for the ballot applet, I realized I was able to meet all of the criteria, including the stretch goals, in about two hours – and this included having a UI, infrastructure, and deployment. The best part of this experience was that 100% of my time was focused on building my solution, and I didn't need to spend any time dealing with infrastructure or even the persistence store.

My readers may recall that I have been focused on the following mission statement, which I feel can apply to any IT professional:

"Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else." – J. Vester

The Zipper platform adheres to my personal mission statement 100%. In fact, they have been able to take things a step further than Ruby on Rails did, because I don't have to worry about where my service will run or what data store I will need to configure. Using the applet approach, my ballot is already deployed and ready for use.

If you are interested in giving applets a try, simply log in to zipper.dev and start building. Currently, using the Zipper platform is free. Give the AI-Generated Code template a try, as it is really cool to provide a paragraph describing what you want to build and see how closely the resulting applet matches what you have in mind. If you want to give my ballot applet a try, it is also available to fork in the Zipper gallery.

Have a really great day!
Software Verification and Validation With Simple Examples

By Stelios Manioudakis
Verification and validation are two distinct processes often used in various fields, including software development, engineering, and manufacturing. They are both used to ensure that the software meets its intended purpose, but they do so in different ways.

Verification

Verification is the process of checking whether the software meets its specifications. It answers the question: "Are we building the product right?" This means checking that the software does what it is supposed to do, according to the requirements that were defined at the start of the project. Verification is typically done by static testing, which means that the software is not actually executed. Instead, the code is reviewed, inspected, or walked through to ensure that it meets the specifications.

Validation

Validation is the process of checking whether the software meets the needs of its users. It answers the question: "Are we building the right product?" This means checking that the software is actually useful and meets the expectations of the people who will be using it. Validation is typically done by dynamic testing, which means that the software is actually executed and tested with real data.

Here are some typical examples of verification and validation:

  • Verification: Checking the code of a software program to make sure that it follows the correct syntax and that all of the functions are implemented correctly
  • Validation: Testing a software program with real data to make sure that it produces the correct results
  • Verification: Reviewing the design documents for a software system to make sure that they are complete and accurate
  • Validation: Conducting user acceptance testing (UAT) to make sure that a software system meets the needs of its users

When To Use

Conventionally, verification should be done early in the software development process, while validation should be done later. This is because verification can help to identify and fix errors early on, which can save time and money in the long run. Validation is also important, but it can be done after the software is mostly complete, since it involves real-world testing and feedback.

Another approach is to start verification and validation as early as possible and iterate. Small, incremental verification steps can be followed by validation whenever possible, and such iterations between verification and validation can be used throughout the development phase. The reasoning behind this approach is that both verification and validation may help to identify and fix errors early.

Weather Forecasting App

Imagine a team of software engineers developing a weather forecasting app. They have a specification that states, "The app should display the current temperature and a 5-day weather forecast accurately." During the testing phase, they meticulously review the code, check the algorithms, and ensure that the app indeed displays the temperature and forecast data correctly according to their specifications. If everything aligns with the specification, the app passes verification because it meets the specified criteria.

Now, let's shift our focus to the users of this weather app. They download the app, start using it, and provide feedback. Some users report that while the temperature and forecasts are accurate, they find the user interface confusing and difficult to navigate. Others suggest that the app should provide more detailed hourly forecasts. This feedback pertains to the user experience and user satisfaction, rather than specific technical specifications.
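To make the verification half of this example concrete, a spec-driven unit test might look like the sketch below. The formatForecast function, its module, and the sample fixture are hypothetical stand-ins for the app's real code:

```typescript
// Verification: test the implementation against the written specification.
// formatForecast and sampleObservations are hypothetical stand-ins.
import { strict as assert } from "node:assert";
import { test } from "node:test";
import { formatForecast, sampleObservations } from "./forecast";

test("spec: displays current temperature and a 5-day forecast", () => {
  const view = formatForecast(sampleObservations);
  assert.equal(typeof view.currentTemperature, "number");
  assert.equal(view.dailyForecasts.length, 5); // exactly five days, per the spec
});
```

No such test can judge whether the interface is pleasant to navigate – that answer only comes from validation with real users.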
Verification confirms that the app meets the technical requirements related to temperature and forecast accuracy, but validation uncovers issues with the user interface and user needs. The app may pass verification but fail validation because it doesn't fully satisfy the true needs and expectations of its users. This highlights that validation focuses on whether the product meets the actual needs and expectations of the users, which may not always align with the initial technical specifications.

Social Media App

Let's say you are developing a new social media app. The verification process would involve ensuring that the app meets the specified requirements, such as the ability to create and share posts, send messages, and add friends. This could be done by reviewing the app's code, testing its features, and comparing it to the requirements document. The validation process would involve ensuring that the app meets the needs of the users. This could be done by conducting user interviews, surveys, and usability testing. For example, you might ask users how they would like to be able to share posts, or what features they would like to see added to the app. In this example, verification would ensure that the app is technically sound, while validation would ensure that it is user-friendly and meets the needs of the users.

Online Payment Processing App

A team of software engineers is developing an online payment processing app. For verification, they would verify that the code for processing payments, calculating transaction fees, and handling currency conversions has been correctly implemented according to the app's design specifications. They would also ensure that the app adheres to industry security standards, such as the Payment Card Industry Data Security Standard (PCI DSS), by verifying that encryption protocols, access controls, and authentication mechanisms are correctly integrated. Finally, they would confirm that the user interface functions as intended, including verifying that the payment forms collect the necessary information and that error messages are displayed appropriately.

To validate the online payment processing software, they would use it in actual payment transactions. One case would be to process real payment transactions to confirm that the software can handle various types of payments, including credit cards, digital wallets, and international transactions, without errors. Another case would be to evaluate the user experience, checking whether users can easily navigate the app, make payments, and receive confirmation without issues.

Predicting Brain Activity Using fMRI

A neuroinformatics software app is developed to predict brain activity based on functional magnetic resonance imaging (fMRI) data. Verification would confirm that the algorithms used for preprocessing fMRI data, such as noise removal and motion correction, are correctly translated into code. You would also ensure that the user interface functions as specified, and that data input and output formats adhere to the defined standards, such as the Brain Imaging Data Structure (BIDS).

Validation would compare the predicted brain activity patterns generated by the software to the actual brain activity observed in the fMRI scans. Additionally, you might compare the software's predictions to results obtained using established methods or ground truth data to evaluate its accuracy.
Validation in this context ensures that the software not only runs without internal errors (as verified) but also that it reliably and accurately performs its primary function of predicting brain activity based on fMRI data. This step helps determine whether the software can be trusted for scientific or clinical purposes.

Predicting the Secondary Structure of RNA Molecules

Imagine you are a bioinformatician working on a software tool that predicts the secondary structure of RNA molecules. Your software takes an RNA sequence as input and predicts the most likely folding pattern. For verification, you want to verify that your RNA secondary structure prediction software calculates free energy values accurately using the algorithms described in the scientific literature. You compare the software's implementation against the published algorithms and confirm that the code follows the expected mathematical procedures precisely. In this context, verification ensures that your software performs the intended computations correctly and follows the algorithmic logic accurately.

To validate your RNA secondary structure prediction software, you would run it on a diverse set of real-world RNA sequences with known secondary structures. You would then compare the software's predictions against experimental data or other trusted reference tools to check whether it provides biologically meaningful results and whether its accuracy is sufficient for its intended purpose.

The Light Switch in a Conference Room

Consider a light switch in a conference room. Verification asks whether the lighting meets the requirements. The requirements might state that "the lights in front of the projector screen can be controlled independently of the other lights in the room." If the requirements are written down and the lights cannot be controlled independently, then the lighting fails verification, because the implementation does not meet the requirements.

Validation asks whether the users are satisfied with the lighting. This is a more subjective question, and it is not always easy to measure satisfaction with a single metric. For example, even if the lights can be controlled independently, the users may still be dissatisfied if the lights are too bright or too dim.

Wrapping Up

Verification is usually a more technical activity that uses knowledge about software artifacts, requirements, and specifications. Validation usually depends on domain knowledge – that is, knowledge of the application for which the software is written. For example, validation of medical device software requires knowledge from healthcare professionals, clinicians, and patients.

It is important to note that verification and validation are not mutually exclusive. In fact, they are complementary processes. Verification ensures that the software is built correctly, while validation ensures that the software is useful. By combining verification and validation, we can be more confident that our product will make customers happy.

Trend Report

Database Systems

This data-forward, analytics-driven world would be lost without its database and data storage solutions. As more organizations continue to transition their software to cloud-based systems, the growing demand for database innovation and enhancements has climbed to novel heights. We are upon a new era of the "Modern Database," where databases must both store data and ensure that data is prepped and primed securely for insights and analytics, integrity and quality, and microservices and cloud-based architectures. In our 2023 Database Systems Trend Report, we explore these database trends, assess current strategies and challenges, and provide forward-looking assessments of the database technologies most commonly used today. Further, readers will find insightful articles — written by several of our very own DZone Community experts — that cover hand-selected topics, including what "good" database design is, database monitoring and observability, and how to navigate the realm of cloud databases.

Refcard #008

Design Patterns

By Justin Albano

Refcard #388

Threat Modeling

By Apostolos Giannakidis

More Articles

Essential Complexity Is the Developer's Unique Selling Point

In my previous post, I highlighted the difference between efficiency and effectiveness and how it maps to artificial versus human intelligence. Doing things fast and with minimum waste is the domain of deterministic algorithms. But knowing whether we're building the right thing (effectiveness) is our domain. It's a slippery and subjective challenge, tied up with the confusing reality of trying to make human existence more comfortable with the help of software. Today I want to talk about essential complexity.

A fully autonomous AI programmer would need to be told exactly what we want, and why, or it should be sufficiently attuned to our values to fill in the gaps. Sadly, we cannot yet trust AI to reliably connect the dots without human help and corrections. It's not like telling an autonomous vehicle where you want to go. That has a very simple goal – and we're nowhere near a failsafe implementation. Essential complexity is about "debugging the specification," figuring out what we, the people, need and why we need it. Accidental complexity is a consequence of the alternatives we choose to implement these ideas. Frederick Brooks' enduring distinction between essential and accidental complexity maps onto the realms of human versus machine intelligence, much like the effectiveness/efficiency distinction of the previous post.

Since fully autonomous software production by businesspersons could only work if they stated exactly and unambiguously what they want, developers smugly conclude that their jobs are safe. I'm not so sure that such perfect specs are a necessary condition. I mean, they aren't now. Who postpones coding until they have complete, unambiguous, and final specifications? Programming means fleshing out the specification in your IDE from a sufficiently clear roadmap, filling in the details as you go along. It's not mere implementation; it's laying the bricks for the first floor while you're still tweaking the drawing for the roof. It seems inefficient, but it turns out we can't imagine our dream house perfectly unless we're halfway done building it – at least when it comes to making software.

AI is already very qualified to deal with much of the accidental complexity you encounter on the way. We should use it as much as we can. I know I devoted three articles to the Java OCP 17 exam (link here for Part 1, Part 2, and Part 3), but I believe (and hope) that rote knowledge of arcane details will go the way of the dodo. AI takes care of idiomatic usage; it can enforce clean code and good naming conventions, and even write source documentation. And it will get better and better at it. It can even do full-blown migrations of legacy code to new language and framework versions. I'm all for it. Migrating a Java 4 EJB2 behemoth to Spring Boot 3 microservices by hand is not my idea of fun.

If in five years' time the state of the art in code assistance still leaves you unimpressed while writing code, it's probably not because of some accidental complexity the machine can't handle. It's most likely the essential complexity it can't deal with. If your mortgage calculator outputs a 45.4% mortgage interest rate and the copilot won't warn you that you probably misplaced a decimal point, it's because it has never bought a house itself and won't notice that the figure is an order of magnitude too steep. Essential complexity can be expressed in any medium; it needn't be computer code.

Once you know exactly how something should work, most coding challenges become easy by comparison, provided you are competent in your language of choice. So, we break down complicated domains into manageable chunks and we grow the product, improving and expanding it with each iteration.

That doesn't always work. Sometimes the essential complexity cannot be reduced, and you need a stroke of genius to make progress. Take, for example, asymmetric key exchange, a tantalizing problem that tormented the greatest mathematical minds for decades, if not centuries. Alice and Bob can communicate using an uncrackable encryption key, but if they don't know that Eve has intercepted it, everything is in the open. If only we could have a pair of keys, such that you can encrypt a message with key A but can only decrypt it with key B, with no practical way to deduce one key from the other. If you then give out one part of the key to everybody and protect the other part with your life, you have solved the key exchange. It's simple enough to state where you want to arrive, but it's hardly a specification from which to start coding. It's not even a programming task. It's the quest for inventing an algorithm that may not even be possible. In Scrum Poker you would draw the infinity card. The algorithms that Whitfield Diffie and Martin Hellman ultimately devised fit on the proverbial napkin. Translating them to code would be trivial by comparison.
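Trivial indeed: today the fruit of that stroke of genius sits behind a couple of library calls. A minimal sketch of the pair-of-keys idea in TypeScript on Node.js, using RSA as a later embodiment of the same public-key insight (the message and key size are illustrative):

```typescript
// The pair-of-keys idea: encrypt with one key, decrypt only with the other.
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048, // no practical way to deduce privateKey from publicKey
});

// Anyone may encrypt with the public key...
const ciphertext = publicEncrypt(publicKey, Buffer.from("Meet at dawn, Bob"));

// ...but only the holder of the private key can read the result.
console.log(privateDecrypt(privateKey, ciphertext).toString()); // "Meet at dawn, Bob"
```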
But Diffie and Hellman could never have arrived at the solution incrementally behind a keyboard. Or read about the fascinating story of cracking the Enigma cipher by the team at Bletchley Park – an even more daunting task, because there was literally a war to win.

You cannot make a masterpiece to order, in art or in science. If we knew what made a good song, we could replicate the process, if not with software, then at least by some formulaic method. But that doesn't produce classics. Creativity is a hit-and-miss process, and few artists consistently produce works of genius. There's no reason why we should expect great AI advances in that department. But we can expect better tooling to get the creative juices flowing. Songwriters use a rhyming dictionary and thesaurus in search of inspiration. That's not cheating.

Fortunately, unless you're working at a university or research institute, enterprise software development and maintenance isn't about solving centuries-old math conundrums. However, you should ponder more deeply what we want and need, instead of learning a cool new framework or getting another AWS certificate. Uncovering the essential complexity is not just the job of business analysts in the organization. I can't wait for next-generation tooling to help us grapple with it, because that would be a genuine copilot instead of an autopilot.

By Jasper Sprengers
The Systemic Process of Debugging

Debugging is an integral part of software development. However, as projects grow in size and complexity, the process of debugging requires more structure and collaboration. This process is probably something you already do, as it is deeply ingrained into most teams; it's also a core part of the academic theory behind debugging. Its purpose is to prevent regressions and increase collaboration in a team environment. Without it, any issue we fix might come back to haunt us in the future. This process helps developers work cohesively and efficiently.

The Importance of Issue Tracking

I'm sure we all use an issue tracker. In that sense, we should all be aligned. But do you sometimes "just fix a bug" without going through the issue tracker? Honestly, I do that a lot – mostly in hobby projects, but occasionally even in professional settings. Even when working alone, this can become a problem.

Avoiding Parallel Work on the Same Bug

When working on larger projects, it's crucial to avoid situations where multiple developers are unknowingly addressing the same issue. This can lead to wasted effort and potential conflicts in the codebase. To prevent this:

  • Always log bugs in your issue-tracking system. Before starting work on a bug, ensure it's assigned to you and marked as active. This visibility allows the project manager and other team members to be aware, reducing the chances of overlapping work.
  • Stay updated on other issues. By keeping an eye on the issues your teammates are tackling, you can anticipate potential areas of conflict and adjust your approach accordingly.

Assuming you have a daily sync session, or even a weekly session, it's important to discuss issues. This prevents collisions: a teammate who hears the description of a bug might raise a flag. It also helps in pinpointing the root cause of a bug in some situations – an issue might be familiar, and communicating about it leaves a "paper trail." As the project grows, you will find that bugs keep coming back despite everything we do. History left behind in the issue tracker by teammates who are no longer on the team can be a lifesaver. Furthermore, the statistics we can derive from a properly classified issue tracker can help us pinpoint the problematic areas of the code that might need further testing and maybe refactoring.

The Value of Issues Over Pull Requests

We sometimes write comments and information directly into the pull request instead of the issue tracker. This can work in some situations, but it isn't ideal for the general case. Issues in a tracking system are often more accessible than pull requests or specific commits. When addressing a regression, linking the pull request to the originating issue is vital. This ensures that all discussions and decisions related to the bug are centralized and easily traceable.

Communication: Issue Tracker vs. Ephemeral Channels

I use Slack a lot. This is a problem; it's convenient, but it's ephemeral, and in more than one case, important information written in a Slack chat was gone. Emails aren't much of an improvement, especially in the long term. An email thread I had with a former colleague was cut short, and I had no context as to where it ended. Yes, having a conversation in the issue tracker is cumbersome and awkward, but we have a record.
Why We Sometimes Avoid the Issue Tracker

Developers might sometimes avoid discussing issues in the tracker because of:

  • Complex discussions: Some topics might feel too broad or intricate for the issue tracker.
  • Fear of public criticism: No one wants to appear ignorant or criticize a colleague in a permanent record.

As a result, some discussions might shift to private or ephemeral channels. However, while team cohesion and empathy are crucial, it's essential to log all relevant discussions in the issue tracker. This ensures that knowledge isn't lost, especially if a team member departs.

The Role of Daily Meetings

Daily meetings are invaluable for teams with multiple developers working on related tasks. These meetings provide a platform for:

  • Sharing updates: Inform the team about your current theories and direction.
  • Engaging in discussions: If a colleague's update sounds familiar, it's an opportunity to collaborate and avoid redundant work.

However, it's essential to keep these meetings concise; detailed discussions should transition to the issue tracker for a comprehensive record. I prefer two weekly meetings, as I find that's the optimal number. The first day of the week is usually a ramp-up day. We hold the first meeting on the morning of the second day of the week and the second meeting two days later. That reduces the load of a daily meeting while still keeping information fresh.

The Role of Testing in Debugging

We all (hopefully) use tests when developing, but debugging theory has a special place for tests.

Starting With Unit Tests

A common approach to debugging is to begin by creating a unit test that reproduces the issue. However, this might not always be feasible before understanding the problem. Nevertheless, once the problem is understood, we should:

  • Create a test before fixing the issue. This test should be part of the pull request that addresses the bug (see the sketch after this list).
  • Maintain a coverage ratio. Aim for a coverage ratio of 60% or higher per pull request to ensure that changes are adequately tested.

A test acts as a safeguard against a regression. If the bug resurfaces, it will be a slightly different variant of that same bug.
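What such a test looks like depends entirely on the bug, but the shape is always the same: reproduce first, then fix. A minimal sketch using Node's built-in test runner, where the issue number, function, and module are hypothetical:

```typescript
// Regression test committed with the fix for a hypothetical tracker issue.
import { strict as assert } from "node:assert";
import { test } from "node:test";
import { parseAmount } from "./parseAmount"; // hypothetical module under test

test("issue #1234: parseAmount handles comma decimal separators", () => {
  // Reproduces the reported bug: "1,5" used to be parsed as 15.
  assert.equal(parseAmount("1,5"), 1.5);
});
```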
Unit Tests vs. Integration Tests

While unit tests are fast and provide immediate feedback, they primarily prevent regressions. They might not be as effective in verifying overall quality. On the other hand, integration tests, though potentially slower, offer a comprehensive quality check. They can sometimes be the only way to reproduce certain issues. Most of the difficult bugs I ran into in my career were in the interconnect areas between modules. This is an area that unit tests don't cover very well, which is why integration tests are far more important than unit tests for overall application quality.

To ensure quality, focus on integration tests for coverage. Relying solely on unit test coverage can be misleading; it might lead to dead code and added complexity in the system. However, as part of the debugging process, it's very valuable to have a unit test, as it's far easier to debug and much faster.

Final Word

A structured approach to debugging, combined with effective communication and a robust testing strategy, can significantly enhance the efficiency and quality of software development. This isn't about convenience; the process underlying debugging leaves a paper trail for future work. I start every debugging session by searching the issue tracker. In many cases, it yields gold that might not lead me to the issue directly but still points me in the right direction. The ability to rely on a unit test that was committed when solving a similar bug is invaluable. It gives me a leg up on resolving similar issues moving forward.

By Shai Almog
Revolutionizing Software Testing

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

Artificial intelligence (AI) has revolutionized the realm of software testing, introducing new possibilities and efficiencies. The demand for faster, more reliable, and more efficient testing processes has grown exponentially with the increasing complexity of modern applications, and AI has emerged as a game-changing force in addressing these challenges. By leveraging AI algorithms, machine learning (ML), and advanced analytics, software testing has undergone a remarkable transformation, enabling organizations to achieve unprecedented levels of speed, accuracy, and coverage in their testing endeavors. This article delves into the profound impact of AI on automated software testing, exploring its capabilities, benefits, and the potential it holds for the future of software quality assurance.

An Overview of AI in Testing

This introduction aims to shed light on the role of AI in software testing, focusing on the key aspects that drive its transformative impact.

Figure 1: AI in testing

Elastically Scale Functional, Load, and Performance Tests

AI-powered testing solutions enable the effortless allocation of testing resources, ensuring optimal utilization and adaptability to varying workloads. This scalability ensures comprehensive testing coverage while maintaining efficiency.

AI-Powered Predictive Bots

AI-powered predictive bots are a significant advancement in software testing. Bots leverage ML algorithms to analyze historical data, patterns, and trends, enabling them to make informed predictions about potential defects or high-risk areas. By proactively identifying potential issues, predictive bots contribute to more effective and efficient testing processes.

Automatic Update of Test Cases

With AI algorithms monitoring the application and its changes, test cases can be dynamically updated to reflect modifications in the software. This adaptability reduces the effort required for test maintenance and ensures that the test suite remains relevant and effective over time.

AI-Powered Analytics of Test Automation Data

By analyzing vast amounts of testing data, AI-powered analytical tools can identify patterns, trends, and anomalies, providing valuable information to enhance testing strategies and optimize testing efforts. This data-driven approach empowers testing teams to make informed decisions and uncover hidden patterns that traditional methods might overlook.

Visual Locators

Visual locators, a type of AI application in software testing, focus on visual elements such as user interfaces and graphical components. AI algorithms can analyze screenshots and images, enabling accurate identification of and interaction with visual elements during automated testing. This capability enhances the reliability and accuracy of visual testing, ensuring a seamless user experience.

Self-Healing Tests

AI algorithms continuously monitor test execution, analyzing results and detecting failures or inconsistencies. When issues arise, self-healing mechanisms automatically attempt to resolve the problem, adjusting the test environment or configuration. This intelligent resilience minimizes disruptions and optimizes the overall testing process.

What Is AI-Augmented Software Testing?

AI-augmented software testing refers to the utilization of AI techniques – such as ML, natural language processing (NLP), and data analytics – to enhance and optimize the entire software testing lifecycle.
It involves automating test case generation, intelligent test prioritization, anomaly detection, predictive analysis, and adaptive testing, among other tasks. By harnessing the power of AI, organizations can improve test coverage, detect defects more efficiently, reduce manual effort, and ultimately deliver high-quality software with greater speed and accuracy.

Benefits of AI-Powered Automated Testing

AI-powered software testing offers a plethora of benefits that revolutionize the testing landscape. One significant advantage lies in its codeless nature, which eliminates the need to memorize intricate syntax. Embracing simplicity, it empowers users to effortlessly create testing processes through intuitive drag-and-drop interfaces. Scalability becomes a reality, as the workload can be distributed among multiple workstations, ensuring efficient utilization of resources. The cost-saving aspect is remarkable, as minimal human intervention is required, resulting in substantial reductions in workforce expenses. With tasks executed by intelligent bots, accuracy reaches unprecedented heights, minimizing the risk of human error. Furthermore, this automated approach amplifies productivity, enabling testers to achieve exceptional output levels. Irrespective of the software type – be it a web-based desktop application or a mobile application – the flexibility of AI-powered testing seamlessly adapts to diverse environments.

Figure 2: Benefits of AI for test automation

Mitigating the Challenges of AI-Powered Automated Testing

AI-powered automated testing has revolutionized the software testing landscape, but it is not without its challenges. One of the primary hurdles is the need for high-quality training data. AI algorithms rely heavily on diverse and representative data to perform effectively. Therefore, organizations must invest time and effort in curating comprehensive and relevant datasets that encompass various scenarios, edge cases, and potential failures.

Another challenge lies in the interpretability of AI models. Understanding why and how AI algorithms make specific decisions can be critical for gaining trust and ensuring accurate results. Addressing this challenge requires implementing techniques such as explainable AI, model auditing, and transparency. Furthermore, the dynamic nature of software environments poses a challenge in maintaining AI models' relevance and accuracy. Continuous monitoring, retraining, and adaptation of AI models become crucial to keeping pace with evolving software systems. Additionally, ethical considerations, data privacy, and bias mitigation should be diligently addressed to maintain fairness and accountability in AI-powered automated testing.

AI models used in testing can sometimes produce false positives (incorrectly flagging a non-defect as a defect) or false negatives (failing to identify an actual defect). Balancing the precision and recall of AI models is important to minimize false results. AI models can also exhibit biases and may struggle to generalize to new or uncommon scenarios. Adequate training and validation of AI models are necessary to mitigate biases and ensure their effectiveness across diverse testing scenarios.
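As a quick illustration of that precision/recall trade-off, both metrics can be computed from a batch of model predictions as in the minimal sketch below (the Prediction shape is a hypothetical stand-in for a real tool's output):

```typescript
// Precision: of everything flagged, how much was a real defect?
// Recall: of all real defects, how much did we flag?
type Prediction = { flagged: boolean; isDefect: boolean };

function precisionRecall(results: Prediction[]) {
  const tp = results.filter((r) => r.flagged && r.isDefect).length;
  const fp = results.filter((r) => r.flagged && !r.isDefect).length; // false positives
  const fn = results.filter((r) => !r.flagged && r.isDefect).length; // false negatives
  return { precision: tp / (tp + fp), recall: tp / (tp + fn) };
}
```

A model tuned to flag everything scores perfect recall but poor precision; one that rarely flags anything does the opposite. Tuning the decision threshold moves a model along this trade-off.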
Human intervention still plays a critical role in designing test suites. Testers can identify critical test cases, edge cases, and scenarios that require human intuition or creativity, while leveraging AI to handle repetitive or computationally intensive tasks. Continuous improvement is made possible by encouraging a feedback loop between human testers and AI systems: human experts can provide feedback on the accuracy and relevance of AI-generated test cases or predictions, helping improve the performance and adaptability of AI models. Human testers should also take part in the verification and validation of AI models, ensuring that they align with the intended objectives and requirements, and evaluating the effectiveness, robustness, and limitations of AI models in specific testing contexts.

AI-Driven Testing Approaches

AI-driven testing approaches have ushered in a new era in software quality assurance, revolutionizing traditional testing methodologies. By harnessing the power of artificial intelligence, these innovative approaches optimize and enhance various aspects of testing, including test coverage, efficiency, accuracy, and adaptability. This section explores the key AI-driven testing approaches – differential testing, visual testing, declarative testing, and self-healing automation – which leverage AI algorithms and advanced analytics to elevate the effectiveness and efficiency of software testing, ensuring higher-quality applications that meet the demands of the rapidly evolving digital landscape:

  • Differential testing assesses discrepancies between application versions and builds, categorizes the variances, and utilizes feedback to enhance the classification process through continuous learning.
  • Visual testing utilizes image-based learning and screen comparisons to assess the visual aspects and user experience of an application, thereby ensuring the integrity of its look and feel.
  • Declarative testing expresses the intention of a test in a natural or domain-specific language, allowing the system to autonomously determine the most appropriate approach to execute the test.
  • Self-healing automation automatically rectifies element selection in tests when there are modifications to the user interface (UI), ensuring the continuity of reliable test execution (see the sketch below).
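The self-healing idea is easiest to see in miniature. The sketch below tries a primary locator first and falls back to previously learned alternatives; the selectors and the healing strategy are illustrative, not any specific tool's API:

```typescript
// Self-healing element lookup: try the primary selector, then fallbacks
// that earlier runs (or an AI model) recorded for the same element.
function findWithHealing(selectors: string[]): Element | null {
  for (const selector of selectors) {
    const el = document.querySelector(selector);
    if (el) return el; // first locator that still matches wins
  }
  return null; // every known locator failed; a real tool would re-learn here
}

const submitButton = findWithHealing([
  "#checkout-submit",           // primary locator
  "button[data-test='submit']", // fallback learned from an earlier run
  "form.checkout button",       // last-resort structural guess
]);
```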
Key Considerations for Harnessing AI for Software Testing

Many contemporary test automation tools infused with AI provide support for open-source test automation frameworks such as Selenium and Appium. AI-powered automated software testing encompasses essential features such as auto-code generation and the integration of exploratory testing techniques.

Open-Source AI Tools To Test Software

When selecting an open-source testing tool, it is essential to consider several factors. First, verify that the tool is actively maintained and supported. Second, assess whether the tool aligns with the skill set of the team. Finally, evaluate the features, benefits, and challenges presented by the tool to ensure they are in line with your specific testing requirements and organizational objectives. A few popular open-source options include, but are not limited to:

  • Carina – An AI-driven, free-forever, scriptless approach to automating functional, performance, visual, and compatibility tests
  • TestProject – Offered the industry's first free Appium AI tools in 2021, expanding upon the AI tools for Selenium that it had introduced in 2020 for self-healing technology
  • Cerberus Testing – A low-code and scalable test automation solution that offers a self-healing feature called Erratum and has a forever-free plan

Designing Automated Tests With AI and Self-Testing

AI has made significant strides in transforming the landscape of automated testing, offering a range of techniques and applications that revolutionize software quality assurance. Some of the prominent techniques and algorithms are provided in the tables below, along with the purposes they serve.

Table 1: Key techniques and applications of AI in automated testing

| Key Technique | Applications |
|---|---|
| Machine learning | Analyze large volumes of testing data, identify patterns, and make predictions for test optimization, anomaly detection, and test case generation |
| Natural language processing | Facilitate the creation of intelligent chatbots, voice-based testing interfaces, and natural language test case generation |
| Computer vision | Analyze image and visual data in areas such as visual testing, UI testing, and defect detection |
| Reinforcement learning | Optimize test execution strategies, generate adaptive test scripts, and dynamically adjust test scenarios based on feedback from the system under test |

Table 2: Key algorithms used for AI-powered automated testing

| Algorithm | Purpose | Applications |
|---|---|---|
| Clustering algorithms | Segmentation | k-means and hierarchical clustering are used to group similar test cases, identify patterns, and detect anomalies |
| Sequence generation models (recurrent neural networks or transformers) | Text classification and sequence prediction | Trained to generate sequences such as test scripts or sequences of user interactions for log analysis |
| Bayesian networks | Dependencies and relationships between variables | Test coverage analysis, defect prediction, and risk assessment |
| Convolutional neural networks | Image analysis | Visual testing |
| Evolutionary algorithms (genetic algorithms) | Natural selection | Optimize test case generation, test suite prioritization, and test execution strategies by applying genetic operators like mutation and crossover on existing test cases to create new variants, which are then evaluated against fitness criteria |
| Decision trees, random forests, support vector machines, and neural networks | Classification | Classification of software components |
| Variational autoencoders and generative adversarial networks | Generative AI | Generate new test cases that cover different scenarios or edge cases through test data generation, creating synthetic data that resembles real-world scenarios |

Real-World Examples of AI-Powered Automated Testing

AI-powered visual testing platforms perform automated visual validation of web and mobile applications. They use computer vision algorithms to compare screenshots and identify visual discrepancies, enabling efficient visual testing across multiple platforms and devices. NLP and ML are combined to generate test cases from plain-English descriptions. Such tools automatically execute these test cases, detect bugs, and provide actionable insights to improve software quality.
Self-healing capabilities are also provided by automatically adapting test cases to changes in the application's UI, improving test maintenance efficiency.

Quantum AI-Powered Automated Testing: The Road Ahead

The future of quantum AI-powered automated software testing holds great potential for transforming the way testing is conducted.

Figure 3: Transition of automated testing from AI to quantum AI

  • Quantum computing's ability to handle complex optimization problems can significantly improve test case generation, test suite optimization, and resource allocation in automated testing.
  • Quantum ML algorithms can enable more sophisticated and accurate models for anomaly detection, regression testing, and predictive analytics.
  • Quantum computing's ability to perform parallel computations can greatly accelerate the execution of complex test scenarios and large-scale test suites.
  • Quantum algorithms can help enhance security testing by efficiently simulating and analyzing cryptographic algorithms and protocols.
  • Quantum simulation capabilities can be leveraged to model and simulate complex systems, enabling more realistic and comprehensive testing of software applications in various domains, such as finance, healthcare, and transportation.

Parting Thoughts

AI has significantly revolutionized the traditional landscape of testing, enhancing the effectiveness, efficiency, and reliability of software quality assurance processes. AI-driven techniques such as ML, anomaly detection, NLP, and intelligent test prioritization have enabled organizations to achieve higher test coverage, early defect detection, streamlined test script creation, and adaptive test maintenance. The integration of AI in automated testing not only accelerates the testing process but also improves overall software quality, leading to enhanced customer satisfaction and reduced time to market. As AI continues to evolve and mature, it holds immense potential for further advancements in automated testing, paving the way for a future where AI-driven approaches become the norm in ensuring the delivery of robust, high-quality software applications. Embracing the power of AI in automated testing is not only a strategic imperative but also a competitive advantage for organizations looking to thrive in today's rapidly evolving technological landscape.

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

By Tuhin Chattopadhyay
Deploy a Session Recording Solution Using Ansible and Audit Your Bastion Host

Learn how to record SSH sessions on a Red Hat Enterprise Linux (RHEL) VSI in a private VPC network using built-in packages. The VPC private network is provisioned through Terraform, and the RHEL packages are installed using Ansible automation.

What Is Session Recording and Why Is It Required?

As noted in "Securely record SSH sessions on RHEL in a private VPC network," a bastion host and a jump server are both security mechanisms used in network and server environments to control and enhance security when connecting to remote systems. They serve similar purposes but differ somewhat in their implementation and use cases. The bastion host is placed in front of the private network to take SSH requests from public traffic and pass them to the downstream machine. Because bastion hosts and jump servers are exposed to public traffic, they are vulnerable to intrusion.

Session recording helps the administrator of a system audit user SSH sessions and comply with regulatory requirements. In the event of a security breach, you as an administrator would like to audit and analyze the user sessions. This is critical for a security-sensitive system.

Before deploying the session recording solution, you need to provision a private VPC network following the instructions in the article "Architecting a Completely Private VPC Network and Automating the Deployment." Alternatively, if you are planning to use your own VPC infrastructure, you need to attach a floating IP to the virtual server instance, attach a public gateway to each of the subnets, and allow network traffic from public internet access.

Deploy Session Recording Using Ansible

To deploy the session recording solution, you need to have the following packages installed on the RHEL VSI:

  • tlog
  • SSSD
  • cockpit-session-recording

The packages will be installed through Ansible automation on all the VSIs, both the bastion hosts and the RHEL VSI. If you haven't done so yet, clone the GitHub repository and move to the Ansible folder:

```shell
git clone https://github.com/VidyasagarMSC/private-vpc-network
cd ansible
```

Create hosts.ini from the template file:

```shell
cp hosts_template.ini hosts.ini
```

Update the hosts.ini entries as per your VPC IP addresses:

```ini
[bastions]
10.10.0.13
10.10.65.13

[servers]
10.10.128.13

[bastions:vars]
ansible_port=22
ansible_user=root
ansible_ssh_private_key_file=/Users/vmac/.ssh/ssh_vpc
packages="['tlog','cockpit-session-recording','systemd-journal-remote']"

[servers:vars]
ansible_port=22
ansible_user=root
ansible_ssh_private_key_file=/Users/vmac/.ssh/ssh_vpc
ansible_ssh_common_args='-J root@10.10.0.13'
packages="['tlog','cockpit-session-recording','systemd-journal-remote']"
```

Run the Ansible playbook to install the packages from an IBM Cloud private mirror/repository:

```shell
ansible-playbook main_playbook.yml -i hosts.ini --flush-cache
```

Running Ansible playbooks

Once the playbook completes, when you SSH into the RHEL machine you will see a note saying that the current session is being recorded.

Check the Session Recordings, Logs, and Reports

If you closely observe the messages post-SSH, you will see a URL to the web console, which can be accessed using the machine name or private IP over port 9090. To allow traffic on port 9090, change the value of the allow_port_9090 variable to true in the Terraform code and run terraform apply. The latest terraform apply will add ACL and security group rules to allow traffic on port 9090. Now, open a browser and navigate to http://10.10.128.13:9090.
To access the console using the VSI name, you need to set up a private DNS (out of scope for this article). You need the root password to access the web console.

RHEL web console

Navigate to Session Recording to see the list of session recordings. Along with session recordings, you can check the logs, diagnostic reports, etc.

Session recording on the web console

Recommended Reading

- How to use Schematics - Terraform UI to provision the cloud resources

By Vidyasagar (Sarath Chandra) Machupalli CORE
Four Ways for Developers To Limit Liability as Software Liability Laws Seem Poised for Change

For many years, the idea of liability for defects in software code fell into a gray area. You can find debate about the topic going back and forth since at least the early 1990s. Throughout, software developers argued that they shouldn't be held liable for coding flaws that are both difficult to detect and sometimes even harder to fix. And in any case, knowingly exploiting software defects for nefarious purposes is already a crime, so shouldn't cyber criminals alone bear the responsibility for their actions?

As a result of these arguments, there haven't been any serious attempts to pass legislation making developers liable for flaws in their code. And for even more ironclad protection, most software developers also include liability waivers in their EULAs. However, there's reason to believe that the winds surrounding this issue are beginning to shift. As a result of high-level policy reviews originating in the White House, multiple federal agencies, including the NSA, FBI, and CISA, are now calling on developers to adopt workflows that make their software products secure by design and by default. And if that's the stance the US's top law enforcement agencies are going to take from now on, it's reasonable to assume that some kind of regulatory or statutory changes to that effect may soon follow.

The mere suggestion of such changes, however, should be enough to spur the most sensible software developers into taking action. The good news is that it shouldn't be difficult for most to cover their bases with respect to software liability. To get them started, here are four steps developers can take right now to guard against potential changes to software liability laws.

Build Security Checks Into Software Pipelines

The first thing to do to head off software liability concerns is to create security checkpoints throughout your development processes. This should begin with thorough code review and certification processes for all repurposed code. This is essential since most software developers now rely on reused open-source code or on self-created code libraries that speed up the development of new software. But as the recent Log4j security incident demonstrated, reused code can introduce vulnerabilities into new software that come back to haunt you. So, it's a good idea to formalize a process whereby you can attest to the security of all reused code, both internally developed and otherwise, before it makes it into production software.

Then, it's a good idea to make extensive use of source code security analyzers throughout the development process. This makes it more likely that security issues are identified early on, while they're still relatively easy to fix. It can also serve as evidence of your efforts at secure code development, should questions arise after a finished product is released.

Commit to 3rd-Party Pre-Release Code Security Audits

The next thing to do to guard against potential software liability concerns is to commit to having all software products go through a 3rd-party code security audit prior to release. This provides yet another opportunity to fix security flaws before they can cause liability issues. Plus, it guarantees that an expert, neutral set of eyes goes over your work before it ships. That can help combat the kind of vulnerability blindness that often afflicts development teams who become accustomed to reviewing their own work. It can also reassure customers that no effort was spared to put their security front and center in the development process.
Provide Adequate Customer Support

In recognition of the fact that no software is ever bulletproof, it's a good idea to develop a plan to support customers in the event they're affected by a security issue in one of your software products. For business customers, this might extend to providing direct and/or contracted technical support to aid recovery efforts after a security incident. Of course, you can only take such efforts so far due to the costs involved, but being there for an affected customer could head off any attempts to hold you liable.

For individual customers, it's also smart to partner with an identity protection firm so they are ready to assist if your software suffers a security issue that might lead to identity theft. According to Hari Ravichandran, CEO of Aura, "Competent identity fraud prevention can be an inexpensive way to prevent users from suffering financial losses in the first place. It's a great way to limit liability in the aftermath of a software or data breach."

Purchase the Right Liability Insurance

Even if you go to great lengths to find and fix every bug and potential security flaw, it's always possible for something to go overlooked. As a result, it's a good idea to come up with a liability risk management plan that includes proper insurance coverage. This is essential because, although there aren't any specific laws or regulations making software developers liable for losses stemming from the use of their products, general product liability laws can and do get applied to developers on occasion. The good news is that the preceding secure code development practices should go a long way toward proving good faith in any potential liability case. However, there's no way to eliminate the risk of losses stemming from even a frivolous lawsuit. Therefore, to protect yourself, it's a good idea to carry errors and omissions (E&O) insurance and a general product liability insurance policy at the very least.

The Takeaway

The bottom line is that it's just a good idea for software developers to get their code security ducks in a row now to prepare for any coming shifts in software liability law. They should also develop plans now to deal with the aftermath of a security event involving their software. That way, whether there's any change to the existing liability status quo or not, they'll be ready to handle any potential outcome.

By Philip Piletic CORE
AI for Web Devs: Project Introduction and Setup

If you’re anything like me, you’ve noticed the massive boom in AI technology. It promises to disrupt not just software engineering but every industry. THEY’RE COMING FOR US!!! Just kidding ;P

I’ve been bettering my understanding of what these tools are and how they work, and decided to create a tutorial series for web developers to learn how to incorporate AI technology into web apps. In this series, we’ll learn how to integrate OpenAI‘s AI services into an application built with Qwik, a JavaScript framework focused on the concept of resumability (this will be relevant to understand later). Here’s what the series outline looks like:

1. Intro and Setup
2. Your First AI Prompt
3. Streaming Responses
4. How Does AI Work
5. Prompt Engineering
6. AI-Generated Images
7. Security and Reliability
8. Deploying

We’ll get into the specifics of OpenAI and Qwik where it makes sense, but I will mostly focus on general-purpose knowledge, tooling, and implementations that should apply to whatever framework or toolchain you are using. We’ll be working as closely to fundamentals as we can, and I’ll point out which parts are unique to this app.

Here’s a little sneak preview. I thought it would be cool to build an app that takes two opponents and uses AI to determine who would win in a hypothetical fight. It provides some explanation and the option to create an AI-generated image. Sometimes the results come out a little wonky, but that’s what makes it fun. I hope you’re excited to get started because in this first post, we are mostly going to work on... boilerplate :/

Prerequisites

Before we start building anything, we have to cover a couple of prerequisites. Qwik is a JavaScript framework, so we will have to have Node.js (and NPM) installed. You can download the most recent version, but anything above version v16.8 should work. I’ll be using version 20. Next, we’ll also need an OpenAI account to have access to their API.

At the end of the series, we will deploy our applications to a VPS (Virtual Private Server). The steps we follow should be the same regardless of what provider you choose. I’ll be using Akamai’s cloud computing services (formerly Linode).

Setting Up the Qwik App

Assuming we have the prerequisites out of the way, we can open a command line terminal and run the command: npm create qwik@latest. This will run the Qwik CLI that will help us bootstrap our application. It will ask you a series of configuration questions, and then generate the project for you. Here’s what my answers looked like:

If everything works, open up the project and start exploring. Inside the project folder, you’ll notice some important files and folders:

- /src: Contains all application business logic
- /src/components: Contains reusable components to build our app with
- /src/routes: Responsible for Qwik’s file-based routing; each folder represents a route (which can be a page or API endpoint). To make a page, drop an index.{jsx|tsx} file in the route’s folder (see the sketch after this list).
- /src/root.tsx: This file exports the root component responsible for generating the HTML document root.
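To make the file-based routing idea concrete, here is a minimal sketch of a hypothetical API endpoint; the route path and response shape are my own examples, not part of this project. In a default Qwik City setup, a file at /src/routes/api/health/index.ts could look roughly like this:

TypeScript
// /src/routes/api/health/index.ts (hypothetical example route)
// Qwik City maps this folder to GET /api/health
import type { RequestHandler } from "@builder.io/qwik-city";

export const onGet: RequestHandler = async ({ json }) => {
  // Respond with a 200 status and a small JSON payload
  json(200, { status: "ok", time: new Date().toISOString() });
};

Pages work the same way, except the index.tsx file exports a component instead of a request handler.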
Start Development

Qwik uses Vite as a bundler, which is convenient because Vite has a built-in development server. It supports running our application locally and updating the browser when files change. To start the development server, we can open our project in a terminal and execute the command npm run dev. With the dev server running, you can open the browser and head to http://localhost:5173, and you should see a very basic app. Any time we make changes to our app, we should see those changes reflected almost immediately in the browser.

Add Styling

This project won’t focus too much on styling, so this section is totally optional if you want to do your own thing. To keep things simple, I’ll use Tailwind. The Qwik CLI makes it easy to add the necessary changes by executing the terminal command npm run qwik add. This will prompt you with several available Qwik plugins to choose from. You can use your arrow keys to move down to the Tailwind plugin and press Enter. Then it will show you the changes it will make to your codebase and ask for confirmation. As long as it looks good, you can hit Enter once again.

For my projects, I also like to have a consistent theme, so I keep a file in my GitHub to copy and paste styles from. Obviously, if you want your own theme, you can ignore this step, but if you want your project to look as amazing as mine, copy the styles from this file on GitHub into the /src/global.css file. You can replace the old styles, but leave the Tailwind directives in place.

Prepare Homepage

The last thing we’ll do today to get the project to a good starting point is make some changes to the homepage. This means making changes to /src/routes/index.tsx. By default, this file starts out with some very basic text and an example for modifying the HTML <head> by exporting a head variable. The changes I want to make include:

- Removing the head export
- Removing all text except the <h1>; feel free to add your own page title text
- Adding some Tailwind classes to center the content and make the <h1> larger
- Wrapping the content with a <main> tag to make it more semantic
- Adding Tailwind classes to the <main> tag to add some padding and center the contents

These are all minor changes that aren’t strictly necessary, but I think they will provide a nice starting point for building out our app in the next post. Here’s what the file looks like after my changes.

TypeScript
import { component$ } from "@builder.io/qwik";

export default component$(() => {
  return (
    <main class="max-w-4xl mx-auto p-4">
      <h1 class="text-6xl">Hi [wave emoji]</h1>
    </main>
  );
});

And in the browser, it looks like this:

Conclusion

That’s all we’ll cover today. Again, this post was mostly focused on getting the boilerplate stuff out of the way so that the next post can be dedicated to integrating OpenAI’s API into our project. With that in mind, I encourage you to take a moment to think about some AI app ideas that you might want to build. There will be a lot of flexibility for you to put your own spin on things. I’m excited to see what you come up with, and if you would like to explore the code in more detail, I’ll post it on my GitHub account.

By Austin Gil CORE
The Convergence of Testing and Observability

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

One of the core capabilities that has seen increased interest in the DevOps community is observability. Observability improves monitoring in several vital ways, making it easier and faster to understand business flows and allowing for enhanced issue resolution. Furthermore, observability goes beyond an operations capability and can be used for testing and quality assurance.

Testing has traditionally faced the challenge of identifying the appropriate testing scope. "How much testing is enough?" and "What should we test?" are questions each testing executive asks, and the answers have been elusive. There are fewer arguments about testing new functionality; while not trivial, you know the functionality you built in new features and hence can derive the proper testing scope from your understanding of the functional scope. But what else should you test? What is a comprehensive general regression testing suite, and what previous functionality will be impacted by the new functionality you have developed and will release? Observability can help us with this, as well as with the unavoidable defect investigation. But before we get to this, let's take a closer look at observability.

What Is Observability?

Observability is not monitoring with a different name. Monitoring is usually limited to observing a specific aspect of a resource, like disk space or memory of a compute instance. Monitoring one specific characteristic can be helpful in an operations context, but it usually only detects a subset of what is concerning. All monitoring can show is that the system looks okay, while users can still be experiencing significant outages. Observability aims to let us see the state of the system by making data flows "observable." This means that we can identify when something starts to behave out of order and requires our attention.

Observability combines logs, metrics, and traces from infrastructure and applications to gain insights. Ideally, it organizes these around workflows instead of system resources and, as such, creates a functional view of the system in use. Done correctly, it lets you see what functionality is being executed and how frequently, and it enables you to identify performance characteristics of the system and workflow.

Figure 1: Observability combines metrics, logs, and traces for insights

One benefit of observability is that it shows you the actual system. It is not biased by what the designers, architects, and engineers think should happen in production. It shows the unbiased flow of data. The users, over time (and sometimes from the very first day), find ways to use the system quite differently from what was designed. Observability makes such changes in behavior visible. Observability is incredibly powerful in debugging system issues, as it allows us to navigate the system to see where problems occur.

Observability requires a dedicated setup and some contextual knowledge, similar to traceability. Traceability is the ability to follow a system transaction over time through all the different components of our application and infrastructure architecture, which means you have to have common information, like an ID, that enables this. OpenTelemetry is an open standard that can be used and provides useful guidance on how to set this up. Observability makes identifying production issues a lot easier. And we can use observability for our benefit in testing, too.
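To make the traceability requirement concrete, here is a minimal sketch using the @opentelemetry/api package in TypeScript (the service and span names are illustrative, not from the article). Each unit of work runs inside a span, and a configured SDK propagates the shared trace ID across components:

TypeScript
import { trace, SpanStatusCode } from "@opentelemetry/api";

// Named tracer for this component; the SDK stitches its spans into end-to-end traces
const tracer = trace.getTracer("order-service");

export async function placeOrder(orderId: string): Promise<void> {
  // startActiveSpan makes this span the parent of any spans created inside it
  await tracer.startActiveSpan("placeOrder", async (span) => {
    try {
      span.setAttribute("order.id", orderId);
      // ... business logic that calls other instrumented components ...
    } catch (err) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end(); // always close the span so the trace is complete
    }
  });
}

With consistent trace IDs in place, the same instrumentation serves production debugging and, as the next sections show, testing.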
Observability of Testing: How to Look Left

Two aspects of observability make it useful in the testing context: its ability to make actual system usage observable and its usefulness in finding problem areas during debugging. Understanding the actual system behavior is most directly useful during performance testing. Performance testing is the pinnacle of testing since it tries to get as close to the realistic peak behavior of a system as possible. Unfortunately, performance testing scenarios are often based on human knowledge of the system instead of objective information. For example, performance testing might be based on the prediction of 10,000 customer interactions per hour during a sales campaign, based on information from the sales manager. Observability information can help define the testing scenarios: use it to look for the times the system was under the most stress in production, and then simulate similar situations in the performance test environment.

We can use a system signature to compare behaviors. A system signature in the context of observability is the set of values for logs, metrics, and traces during a specific period. Take, for example, a marketing promotion for new customers. The signature of the system should change during that period to show more new account creations with their associated functionality, and the related infrastructure should show up as more "busy." If the signature does not change during the promotion, we would predict that the business metrics (e.g., user sign-ups) won't move either. In this example, the business metrics and the signature can be easily matched.

Figure 2: A system behaving differently in test, which shows up in the system signature

In many other cases, this is not true. Imagine an example where we change the recommendation engine to use our warehouse data going forward. We expect the system signature to show increased data flows between the recommendation engine and our warehouse system. You can see how system signatures and changes to them can be useful for testing: any differences in signature between production and the testing systems should be explainable by the intended changes of the upcoming release. Otherwise, investigation is required.

In the same way, information from the production observability system can be used to define a regression suite that reflects the functionality most frequently used in production. Observability can give you information about the workflows still actively in use and the workflows that have stopped being relevant. This information can optimize your regression suite both from a maintenance perspective and, more importantly, from a risk perspective, making sure that core functionality, as experienced by the user, remains in a working state.

Implementing observability in your test environments means you can use the power of observability for both production issues and your testing defects. It removes the need for debugging modes to some degree and relies upon the same system capability as production. This way, observability becomes how you work across both dev and ops, which helps break down silos.
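As a rough illustration of that comparison (my own simplification, not from the article: a "signature" reduced to per-workflow execution counts), a check like the following could flag differences between production and test that the planned release does not explain:

TypeScript
// Simplified system "signature": workflow name -> executions per hour
type Signature = Record<string, number>;

// Flag workflows whose behavior drifts between two signatures,
// ignoring workflows the upcoming release is expected to change
function unexplainedDrift(
  prod: Signature,
  test: Signature,
  expectedChanges: Set<string>,
  tolerance = 0.25
): string[] {
  const workflows = new Set([...Object.keys(prod), ...Object.keys(test)]);
  const drifted: string[] = [];
  for (const wf of workflows) {
    const p = prod[wf] ?? 0;
    const t = test[wf] ?? 0;
    const delta = p === 0 ? (t > 0 ? 1 : 0) : Math.abs(t - p) / p;
    if (delta > tolerance && !expectedChanges.has(wf)) {
      drifted.push(wf); // difference not explained by the release: investigate
    }
  }
  return drifted;
}

// Example: only "recommendations" is expected to change in this release
console.log(
  unexplainedDrift(
    { checkout: 900, recommendations: 120, signup: 60 },
    { checkout: 880, recommendations: 240, signup: 5 },
    new Set(["recommendations"])
  )
); // -> ["signup"]

A real implementation would compare distributions rather than single counts, but the principle is the same: every drift is either expected or a finding.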
Observability for Test Insights: Looking Right

In the previous section, we looked at using observability by looking left, or backward, ensuring we have kept everything intact. Similarly, we can use observability to help us predict the success of the features we deliver. Think about a new feature you are developing. During the test cycles, we see how this new feature changes the workflows, which shows up in our observability solution. We can see the new features being used and other features changing in usage as a result. The signature of our application has changed when we consider the logs, traces, and metrics of our system in test. Once we go live, we predict that the signature of the production system will change in a very similar way. If that happens, we will be happy. But what if the signature of the production system does not change as predicted?

Let's take an example: We created a new feature that leverages information from previous bookings to better serve our customers by allocating similar seats and menu options. During testing, we exercised the new feature with our test data set, and we saw an increase in accesses to the bookings database while the customer booking was being collated. Once we go live, we realize that the workflows are not utilizing the customer booking database, and we leverage the information from our observability tooling to investigate. We have found a case where the users are not using our new feature, or are not using it in the expected way. In either case, this information allows us to investigate further to see whether more change management is required for the users or whether our feature is just not solving the problem in the way we wanted it to.

Another way to use observability is to evaluate the performance of your changes in test and their impact on the system signature; comparing this afterwards with the production system signature can give valuable insights and prevent overall performance degradation. Our testing efforts (and the associated predictions) have now become a valuable tool for the business to evaluate the success of a feature, which elevates testing to a business tool and a real value investment.

Figure 3: Using observability in test by looking left and looking right

Conclusion

While the popularity of observability is a somewhat recent development, it is exciting to see what benefits it can bring to testing. It will create objectiveness for defining testing efforts and results by evaluating them against the actual system behavior in production. It also provides value to the developer, tester, and business communities, which makes it a valuable tool for breaking down barriers. Using the same practices and tools across communities drives a common culture; after all, culture is nothing but repeated behaviors.

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

By Mirco Hering
Automated Testing: The Missing Piece of Your CI/CD Puzzle

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

DevOps and CI/CD pipelines help scale application delivery drastically; some organizations report over 208 times more frequent code deployments. However, with such frequent deployments, the stability and reliability of the software releases often become a challenge. This is where automated testing comes into play. Automated testing acts as a cornerstone in supporting efficient CI/CD workflows. It helps organizations accelerate applications into production and optimize resource efficiency by following a fundamental growth principle: build fast, fail fast. This article will cover the importance of automated testing, some key adoption techniques, and best practices for automated testing.

The Importance of Automated Testing in CI/CD

Manual tests are prone to human errors such as incorrect inputs, misclicks, etc. They often do not cover as broad a range of scenarios and edge cases as automated testing. These limitations make automated testing very important to the CI/CD pipeline. Automated testing directly helps the CI/CD pipeline through faster feedback cycles to developers, testing in various environments simultaneously, and more. Let's look at the specific ways in which it adds value to the CI/CD pipeline.

Validate Quality of Releases

Releasing a new feature is difficult and often very time-consuming. Automated testing helps maintain the quality of software releases, even on a tight delivery timeline. For example, automated smoke tests ensure new features work as expected. Similarly, automated regression tests check that the new release does not break any existing functionality. Therefore, development teams can have confidence in the release's reliability, quality, and performance with automated tests in the CI/CD pipeline. This is especially useful in organizations with multiple daily deployments or an extensive microservices architecture.

Identify Bugs Early

Another major advantage of automated testing in CI/CD is its ability to identify bugs early in the development cycle. Shifting testing activities earlier in the process (i.e., shift-left testing) can detect and resolve potential issues during the non-development phases. For example, instead of deploying a unit of code to a testing server and waiting for testers to find the bugs, you can add many unit tests to the test suite. This will allow developers to identify and fix issues on their local systems, such as data handling or compatibility with third-party services, in the proof of concept (PoC) phase.

Figure 1: Shift-left testing technique

Faster Time to Market

Automated testing can help reduce IT costs and ensure faster time to market, giving companies a competitive edge. With automated testing, the developer receives rapid feedback instantly. Thus, organizations can catch defects early in the development cycle and reduce the inherent cost of fixing them.

Ease of Handling Changes

Minor changes and updates are common as software development progresses. For example, there could be urgent changes based on customer feedback on a feature, an issue in a dependency package, etc. With automated tests in place, developers receive quick feedback on all their code changes. All changes can be validated quickly, making sure that new functionalities do not introduce unintended consequences or regressions.
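As a minimal sketch of that kind of guard rail (assuming Jest, with a hypothetical applyDiscount module; neither is prescribed by the article), a suite can pin down existing behavior while a new test covers the change:

TypeScript
import { describe, expect, it } from "@jest/globals";
// Hypothetical module under test
import { applyDiscount } from "./pricing";

describe("pricing", () => {
  // Regression check: existing behavior must survive the new release
  it("keeps the existing flat-rate discount behavior", () => {
    expect(applyDiscount(100, "FLAT10")).toBe(90);
  });

  // New functionality added in this release
  it("supports the new percentage discount codes", () => {
    expect(applyDiscount(100, "PCT15")).toBe(85);
  });
});

Run on every commit, a suite like this turns "did we break anything?" into a fast, automatic answer.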
Promote Collaboration Across Teams

Automated testing promotes collaboration among development, testing, and operations teams through DevTestOps. The DevTestOps approach involves ongoing testing, integration, and deployment. As you see in Figure 2, the software is tested throughout the development cycle to proactively reduce the number of bugs and inefficiencies at later stages. Using automated testing allows teams to be on the same page regarding the expected output. Teams can communicate and align their understanding of the software requirements and expected behavior with a shared set of automated tests.

Figure 2: DevTestOps approach

Maintain Software Consistency

Automated testing also contributes to maintaining consistency and agility throughout the CI/CD pipeline. Teams can confirm that software behaves consistently by generating and comparing multiple test results across different environments and configurations. This consistency is essential in achieving predictable outcomes and avoiding deployment issues.

Adoption Techniques

Adopting automated testing in a CI/CD pipeline requires a systematic approach to adding automated tests at each stage of the development and deployment processes. Let's look at some techniques that developers, testers, and DevOps engineers can follow to make the entire process seamless.

Figure 3: Automated testing techniques in the CI/CD process

Version Control for Test Data

Using version control for your test assets helps synchronize tests with code changes, enabling collaboration among developers, testers, and other stakeholders. Organizations can effectively manage test scripts, test data, and other testing artifacts with a version control system, such as Git, for test assets. For example, a team can use centralized repositories to keep all test data in sync instead of manually sharing Java test cases between different teams. Using version control for your test data also allows for quick database backups if anything goes wrong during testing. Test data management involves strategies for handling test data, such as data seeding, database snapshots, or test data generation. Managing test data effectively ensures automated tests are performed with various scenarios and edge cases.

Test-Driven Development

Test-driven development (TDD) is an output-driven development approach where tests are written before the actual code, which guides the development process. As developers commit code changes, the CI/CD system automatically triggers the test suite to check that the changes adhere to the predefined requirements. This integration facilitates continuous testing and allows developers to get instant feedback on the quality of their code changes. TDD also encourages the continuous expansion of the automated test suite and, hence, greater test coverage.
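For illustration, a minimal TDD step might look like this (a sketch using Jest; the parseDuration function is a hypothetical example): the test below is written first, fails because the function does not exist yet, and then drives the implementation:

TypeScript
import { expect, test } from "@jest/globals";
// Step 1 (red): this import fails until the module is written
import { parseDuration } from "./duration";

test("parses '1h30m' into seconds", () => {
  expect(parseDuration("1h30m")).toBe(5400);
});

// Step 2 (green): the simplest implementation that passes, in ./duration.ts:
// export function parseDuration(input: string): number {
//   const match = input.match(/^(?:(\d+)h)?(?:(\d+)m)?$/);
//   if (!match) throw new Error(`Unparseable duration: ${input}`);
//   return Number(match[1] ?? 0) * 3600 + Number(match[2] ?? 0) * 60;
// }

The refactor step then cleans up the implementation while the committed test keeps guarding the behavior in CI.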
Implement Continuous Testing

By implementing continuous testing, automated tests can be triggered when code is changed, a pull request (PR) is created, a build is generated, or before a PR is merged within the CI/CD pipeline. This approach helps reduce the risk of regression issues and ensures that software is always in a releasable state. With continuous testing integration, automated tests are seamlessly integrated into the development and release process, providing higher test coverage and early verification of non-functional requirements.

Use Industry-Standard Test Automation Frameworks

Test automation frameworks are crucial to managing test cases, generating comprehensive reports, and seamlessly integrating with CI/CD tools. These frameworks provide a structured approach to organizing test scripts, reducing redundancy, and improving maintainability. Test automation frameworks offer built-in features for test case management, data-driven testing, and modular test design, which empower development teams to streamline their testing efforts. Example open-source test automation frameworks include, but are not limited to, SpecFlow and Maven.

Low-Code Test Automation Frameworks

Low-code test automation platforms allow testers to create automated tests with minimal coding by using visual interfaces and pre-built components. These platforms enable faster test script creation and maintenance, making test automation more accessible to non-technical team members. A few popular open-source, low-code test automation tools include:

- Robot Framework
- Taurus

Best Practices for Automated Testing

As your automated test suite and test coverage grow, it's important to manage your test data and methods efficiently. Let's look at some battle-tested best practices to make your automated testing integration journey simpler.

Parallel vs. Isolated Testing

When implementing automated testing in CI/CD, deciding whether to execute tests in isolation or in parallel is important. Isolated tests run independently and are ideal for unit tests, while parallel execution is great for higher-level tests such as integration and end-to-end tests. Prioritize tests based on their criticality and the time required for execution. To optimize testing time and accelerate feedback, consider parallelizing test execution. Developers can also significantly reduce the overall test execution time by running multiple tests simultaneously across different environments or devices. However, make sure to double-check that the infrastructure and test environment can handle the increased load to avoid any resource constraints that may impact test accuracy.

DECISION MATRIX FOR ISOLATED vs. PARALLEL TESTING

Factor | Isolated Tests | Parallel Tests
Test execution time | Slower execution time | Faster execution time
Test dependencies | Minimal dependencies | Complex dependencies
Resources | Limited resources | Abundant resources
Environment capacity | Limited capacity | High capacity
Number of test cases | Few test cases | Many test cases
Scalability | Scalable | Not easily scalable
Resource utilization efficiency | High | Low
Impact on CI/CD pipeline performance | Minimal | Potential bottleneck
Testing budget | Limited | Sufficient

Table 1

One-Click Migration

Consider implementing a one-click migration feature in the CI/CD pipeline to test your application under different scenarios. Below is how you can migrate automated test scripts, configurations, and test data between different environments or testing platforms:

1. Store your automated test scripts and configurations in version control.
2. Create a containerized test environment.
3. Create a build automation script to automate building the Docker image with the latest version of the test scripts and all other dependencies.
4. Configure your CI/CD tool (e.g., Jenkins, GitLab CI/CD, CircleCI) to trigger the automation script when changes are committed to the version control system.
5. Define a deployment pipeline in your CI/CD tool that uses the Docker image to deploy the automated tests to the target environment.
6. Finally, to achieve one-click migration, create a single button or command in your CI/CD tool's dashboard that initiates the deployment and execution of the automated tests.
Use Various Testing Methods

The next tip is to include various testing methods in your automated testing suite. Apart from traditional unit tests, you can incorporate smoke tests to quickly verify critical functionalities and regression tests to check that new code changes do not introduce regressions. Other testing types, such as performance testing, API testing, and security testing, can be integrated into the CI/CD pipeline to address specific quality concerns. In Table 2, see a comparison of five test types.

COMPARISON OF VARIOUS TEST TYPES

Test Type | Goal | Scope | When to Perform | Time Required | Resources Required
Smoke test | Verify if critical functionalities work after changes | Broad and shallow | After code changes (build) | Quick (minutes to a few hours) | Minimal
Sanity test | Quick check to verify if major functionalities work | Focused and narrow | After smoke test | Quick (minutes to a few hours) | Minimal
Regression test | Ensure new changes do not negatively impact existing features | Comprehensive (retests everything) | After code changes (build or deployment) | Moderate (several hours to a few days) | Moderate
Performance test | Evaluate software's responsiveness, stability, and scalability | Load, stress, and scalability tests | Toward end of development cycle or before production release | Moderate (several hours to a few days) | Moderate
Security test | Identify and address potential vulnerabilities and weaknesses | Extensive security assessments | Toward end of development cycle or before production release | Moderate to lengthy (several days to weeks) | Extensive

Table 2

According to the State of Test Automation Survey 2022, the following types of automation tests are preferred by most developers and testers because they have clear pass/fail results:

- Functional testing (66.5%)
- API testing (54.2%)
- Regression testing (50.5%)
- Smoke testing (38.2%)

Maintain Your Test Suite

Next, regularly maintain the automated test suite to keep it matched to changing requirements and the codebase. An easy way to do this is to integrate automated testing with version control systems like Git. This way, you can maintain a version history of test scripts and synchronize your tests with code changes. Additionally, make sure to document every aspect of the CI/CD pipeline, including the test suite, test cases, testing environment configurations, and the deployment process. This level of documentation helps team members access and understand the testing procedures and frameworks easily. Documentation facilitates collaboration and knowledge sharing while saving time in knowledge transfers.

Conclusion

Automated testing processes significantly reduce the time and effort required for testing. With automated testing, development teams can detect bugs early, validate changes quickly, and guarantee software quality throughout the CI/CD pipeline. In short, it helps development teams deliver quality products and truly unlock the power of CI/CD.

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

By Lipsa Das CORE
Selecting the Right Automated Tests

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

Modern software applications are complex and full of many dynamic components that generate, collect, and fetch data from other components simultaneously. If any of these components acts unexpectedly or, worse, fails, there can be a cascading effect on all other dependent components. Depending on the nature of the software, these errors or failures can result in system downtime, financial loss, infrastructure collapse, safety implications, or even loss of life. This is why we test and monitor software. Testing with the right techniques and test cases at the right stages in the software lifecycle increases the chances of catching problems early, before users do.

When and Where to Test

Generally, tests occur in the "testing" stage of the software development lifecycle (SDLC). However, for certain types of tests, this is not the case, and when you implement and run each test type can vary. Before we get into selecting the right test, let's quickly review when and where to use different types of tests.

THE COMMON TYPES OF TESTS

Test Type | What It Identifies | SDLC Stage | Implementation Options
Unit | Unexpected or missing function input and output | Development, testing | Defined in code, typically with language libraries
API and integration | Integrations with third-party services | Development, deployment, testing | Defined in code, typically with language and other libraries needed for the integration
UI | Functional interactions with the user interfaces | Testing | Specialized testing frameworks
Security | Vulnerabilities and attack vectors | Development, testing, deployment, maintenance | Specialized testing frameworks
Performance | Key application metrics | Deployment, maintenance | Metric-dependent tools
Smoke | If an application still functions after a build | Testing, deployment | Specialized testing frameworks
Regression | If new code breaks old code | Testing, deployment | Specialized testing frameworks

How To Choose Tests

As with many technical projects, reading a list of recommendations and best practices is only the beginning, and it can be difficult to decide which of those apply to your use case. The best way is to introduce an example and show the reasoning behind how to decide a strategy based on that use case. It won't match any other use case exactly but can help you understand the thought process.

Example Application

I have a side project I am slowly trying to build into a full application. It's a to-do aggregator that pulls tasks assigned to me from a variety of external services and combines them into one easier-to-view list. It uses the APIs of each of these services to fetch assigned task data. Users can sort and filter the list and click on list items to see more details about the task. The application is written in TypeScript and React and uses Material UI. Additionally, there are mobile and desktop versions created with React Native.

Essential Tests

Unless you have a good reason not to include them, this section covers tests that are essential in an application test suite.

Figure 1: Essential tests in the SDLC

Unit Tests

Essential for almost any application and possible to create as you build code, any application that has more than one functional component needs unit tests. The example application has one component that takes the data returned from the APIs and converts it to React objects ready for rendering in the UI. Some examples of unit tests in this example could be:

- Determining whether there are objects to render
- Checking if the objects have essential data items to render (for example, the title)
- Determining if the UI is ready for objects to be rendered to it

As the application uses TypeScript, there are many options available for writing unit tests, including Jest, Mocha, and Jasmine. They all have advantages and disadvantages in the ways they work, with no real "right answer" as to which is best. Jest is possibly the most popular at the moment and was created by Facebook to unit test React. The example application is based on React, so perfect!
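As a rough sketch of the first two checks (assuming Jest, with a hypothetical toTaskItems transform standing in for the aggregator's conversion component), such unit tests might look like this:

TypeScript
import { describe, expect, it } from "@jest/globals";
// Hypothetical transform: raw API payloads -> renderable task items
import { toTaskItems } from "./toTaskItems";

describe("toTaskItems", () => {
  it("returns objects to render for a non-empty API payload", () => {
    const items = toTaskItems([{ id: "1", title: "Review PR", source: "github" }]);
    expect(items.length).toBeGreaterThan(0);
  });

  it("keeps the essential fields needed for rendering", () => {
    const [item] = toTaskItems([{ id: "1", title: "Review PR", source: "github" }]);
    expect(item.title).toBe("Review PR");
  });
});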
API and Integration Tests

The example application relies heavily on multiple APIs that have multiple points of failure, with the potential to render the application unusable if handled poorly. API and integration tests are not quite the same as each other. While API testing tests only API interactions, integration testing could include API tests but also other third-party integrations, such as external components. As the example application's only third-party integrations are APIs, we can consider them the same. Some examples of API errors to test for could be:

- Expired credentials or changes to authentication methods
- A call that returns no data
- A call that returns unexpected data
- Rate limiting on an API call

API tests typically happen externally to the application code, in an external tool, or in a CI pipeline. Open-source options include writing your own tests that call the API endpoints, SoapUI (from the same people that define the API spec standard), Pact, and Dredd. Personally, I tend to use Dredd for CI tests, but there is no obvious choice with API testing.

UI Tests

If an application has a visual front end, that front end needs automated tests. These tests typically simulate interactions with the interface to check that they work as intended. The UI for the example application is simple but essential for user experience, so some example tests could include checking whether:

- Scrolling the list of aggregated tasks works
- Selecting a task from the list opens more details

Tools for automated UI testing are typically run manually or as part of a CI process. Fortunately, there are a lot of mature options available, some of which run independently from the programming language and others as part of it. If your application is web-based, then generally, these tools use a "headless browser" to run tests in an invisible browser. If the project is a native application of some flavor, then UI testing options will vary. The example project is primarily web-based, so I will only mention those options, though there are certainly more available:

- Selenium – a long-running tool for UI testing that is well supported
- Puppeteer – a mature UI testing tool designed for Node.js-based projects

For the example application, I would select a tool that is well suited to TypeScript and React, and where tests are tightly coupled to the underlying UI components.
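To show the headless-browser approach, here is a minimal sketch using Puppeteer; the URL and selectors are hypothetical placeholders for the example app, not taken from its code:

TypeScript
import puppeteer from "puppeteer";

async function taskDetailsOpenOnClick(): Promise<void> {
  // Launch an invisible (headless) browser instance
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto("http://localhost:3000"); // hypothetical dev URL

    // Simulate the user selecting the first task in the aggregated list
    await page.click(".task-list-item"); // hypothetical selector
    await page.waitForSelector(".task-details"); // details panel should appear

    console.log("UI test passed: task details open on click");
  } finally {
    await browser.close();
  }
}

taskDetailsOpenOnClick().catch((err) => {
  console.error("UI test failed:", err);
  process.exit(1);
});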
Optional Tests

This section deals with test types to consider if and when you have resources available. They will help improve the stability and overall user experience of your applications.

Figure 2: Optional tests in the SDLC

Security

Security is a more pressing issue for applications than ever. You need to check for potentially vulnerable code during development, as well as address the increasing problem of introducing vulnerabilities through package dependencies. Aside from testing, generating and maintaining lists of external packages for software supply chain reasons is a rapidly growing need, with possible regulatory requirements coming soon. Some examples of vulnerability issues to test for could be:

- Storing API credentials in plain text
- Sending API credentials unencrypted
- Using vulnerable packages

There are two groups of tools for testing these requirements. Some handle scanning for vulnerabilities in both your code and external code, while others handle one of those roles. Vulnerability scanning is a new growth business for many SaaS companies, but some popular open-source and/or free options include, but are not limited to, GitHub, Falco, and Trivy. These tools are programming-language independent, and your decision should be based on the infrastructure you use behind the application. The example application runs on a user's device locally, so the best time to run a vulnerability checker would be in CI and CD during the build process.

Performance Tests

There is little point in creating a finely crafted application without any kind of monitoring of how well it performs in the hands of users. Unlike most of the other test types on the list, which typically run at distinct phases in the SDLC, performance testing generally happens continuously. Some tools let you mock production usage with simulated load testing, and this section includes some of those, but they are still not the same as real users. Possible issues to monitor are:

- Speed of populating task results
- Critical errors, for example, API changes between builds
- Slow UI responses

As performance monitoring often needs a centralized service to collate and analyze application data, these tools tend to be commercial services. However, there are some open-source or free options, including k6 (for mocking), sending React <Profiler> data into something like Grafana, and Lighthouse CI.

Smoke Tests

A lot of other testing methods test individual functionality or components, but not the paths through how these fit together and how users use an application. Smoke tests typically use a quality assurance (QA) environment to check that key functionality works in new builds before progressing to further tests. Smoke tests can be undertaken manually by a QA team or run with automated tools. The tool options depend on what it is you want to test, so many of the other tools featured in this article can probably help. For the example application, a smoke test would check that the list of aggregated tasks is generated.

Regression Tests

Regression testing isn't a set of tools but a best-practice way of grouping other tests to ensure that new features don't have an adverse effect on an application. For example, a new release adds the ability to change the status of tasks aggregated in the application, sending the status back to the source task. The following tests would work together to ensure that introducing this new feature hasn't negatively affected the existing functionality, which was only to view aggregated tasks. Some examples of regression test grouping are the following:

- Unit tests for the new feature
- API tests for updating items on the relevant service API
- Security tests to ensure that calling the new APIs doesn't reveal any sensitive information
- Performance tests for the new feature, and a check that the new feature doesn't affect reloading the task list

Conclusion

This article covered the many different types of tests an application can implement and the kinds of issues they can prevent.
All of these issues have the potential to hinder user experience, expose users to security problems, and cause users not to want to use your application or service. As you add new features or significantly change existing features, you should write relevant tests and run them as frequently as is convenient. In many modern SDLC processes, tests typically run whenever developers check code into version control, which you should also do frequently.

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

By Chris Ward CORE
Automated Testing Lifecycle

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

As per the reports of Global Market Insight, the automation testing market size surpassed $20 billion (USD) in 2022 and is projected to witness over 15% CAGR from 2023 to 2032. This can be attributed to the willingness of organizations to use sophisticated test automation techniques as part of the quality assurance operations (QAOps) process. By reducing the time required to automate functionalities, test automation accelerates the commercialization of software solutions. It also offers quick bug extermination and post-deployment debugging, and it helps preserve the integrity of the software through early notifications of unforeseen changes.

Figure 1: QAOps cycle

What Is the Automated Testing Lifecycle?

The automated testing lifecycle is a multi-stage process that covers documentation, planning, strategy, and design. The cycle also involves developing the use cases using technology and deploying them to an isolated system that can run on specific events or based on a schedule.

Phases of the Automated Testing Lifecycle

There are six phases of the automated testing lifecycle:

1. Determining the scope of automation
2. Architecting the approach for test automation (tools, libraries, delivery, version control, CI, other integrations)
3. Setting the right test plan, test strategy, and test design
4. Automation environment setup
5. Test script development and execution
6. Analysis and generation of test reports

Figure 2: Automated testing lifecycle

Architecture

Architecture is an important part of the automation lifecycle that leads to defining the strategy required to start automation. In this phase of the lifecycle, the people involved need to have a clear understanding of the workflows, executions, and required integrations with the framework.

Tools of the Trade

In today's automation trends, the new buzzword is "codeless automation," which helps accelerate test execution. There are a few open-source libraries as well, such as Playwright, which offer codeless automation features like codegen.

Developing a Framework

When collaborating in a team, a structured design technique is required. This helps create better code quality and reusability. If the framework is intended to deliver the automation of a web application, then the team of automation testers needs to follow a specific design pattern for writing the code.

Execution of Tests in Docker

One important factor in today's software test automation is that the code needs to run on Docker in isolation every time the tests are executed. There are a couple of advantages to using Docker. It helps set up the entire testing environment from scratch, removing flaky situations. Running automation tests in containers can also eliminate any browser instances that might have been suspended because of test failures. Also, many CI tools support Docker through plugins, so running test builds by spinning up a Docker instance each time can be done easily.

Continuous Testing Through CI

When it comes to testing in the QAOps process, CI plays an important role in the software release process. CI is a multi-stage process that runs hand in hand with commits made to a version control system to better diagnose the quality and the stability of a software application ready for deployment. Thus, CI provides an important aspect in today's era of software testing.
It helps recover integration bugs, detect them as early as possible, and keep track of the application's stability over a period of time. Setting up a CI process can be achieved through tools like Jenkins and CircleCI.

Determining the Scope of Test Automation

Defining the feasibility of automation is the first step of the automation lifecycle. This defines the scope and automates the required functionality.

Test Case Management

Test case management is a technique to prioritize or select the broader scenarios from a group of test cases for automation that could cover a feature/module or a service as a functionality. In order to ensure top product quality, it is important that the complexity of test case management can scale to meet application complexity and the number of test cases.

The Right Test Plan, Test Strategy, and Test Design

Selecting a test automation framework is the first step in the test strategy phase of an automated testing lifecycle, and it depends on a thorough understanding of the product. In the test planning phase, the testing team decides the:

- Test procedure creation, standards, and guidelines
- Hardware
- Software and network to support a test environment
- Preliminary test schedule
- Test data requirements
- Defect tracking procedure and the associated tracking tool

Automation Environment Setup

The build script to set up the automation environment can be initiated using a GitHub webhook. The GitHub webhook can be used to trigger an event in the CI pipeline that would run the build scripts and the test execution script. The build script can be executed in the CI pipeline using Docker Compose and Docker scripts.

docker-compose.yml:

version: "3.3"
services:
  test:
    build: ./
    environment:
      slack_hook: ${slack_hook}
      s3_bucket: ${s3_bucket}
      aws_access_key_id: ${aws_access_key_id}
      aws_secret_access_key: ${aws_secret_access_key}
      aws_region: ${aws_region}
    command: ./execute.sh --regression

Dockerfile

FROM ubuntu:20.04
ENV DEBIAN_FRONTEND noninteractive

# Install updates to base image
RUN apt-get -y update && \
    apt-get -y install --no-install-recommends tzdata && \
    rm -rf /var/lib/apt/lists/*

# Install required packages
ENV TZ=Australia/Melbourne
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN dpkg-reconfigure --frontend noninteractive tzdata
RUN apt-get -y update && \
    apt-get install -y --no-install-recommends software-properties-common \
    apt-utils curl wget unzip libxss1 libappindicator1 libindicator7 \
    libasound2 libgconf-2-4 libnspr4 libnss3 libpango1.0-0 \
    fonts-liberation xdg-utils gpg-agent git && \
    rm -rf /var/lib/apt/lists/*
RUN add-apt-repository ppa:deadsnakes/ppa

# Install chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update \
    && apt-get install -y --no-install-recommends google-chrome-stable \
    && rm -rf /var/lib/apt/lists/*

# Install firefox
RUN apt-get install -y --no-install-recommends firefox

# Install python version 3.0+
RUN add-apt-repository universe
RUN apt-get -y update && \
    apt-get install -y --no-install-recommends python3.8 python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN mkdir app && mkdir drivers

# Copy drivers directory and app module to the machine
COPY app/requirements.txt /app/

# Upgrade pip and install dependencies
RUN pip3 install --upgrade pip -r /app/requirements.txt
COPY app /app
COPY drivers /drivers

# Execute test
ADD execute.sh .
RUN chmod +x execute.sh
ENTRYPOINT ["/bin/bash"]

Seeding Test Data in the Database

Seed data can be populated for a particular model, or the seeding can be done using a migration script or a database dump. For example, Django has a single-line loader function that helps seed data from a YML file. The script to seed the database can be written in a bash script and executed once every time a container is created. Take the following code blocks as examples.

entrypoint.sh:

#!/bin/bash
set -e
python manage.py loaddata maps/fixtures/country_data.yaml
exec "$@"

Dockerfile

FROM python:3.7-slim
RUN apt-get update && apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY entrypoint.sh /app/
COPY . /app/
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]

Setting Up the Workflow Using Pipeline as Code

Nowadays, it is easy to run builds and execute Docker from CI using Docker plugins. The best way to set up the workflow from CI is by using pipeline as code. A pipeline-as-code file specifies the actions and stages for a CI pipeline to perform. Because every organization uses a version control system, changes in pipeline code can be tested in branches for the corresponding changes in the application to be deployed. The following code block is an example of pipeline as code.

config.yml:

steps:
  - label: ":docker: automation pipeline"
    env:
      VERSION: "$BUILD_ID"
    timeout_in_minutes: 60
    plugins:
      - docker-compose#v3.7.0:
          run: test
    retry:
      automatic:
        - exit_status: "*"
          limit: 1

Checklist for Test Environment Setup

- Test data
- List of all the systems, modules, and applications to test
- Application under test access and valid credentials
- An isolated database server for the staging environment
- Tests across multiple browsers
- All documentation and guidelines required for setting up the environment and workflows
- Tool licenses, if required
- Automation framework implementation

Development and Execution of Automated Tests

To ensure test scripts run accordingly, the development of test scripts based on the test cases requires focusing on:

- Selection of the test cases
- Creating reusable functions
- Structured and easy scripts for increased code readability
- Peer reviews to check for code quality
- Use of reporting tools/libraries/dashboards

Execution of Automated Tests in CI

Figure 3 is a basic workflow that shows how a scalable automation process can work. In my experience, the very basic need for running a scalable automation script in the CI pipeline is met by using a trigger that sets up the test dependencies within Docker and executes tests accordingly, based on the need.

Figure 3: Bird's eye view of automation process

For example, a test pipeline may run a regression script, whereas another pipeline may run the API scripts. These cases can be handled from a single script that acts as the trigger to the test scripts.
execute.sh:

#!/bin/bash
set -eu

# Create the csv_reports, logs, html_reports, and screenshots directories if absent
mkdir -p app/csv_reports app/logs
mkdir -p app/html_reports/screenshots

# Validate that an argument was passed
if [ $# -eq 0 ]; then
    echo "No option is passed as argument";
fi

# Parse the command-line argument to run tests accordingly
for i in "$@"; do
    case $i in
        --regression)
            pytest -p no:randomly app/test/ -m regression --browser firefox --headless true \
                --html=app/html_reports/"$(date '+%F_%H:%M:%S')_regression".html \
                --log-file app/logs/"$(date '+%F_%H:%M:%S')".log
            break
            ;;
        --smoke)
            pytest app/test -m smoke
            break
            ;;
        --sanity)
            pytest app/test -m sanity --browser chrome --headless true \
                --html=app/html_reports/sanity_reports.html \
                --log-file app/logs/"$(date '+%F_%H:%M:%S')".log
            break
            ;;
        --apitest)
            npm run apitest
            break
            ;;
        --debug)
            pytest app/test -m debug --browser chrome --headless true \
                --html=app/html_reports/report.html \
                --log-file app/logs/"$(date '+%F_%H:%M:%S')".log
            break
            ;;
        *)
            echo "Option not available"
            ;;
    esac
done

test_exit_status=$?
exit $test_exit_status

Analysis of Test Reports

By analyzing test reports, testing teams can determine whether additional testing is needed, whether the scripts can accurately identify errors, and how well the application under test withstands challenges. Reports can be represented either as static HTML or as a dynamic dashboard. Dashboards help stakeholders understand trends in test execution by comparing current results with past runs. For example, Allure reporting builds a concise dashboard of test outcomes from data collected during test execution.

Conclusion

The automated testing lifecycle is a curated process that helps testing teams meet specific goals within appropriate timelines. Furthermore, it is very important for the QAOps process to integrate properly with the SDLC and rapid application development. Completed correctly, the six phases of the lifecycle lead to better outcomes and delivery.

Additional Reading:

  • Cloud-Based Automated Testing Essentials Refcard by Justin Albano
  • "Introduction to App Automation for Better Productivity and Scaling" by Soumyajit Basu

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report

By Soumyajit Basu CORE
