Continuous Testing Live: Automate Everything in DevOps? Not So Fast

Ingo Philipp and Adam "The Automator" Bertram talk about the rise of test automation and DevOps in modern software development.

"If testers are curious enough and they get in there and poke around and not just follow, in this case, the software testing test case, and they see connections, see different scenarios and things that have been come up with, that helps them know the product more, which makes them much better testers, too." -Adam Bertram

On this week's episode of Continuous Testing Live, Ingo Philipp and Adam "The Automator" Bertram share their thoughts on the increasing presence of test automation and DevOps in modern software delivery lifecycles. While there are certainly cries to "automate everything" to get the most ROI, there's one software testing practice that shouldn't be automated or ignored.

Subscribe today to Continuous Testing Live so you never miss a single episode! Now available on iTunes, Google Play, and SoundCloud.

Noel: I wanted to kick this conversation off by filling in the audience on how it came to be. I thought it was a cool story as to how we all got introduced to each other. Ingo was going to be presenting a webinar of ours about manual testing's role in DevOps. Then Adam Bertram, who is maybe more often known as "Adam the Automator," saw a tweet Tricentis sent out about this webinar and raised the point: "Why in the world would there still be any need for manual testing in DevOps when the goal of DevOps is to automate everything?"

I immediately tried to figure out where Adam was coming from, and I started to wonder if it was a case of different teams, like testers and developers, working in silos and having so much work to do that you don't always know what another department is working on, or maybe even what they call something. The confusion in this example came from the fact that we had different understandings as to what manual testing is, when it's a good idea, and when it's not. It created this whole conversation the three of us had before this podcast, where we actually didn't end up having anything to really disagree about. We all just learned where the other side was coming from and why they felt the way they did.

I'd love to start with you, Adam: where, at least before this conversation, did you place manual testing, and what did you think of as being included in manual tests?

Adam: I've always said the M word is the bad word. I call it the M word because I'm so focused on automation. The first thing I saw there, I saw manual testing like, "Oh no, no, no, no." My gut reaction is, "No, no, no, no, we don't want that." I originally just balked at that because it's like, "Well everything can be automated."

My initial thought of manual testing was the testing that I have seen. I'm an automation engineer. I'm on more of the development team, and I've worked with QA and QE teams and lots of testers over the years. My picture of manual testing was: a development team ships a product, they deploy to a test environment, and then QA goes in and manually brings up a browser, clicks this, clicks this, moves a mouse around, clicks this, notices that this is doing what it was supposed to do, moves the mouse around again, does this. It just takes forever and a day because they have to click each UI element and manually go through it. A human has to be physically present, clicking and typing things and filling in forms to actually make that work.

Through my experience, I know I'm not a software testing guru by any means. I'm on the automation side, but I've seen people use tools like Selenium and even scripts and things to automatically fill in forms, grab input, automatically put in service tickets and things like that to give feedback to the development team during that. So, my first impression of manual testing was, "Oh, no, no, no. There's no way. I do not want anybody ... There's no reason for anybody to go in and just manually start clicking around the mouse and typing in fields and stuff."
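
For readers who haven't seen this style of automation, here is a minimal sketch of the kind of scripted UI check Adam describes, written with Selenium in Python. The URL, element IDs, credentials, and expected banner text are all hypothetical placeholders rather than anything from a real project.

```python
# A minimal sketch of a scripted UI check: the script, not a human, fills the form
# and asserts the outcome. Requires a local Chrome installation; all identifiers
# below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")  # hypothetical test-environment URL
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()

    # Instead of a person watching the screen, the script checks the result.
    banner = driver.find_element(By.ID, "welcome-banner").text
    assert "Welcome" in banner, f"Unexpected banner text: {banner!r}"
finally:
    driver.quit()
```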

I've seen some people even create a Word document of, "Okay, I did this, then I did this," like a journal thing about, "Oh that's definitely not going to work." That was my initial take on what you meant by manual testing.

Ingo: Absolutely. I just want to highlight the way this conversation started. You just scratched the surface, Noel. For me, it's a funny fact, because when we were tweeting about the role of manual testing in DevOps, when we were promoting our webinar, then Adam, you immediately replied, "This is so wrong on so many levels," right?

Adam: Yeah, I was very adamant about it.

Ingo: The good thing about this is that you didn't just state this without any further reasoning. You really provided arguments. In that case, your ... How should I say? Your skepticism definitely goes hand-in-hand with rationality. I think that's crucial since the discussion we had afterward was then really based on healthy doubts, on thoughtful inquiries I would say. It wasn't based on strict denial, and that just said one thing to me. Your skepticism is definitely strong, Adam, but your curiosity is even stronger. That's the reason why I'm so happy to have you on the show.

Adam: Yeah, that's a really good point. One thing I really hate is for people to say the popular quote, "It's the way things have always been done." They deny any of the other ways. That just stunts growth and stunts your ability to learn more. I hate that approach. You're right. I'm very passionate and adamant about things I feel strongly about, but I never really thought about it that way. You're right; curiosity really trumps everything in my book.

Ingo: Absolutely. To me, you expressed two ingredients a great tester should have, and that was skepticism on the one hand and curiosity on the other hand.

That is what, for me, makes a great tester, what makes a tester hard to fool at the end of the day. The ability to doubt and the ability to ask: these two attributes must be, I should say, a fundamental part of every tester's soul. Because when you doubt and ask, it gets a little harder to just believe. You really brought that up, and for that, I just want to say thank you.

Adam: Good, no problem. Something else I know would be beneficial to a tester: you should never assume anything. That's another thing I hate. "Well, I assumed that this functionality was working," or "I assumed that..." As a good tester, you can't assume anything. You have to just go through there and verify it yourself, the whole "trust but verify" statement. "I trust you, but I'm going to verify what you're saying is actually true."

Ingo: Yeah, absolutely, absolutely. Can't agree more with that.

Noel: Deciding which manual testing should stay a manual process and which testing not only can be automated, but should be automated. Adam, I can see your point of view on what things you would be looking to automate, like some of the things you just described there. Then Ingo, as testers are looking at what needs to be automated and what should stay manual, how do those decisions come to be? What goes into the decision of "We've decided we're going to automate this set of tests or this suite of tests"? How do you decide to do that?

Ingo: Well, in our projects, these decisions get made based on risk. For example, when a new user story gets into the game, we start modeling these bits of functionality as user scenarios. While doing so, we ask ourselves which scenarios contribute most to the overall risk. Since risk in its most fundamental form just quantifies the potential of losing something of value, we try to identify those scenarios that have the highest potential of losing something of value, for example from a financial perspective.

Now, this means that we try to identify the user scenarios that will most probably be carried out most often in the production system by certain stakeholders, by certain end users. The usage frequency of these scenarios is one dimension of our risk assessment, and so it's one basis for making this decision. The other dimension is the potential damage. We don't only ask ourselves how often a certain user scenario will be processed in production, we also want to know what the potential damage would be if this user scenario didn't work. The potential damage is the second dimension of our risk assessment.

Once we have identified these user scenarios and estimated their risk contribution, we move on and create test cases to model the user scenarios that contribute most to the overall risk. Then we automate these test cases and embed them into our automated CI and CD pipeline. Of course, this decision is also based on the effort it takes to automate these test cases. What we're trying to do is find the right balance between the effort it takes to automate certain scenarios and the value, in terms of risk mitigation, that we create through test automation. That's, in rough terms, how we decide what to automate and what not to automate. Does that resonate with you, Adam?
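
To make Ingo's selection criteria concrete, here is a rough sketch, in Python, of scoring scenarios by usage frequency and potential damage and weighing that risk against automation effort. The scenarios, scales, and threshold are illustrative assumptions, not Tricentis' actual model.

```python
# Illustrative risk-based selection: risk = usage frequency x potential damage,
# then weighed against the effort it would take to automate the scenario.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    frequency: int      # how often it runs in production, 1 (rare) to 5 (constant)
    damage: int         # business impact if it fails, 1 (minor) to 5 (severe)
    effort_days: float  # estimated effort to automate

    @property
    def risk(self) -> int:
        return self.frequency * self.damage

    @property
    def value_per_effort(self) -> float:
        return self.risk / self.effort_days

scenarios = [
    Scenario("place market order", frequency=5, damage=5, effort_days=3),
    Scenario("export yearly statement", frequency=2, damage=3, effort_days=5),
    Scenario("change display theme", frequency=1, damage=1, effort_days=1),
]

# Automate the scenarios with the best risk mitigation per day of automation effort.
for s in sorted(scenarios, key=lambda s: s.value_per_effort, reverse=True):
    decision = "automate" if s.value_per_effort >= 1.0 else "keep manual / exploratory"
    print(f"{s.name:28} risk={s.risk:2}  value/effort={s.value_per_effort:4.1f} -> {decision}")
```

Running it prints the scenarios ordered by risk mitigation per day of automation effort, which mirrors the balance Ingo describes between effort and value.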

Adam: Yeah, I agree with you completely. The risk thing, that's something I've really never thought about. I decide whether to automate something from more of a general automation perspective, as part of operations and IT in general, because I'm not necessarily in the testing realm. I automate just about everything around me, and I think the best task to automate is the task that's going to be repeated most often, which, in this case, is testing. You're not going to just run a test case once. You're going to run it for every build that comes across that you have to test. I think in software testing, this is what makes automation so key, because you have a simple task as part of that test case that you have to run for every single build to really get the quality up.

Second, like you were saying, the ROI on "Is it worth it to automate something?" In the software testing realm, I think the ROI is great because you're going to have to perform that same test case for every single build, like I said. At the same time, like you said, there's going to be huge risk, because you perform that testing upfront in the test environment, and you know that it's eventually going to go into production. So that risk is going to be a lot higher.

Plus, another factor in deciding if you want to go with automation is not only the repetitiveness but, like you said, how important it is. How much time does it take for you to automate it? You take the time upfront to then save that time down the road.

Just talking about any kind of general automation, even in manufacturing, anything, it always takes a huge chunk of time upfront to then be able to save that time down the road. Does it make sense to spend those resources and that time upfront to build that kind of automation framework, knowing you're eventually going to get it done in the end? That doesn't work for some teams, because some teams are so strapped for time. They have so few people on the testing team that they, I hate to say it, just don't have time to automate. They have to keep the lights on. They have to keep the pipeline flowing, and they simply can't carve out the resources to automate.

It's a terrible situation that people get into there because they're never going to catch up.

Ingo: Yeah, absolutely. I wouldn't call it a terrible situation. I would call it a challenging situation.

Adam: There you go. You're more politically correct.

Ingo: You're right, Adam. I absolutely agree, because the time needed for testing is always infinitely larger than the time available. As you've mentioned, you will never have the time, the resources, or the budget to test everything exhaustively down to the very last detail. What you can do is always test as much as possible.

The only decision I would say we can make at the end of the day is, "How much leftover risk are we willing to accept when we release our software product?" That's what we address by applying such a "risk-based testing approach." What we do is align the risk objectives of our stakeholders with our testing, but also with our test automation activities. I think that's the big, big benefit that such a risk-based testing approach brings to test automation itself.

Adam: I like that. I like that you base what to automate on risk. If it's a really, really high risk, that's something we have to automate right now, because automation is naturally going to reduce human error, and we don't want to increase the risk by having 10 people with their hands in the cookie jar trying to do whatever they are going to do. I really like that risk-based approach.

Ingo: Yeah, absolutely. The bottom line here is that we don't just want to do the things right in terms of just doing them faster, in terms of doing more and more automation. During product testing, what we want to do is we want to do the right things right at the same time. This means we don't just want to move fast. We want to move fast in the right direction, to keep up with the rapid pace of software development. That's how we add quality to test automation. That's how we add quality to speed. In simple terms, that is how we translate quality and speed into reality during every single iteration. That's the big secret.

Adam: That makes complete sense to me. You have a really good point.

Noel: Let's talk about ROI a little bit. One of my questions has been answered really well already, but we talked about the fear of disruption, the keeping-the-lights-on kind of thing, keeping things the way that they've always been done. Sometimes there's some fear of change in that, but when looking at what to automate, you look at the risk involved. You determine that you're going to be able to automate something, but you want to know what you're going to get out of it. How do you set realistic goals for ROI once you've evaluated that you can afford the risk it's going to take to automate something? Where are some of those areas where you're going to be able to not just look for a return on investment, but measure it as well?

Adam: I can take this first. I think that automation is probably the easiest area to measure ROI because ... Let me take that back. I'm going off on a tangent here. This reminds me of something that was recently in the news about Tesla. I don't know if you guys saw that, where Elon Musk said humans are underrated and automation is overrated, or something like that, because they couldn't release their Tesla Model 3s fast enough.

Noel: Musk said they tried to "automate too much," or something like that.

Adam: Yeah, they tried to automate too much. I saw a few articles that said Tesla differed from traditional car manufacturers because they tried to automate everything that the traditional car manufacturers do, but also final assembly. The problem with that, what Elon Musk was alluding to, was they didn't seem to have the process down ahead of time before they automated. Toyota and the Japanese car manufacturers are another area I'm interested in a lot. They take the approach of, "I need to understand the process first. Manually go through and figure out where all the pieces go, figure out the whole process manually, and build an automation document or an idea of how this process is going to go, then introduce automation."

I think Tesla went the other way and said, "Well, no, no, no. We don't need all the manual stuff." They just went whole-hog right into automation without actually understanding the process. When he said they automated too much stuff, that's a fallacy to me. In my opinion, you can't automate too much stuff. You automated it too quickly; you didn't understand the process ahead of time before you actually introduced automation.

Yeah, ROI I think is the easiest to measure. The reason I went on that tangent is because they didn't have the manual process. They didn't know what it took to do it manually, so they didn't know what resources needed to be committed ahead of time to produce a Model 3. They just threw in robots all of a sudden and tried to automate a process that they didn't know how to do manually with humans. If they had done that, they would have been able to measure a metric: how long does it take for a Model 3 to go from the start of the line through final assembly?

If they had had humans, they would have said, "Well, it looks like after 100 cars, this is the average time it takes. We have five humans working 40 hours a week. Each of these humans is making 20, 30, whatever it is, dollars an hour. You add all those up and you know what it took these humans to do this." Then you get into the labor expenses, how many labor resources that takes, and all the other expenses on top. In my opinion, it's really easy to measure ROI for automation if you know how to do it manually and if you had actually tracked how long that manual process takes. Because if you say, "Well, it takes 5 humans 40 hours a week to do this, that's 200 working hours, and I pay my employees X dollars an hour, so it costs me X dollars in labor to do this," then you project that over time and say, "Well, after six months, it's going to be X dollars. I know for a fact that it's going to be this due to our current processes."

Then you can go to robots, or whatever other way you're going to introduce automation, and say, "Well, if it costs me $10,000 over six months to do this manually, and it's going to take me $100,000 in investment to automate this process upfront, then once it's done, in the best-case scenario where there are no humans involved in the process anymore, I can just about completely cut out that $10,000, and it's only going to cost me $1,000." I think it's really easy to measure ROI if you know how to do things manually and if you actually track how that manual process goes in the first place.
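
A minimal version of the payback math Adam is describing might look like the following, with figures loosely echoing his hypothetical example; the point is that the calculation only works if the manual cost was tracked first.

```python
# Rough payback calculation: illustrative figures only, not real project data.
manual_cost_per_period = 10_000     # labor cost to run the process manually, per six months
automation_upfront = 100_000        # one-time cost to build the automation
automated_cost_per_period = 1_000   # residual cost per six months once automated

savings_per_period = manual_cost_per_period - automated_cost_per_period
periods_to_break_even = automation_upfront / savings_per_period

print(f"Savings per six-month period: ${savings_per_period:,}")
print(f"Break-even after {periods_to_break_even:.1f} periods "
      f"(~{periods_to_break_even * 6:.0f} months)")
```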

Noel: That makes me think that maybe Elon should have said, "We poorly automated too much."

Adam: There you go.

Noel: And not just blaming it on a blanket automation.

Adam: Correct.

Ingo: I can give you a concrete example, because about one and a half years ago we started a project in the banking sector. It was all about securities trading. We also calculated the ROI of test automation there. I just want to quickly walk you through it, because I think it's really impressive how fast test automation can pay off. The project involved about 11 manual testers. They had to execute and maintain about 5,000 regression test cases, scattered across 11 different technologies and platforms, so quite expansive test cases. It took those testers about 10 weeks just to execute that set of test cases.

Now, after we optimized that test set based on our risk-based testing approach, and after we applied structured and methodical test design, we created a solid basis for test automation. Just by doing test automation, we were able to bring the entire regression test time down to 8 hours, and that's huge. That is what we managed to do in about three months with just 2 people, 2 manual testers: from 10 weeks of regression time down to 8 hours. When you calculate the return on investment there, you will see that the entire investment into automation already paid off after the very first regression run. That's pretty impressive, in my point of view, how fast that can be.
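
As a back-of-the-envelope check on Ingo's numbers, the head counts and durations below come from his account, while the hourly rate and weekly hours are assumptions added purely for illustration.

```python
# Back-of-the-envelope math for the project Ingo describes. Head counts and
# durations are from his account; the rate and weekly hours are assumptions.
HOURS_PER_WEEK = 40
HOURLY_RATE = 50.0  # assumed fully loaded cost per tester-hour

# Before: one full manual regression run (11 testers, 10 weeks)
manual_run_cost = 11 * 10 * HOURS_PER_WEEK * HOURLY_RATE

# Investment: 2 people spending roughly three months (~13 weeks) on automation
automation_investment = 2 * 13 * HOURS_PER_WEEK * HOURLY_RATE

# After: an 8-hour automated regression run with near-zero labor cost
print(f"Cost of one manual regression run: ${manual_run_cost:,.0f}")
print(f"One-time automation investment:    ${automation_investment:,.0f}")
print(f"Paid off after the first run: {automation_investment < manual_run_cost}")
```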

Adam: I agree. That's one thing I think about automation: it's a no-brainer. If the process is done often enough and you're able to still keep the lights on and keep doing things like you've always done, but automate at the same time, it's a no-brainer to me.

Ingo: Yeah, absolutely. It's not just that you can test your product faster. It's more about testing your product more frequently so that you have multiple feedback loops. When we automate as mentioned, we apply risk-based testing to that, and we focus on the main functionality, on the basic functionality. Usually, it turns out that with just a low number of test cases, you can already do a lot. That means you can already cover a lot of business risk. The savings you have, also the cost savings you have is just dramatic, and that is what you can achieve in a couple of days or even a couple of weeks I would say.

Noel: We talked a little bit about how we decide or how developers or how testers decide what to automate by looking at risk and looking at how hard is it going to be to get this up and running, how hard is it going to be to maintain, things like that. Adam, you had written an article for CIO.com talking about DevOps and saying that, "For a deployment pipeline, it should integrate continuous integration, continuous development, continuous testing and continuous deployment into a single entity," which sounds wonderful and also sounds really difficult.

As you're looking at things like, "What's going to save us some time? What's going to free up some hours here?", where are you then able to look at the bigger picture with a deployment pipeline like that? Let's assume someone's got none of this in place, yet they know they need it as they're trying to build something like a deployment pipeline, not just save some hours on testing or get rid of a manual task by creating a Word doc. Where do the decisions get made for automation on things like that, when you've got a goal that big of getting to what some would call real DevOps?

Adam: A goal that big, as you said, is a big undertaking, especially for an organization that doesn't have anything to begin with. I would recommend, in the first place, something that seems so obvious yet so many companies don't have. Do you know what you're doing today? Do you have any documentation? Can you hand over your current workflow to somebody else who has no idea what you're doing, and can they do it from start to finish exactly the same way you did manually?

That's definitely the very first step: understanding what you're doing today so that you can offload it to somebody else. We deal with that. In my current position, we have various workflows in place where we don't have good documentation. It depends on me, or it depends on somebody else, a specific subject matter expert, to do that.

One of our roles is that we're trying to extract all the knowledge out of people's heads and put it into some documentation, some portal or Word document or something, to hand over to somebody overseas and say, "Here, we've done all of the design and architecture and we've boiled down all this knowledge in my head about how to deploy this software. Now all you have to do is read this Word document, follow steps one, two, three, four, five, and be able to replicate it." By far and away, that is the very first step. Just like the Elon Musk example, don't try to automate right away and build this build pipeline before you understand your current process.

Then, once you understand the core process, you go to the next phase and you can start building some automation around it. We're not even talking about the pipeline yet. Obviously, I'm sure you have source control, and I'm sure that your developer checks something into a repo somewhere, and then maybe they send an email to the testers and say, "Here, can you test this thing?"

Your next step is to link the two. You understand what you're doing and the tester understands what they're doing. First, you link the two. You go from no automation to: you check it in, and your build tool automatically, maybe even as an interim step, will send that email. It's just baby steps. The build tool says, "Hey, this developer checked this piece of code in. Now can you start testing?" and sends all the parameters of what they need to do. That's the next step, and maybe then you build the automation around the test case that they're working with and have the testers execute the test manually once they get the notification.

Once they do that, you go to the next step and have the build tool automate it. At that point, you've got the code being checked in and the build being run. That's probably another step: you go from a manual build to a continuous build, and then you can go from manual testing to self-triggered automated testing, to your build tool triggering your testing. That means you go from the source check-in to the automated build to the automated test.

Then the final step is continuous deployment. At that point you're doing continuous integration. Then, if you want to, you go all the way to continuous deployment out to production, where you have the whole build pipeline automated. It's all about first understanding what you're doing, then automating each piece from the development team to QA to operations to the deployment, and getting everything going into production.

Eventually, if you keep doing that and building upon that framework, you will have a pipeline that is the holy grail for many DevOps organizations: a developer checks the code in, a build is automatically generated and goes through the whole process, the unit testing is done, the integration tests, the acceptance tests, all this testing is done. Then it gets put into production and it gets tested again. That's how high-performance organizations like Facebook and Netflix and Etsy and all these web platforms are able to release hundreds of times a day to really deliver value to customers as soon as possible.
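
The staged progression Adam describes can be pictured as a toy pipeline script; every function below is a hypothetical stand-in for whatever source-control hook, build server, test runner, and deployment tool a team actually uses.

```python
# Toy sketch of the staged pipeline: a check-in triggers a build, the build
# triggers the automated test suites, and only a green run gets deployed.
def build(commit: str) -> str:
    print(f"Building {commit} ...")
    return f"artifact-{commit}"

def run_tests(artifact: str) -> bool:
    suites = ["unit", "integration", "acceptance"]
    results = {suite: True for suite in suites}  # stand-in for real test runs
    print(f"Test results for {artifact}: {results}")
    return all(results.values())

def deploy(artifact: str, environment: str) -> None:
    print(f"Deploying {artifact} to {environment}")

def on_commit(commit: str) -> None:
    """Entry point a source-control webhook would call on every check-in."""
    artifact = build(commit)            # continuous integration
    if run_tests(artifact):             # continuous testing
        deploy(artifact, "production")  # continuous deployment
    else:
        print("Tests failed; feedback goes straight back to the developer.")

on_commit("abc123")
```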

Ingo: Exactly, Adam. One common mistake we see when engineering teams go for continuous delivery is that they fail to treat architecture as an engineering practice. I would really like to know how you see that, because when you put an architecture in place, that architecture will obviously change as your product evolves and as your services evolve. The architecture will also change as your business plan changes. It's important, at least from my perspective, that architects are always involved in your continuous delivery process.

You won't get it right the first time. This is just the nature of those complex systems, as you have mentioned, so the continuous delivery pipeline is useful because it allows you to validate your architecture early on by constantly running your performance tests and your availability tests. This then also allows you to make changes to your architecture as soon as possible, so that you can make sure that you build the right product, that you've built the right architecture.

In my point of view, you should validate and refine your architecture from early on, because those changes are expensive to make late, and you want to make them early. You want to make them when they are cheap to make, and this is what most companies, at least from my naïve perspective, all too often do not consider in the first place. How do you see that?

Adam: Let me explain it in a little bit of a different kind of term. I am more of a fan of starting from nothing and delivering that out to production as soon as absolutely possible. If somebody is just starting out and wants to build the pipeline, in my opinion, architecture shouldn't even be a word that you're talking about. It's all about, how do I just get something out there? Customers are asking for this feature, or we're starting with this new software.

Of course, the architecture and design component of the software development process obviously needs to happen more when you're designing a new piece of software or you need to build a huge feature. In my opinion, one of the big advantages of having a build pipeline is being able to not necessarily worry about the overall architecture of a component or the overall software system, and instead worry about just getting something out there as soon as possible.

I think with the argument around waterfall versus agile, a lot of people are going towards agile and DevOps now just because they were spending way too much time on the planning, design, and architecture discussions ahead of time, when they could just say, "Let's just get work done. Just get those features out there, get that new software out there as soon as possible, and get feedback from the community." Then with that feedback, your architecture plan, if you will, in a rough sense of the term, will evolve as it goes on. Obviously, there's going to be planning to know what you need to do, but in my opinion, that is trumped by just getting something out there as fast as possible and getting feedback on it.

Then once you have that feedback loop going, the customers provide you with the feedback and they say, "Well, I didn't like this component," and then you start your rough overall architectural diagram from there. It's the whole MVP thing, where you just put something out there as soon as possible, and you worry about the overall "how we're going to get there" as you go. It's definitely a different mindset or mentality than what I know a lot of people have, and I can't really put my finger on how you would actually teach that kind of mentality. Because a lot of people that I know of think, "I have to do everything just perfect. I have to get the whole process down and the architecture and everything designed out perfectly."

Then when they actually get it out there and execute it, they realize they designed and spent months and months and months on this great feature or this new product, and everybody in the company says, "This is great." Then as soon as they deliver it to customers, the customers say, "This is terrible. I don't want this." Whereas they could have been getting feedback as they went to constantly improve upon it.

Ingo: Yeah, I absolutely agree. I think your bottom line is that you shouldn't strive for ultimate perfection. You should rather strive for continuous improvement, and that's right from the beginning, right?

Adam: Correct. Yes. Exactly. Let's learn from the Japanese auto manufacturers.

Ingo: Exactly. This was also the lesson we learned in the last couple of years while we were building our internal pipeline at Tricentis. What we recognized is that we should mature our pipeline slowly instead of in one go. What we realized is that we need to solve our current problems, and we need to solve them step by step, not all the theoretical problems of the future.

Adam: Exactly. Don't try to future-proof the pipeline too much right off because the future is going to blow up on you when you least expect it. You can't predict the future. You have no idea what it's going to have in store for you.

Ingo: Yeah, absolutely.

Noel: Well, the last question that I had for you guys was, Adam, from another piece you had done. You wrote, "Get in on the DevOps revolution by freeing yourself from mundane server tasks and tapping your inner coder." That stood out to me as very applicable in the software testing world as well, and you can leave DevOps in there because we at Tricentis certainly view continuous testing as essential to DevOps, but instead of "inner coder," think about your inner tester. We think of automation as freeing testers from these mundane tasks that are not their passion or likely what they were even really brought in to do, and really freeing them to do the more creative, more innovative testing activities that are going to contribute a lot more value than just these manual efforts.

Adam: I can start with that. I think that one of the biggest shifts that I have seen lately in IT in general is how companies have, number one, seen the benefits of automation. Everything is being automated these days because they see the definite, the great ROI that it has. One thing that I have seen some employees do, and I've read some horror stories about this, is reject automation, not only because it totally goes against the way that they've always done things and it's not very comfortable.

They're not very receptive to change, but they also worry, "Well, if I'm automating this, I'm automating myself out of a job." That's another mindset that people really need to change, because what they don't realize is they should automate themselves out of a job. If they automate themselves out of that job, they're going to show their expertise and get promoted, or get to do more interesting things rather than, "Oh, here comes another build." Click, click, click. "I've got to do this same thing over and over again. It's boring."

I have seen some people who just do not want to automate that at all, not necessarily because they don't care about the process, the company, and what they're doing. They're legitimately concerned that, "If I automate all this stuff, I'm not going to have anything to do, and the company is not going to need me anymore." I think that is a very important thing, and I think that's one of the biggest sources of resistance to automation that I have seen out there.

Ingo: That's quite funny, because that is what I hear constantly, Adam, the sentence, "I'm automating myself out of a job." I simply don't get it. For me, there are two testing cultures out there, and we've talked about it before. The first one is formal testing. It's also called confirmatory testing. It's all about checking. It's just asking one question: "Pass or fail?" You can fully automate that process because it's all about mindless checking.

There is another dimension to testing, and that is exploratory testing; it's the second testing culture next to formal testing. To me, exploratory testing is a way of thinking, much more than it is a body of mechanical checks. Here it is all about figuring out whether there is a problem in your product, rather than asking whether certain assertions pass or fail. That's a subtle difference, and that kind of decision usually requires the application of a variety of different human observations such as questioning, study, modeling, inference, and many, many more.

Exploratory testing is really all about evaluating a product by learning it through exploration and experimentation. It's not about creating test cases; it's about creating test ideas. It's about performing powerful experiments, and this, for me, implies that exploratory testing is any testing that machines can't do, because machines just check; they do not think. So I don't really understand why people are so afraid that they will automate themselves out of a job. In that case, I just have to say that they simply don't understand the game they are playing.
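
To illustrate the "checking" half of that split, here is a minimal automated check in Python; the function and expected value are made up. The check answers pass or fail and nothing more, which is exactly the part a machine can own.

```python
# A minimal "check": an automated assertion against an expectation someone
# already wrote down. The function and expected value are hypothetical.
def convert_currency(amount_eur: float, rate: float) -> float:
    return round(amount_eur * rate, 2)

def test_convert_currency_known_rate():
    # A machine can run this thousands of times a day, but it will never wonder
    # about a negative rate, a huge amount, or a stale quote. Those questions
    # are what exploratory testing adds on top of checking.
    assert convert_currency(100.0, 1.10) == 110.0
```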

I think they can't envision themselves doing something different than what they're doing today. They can't see themselves doing exploratory testing. To them, it's all about the way they've always done it, and they can't really fathom doing it a different way. I agree completely. We need more people not just being told what to do, being passed instructions and running a bunch of test cases from a checklist.

Adam: We need people to get in there. I've never heard the term exploratory testing before, but I definitely agree with that. We need people who can think for themselves. We need human beings who can put two and two together and come up with a test case that was never even thought of in the first place. "Oh, you did this when this did this. How did you know that button linked to this process?" To be able to instinctively come up with these test cases. You can create the test cases rather than just, like you said, going down through, "Did it work?" "No." "Try this. Did it work?" "No." "Try this. Did it work?" "No, yes, no." That checklist thing. Yeah, you're right. That's exactly what a machine can do.

Ingo: Yeah, absolutely. It's really a pity, to be honest, because of the high coding effort that test automation causes, and that's definitely the case, those testers often think that they can't add value to software development outside test automation: "Well, test automation is the only way I really can add value to software development." And that's not true. I think it's really too bad to have people like this on the team.

Adam: I guess it comes down to the whole silo thing still. That's what DevOps is trying to do, break down the silos. "Well, I'm just in this silo," or ... I hate that term. That's a thing I rail against constantly, but people think, "I'm just this person, or I'm just on this team. I can't affect this other team in any way."

What they don't realize is if they're curious enough and they get in there and poke around and not just follow, in this case, the software testing test case, and they see connections, see different scenarios and things that have been come up with, that helps them know the product more, which makes them much better testers, too. They get in and immerse themselves not only in the test cases they're running, but they also look at patterns and get to know these patterns in the test cases: "Oh, it looks like we don't have a test case for this." They can then start telling the development team or others, "Oh, when you do this process, you may want to also do this, because these test cases look like they may not cover this specific scenario."

It's all about just thinking for yourself and just coming up with creative solutions to problems.

Ingo: Absolutely.
