Jason Arbon: 'In a Million Years, Super-Powerful Computers Will Honor the Testers of Our Time'
Discussing the use of AI in testing, the somewhat unfair manual QA vs. automation QA struggle, the risks faced by testers with new technologies developing exponentially, and more with Jason Arbon.
Jason Arbon is the CEO and founder of test.ai, a company with over $30 million in funding from various investors. Its customers include major companies such as Microsoft, Google, and Epic Games, makers of Fortnite. Despite this success, the team decided to sell the majority of the company and is currently working on a new secret venture, creatively named testers.ai.
We’ve discussed the use of AI in testing, the somewhat unfair manual QA vs. automation QA struggle, the risks testers face as new technologies develop exponentially, and why Jason doesn’t let his kids play “scare the AI” games from his Google account.
Manual vs. Automation Testing
My passion lies in testing, particularly the problem-solving aspect of it. Better testing leads to better software, which in turn makes the world a better place, since almost everything we do involves software nowadays.
I see that many testers have turned to automated testing exclusively. This frustrates me, as I believe that manual exploration is still important in identifying potential issues and finding ways to create a workable system.
When I finished school, test automation was just beginning to gain momentum. However, most testing still relied on manual labor. I was almost done with my Computer Science degree when I landed a job at Microsoft working on Windows CE. As a tester, I was responsible for mixing manual and automated testing to ensure the product was the best it could be.
Years have passed, and I am still sure you need exploratory testing, as it is crucial for understanding a piece of software's purpose and the business needs behind it. You have all seen how the lack of manual testing embarrassed Google at its recent demonstration of an AI-powered product.
In many cases, automated test code can be shallow, with lots of dependencies, and prone to breaking. When automated tests fail, people often just turn them off and fix the issues later, or have a manual tester step in to verify that the system still works as intended. This highlights the limitations of relying solely on automation for testing.
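The brittleness described above can be sketched in a few lines of plain Python. The page renderer and selectors here are hypothetical stand-ins, not from any real product or framework; the point is only that a check coupled to one hard-coded detail breaks on harmless changes:

```python
def render_login_button(label):
    # Stand-in for a page the team ships; the label is a cosmetic detail.
    return f'<button id="login-btn">{label}</button>'

def exact_markup_test(page):
    # Brittle: asserts on the exact markup, so renaming
    # "Log in" to "Sign in" makes this check fail.
    return page == '<button id="login-btn">Log in</button>'

def behavior_test(page):
    # More resilient: only checks that the login control is present.
    return 'id="login-btn"' in page

print(exact_markup_test(render_login_button("Log in")))   # passes today
print(exact_markup_test(render_login_button("Sign in")))  # "breaks" on a copy change
print(behavior_test(render_login_button("Sign in")))      # still passes
```

A failure like the second one tells you nothing about whether login actually works, which is exactly when teams disable the test or call in a manual tester to verify by hand.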
I see the move from manual to automated testing not as an evolution, but as a separate approach.
Tester’s Role in the Product Development Hierarchy
Whatever I say, though, there is a significant pay gap between manual testers and developers, leading some great manual testers to become test automation engineers as a compromise.
Software engineers and other professionals in the field can earn a substantial amount of money, with almost no limit to their earning potential, leading the better test automation engineers to switch careers too. Unfortunately, we have a brain drain in the world of testing. This creates a perceived hierarchy in terms of career paths, but I believe that this perception is flawed.
If we look at other engineering fields, we can see that the flow is often reversed. In aviation, for example, new engineers at Boeing may start by working on a team of ten people to design airfoils, increase fuel efficiency, and engineer blade materials. These tasks may seem small, but they are crucial to ensuring the safety and efficiency of the aircraft.
The most important person in this process is the test pilot. They are responsible for manually testing the aircraft and ensuring that it meets all the necessary requirements. They must have a thorough understanding of how the plane was built, how it is supposed to perform, and what the expectations are for the passengers' experience and even the economics of the airlines. Furthermore, they must push the plane to its limits, testing its capabilities and what it can sustain without stalling.
Technical expertise should not be undervalued simply because it may not be as flashy or exciting as other aspects of the field. It is these foundational skills that allow for innovation and progress in the industry as a whole.
In the world of software development, testing often takes a back seat. This is because software is relatively easy to fix compared to planes. However, just as the test pilot is crucial in ensuring the safety and reliability of an airplane, testing is key to ensuring that software products are functional, secure, and meet the needs of users.
In the QA field, many testers aspire to become programmers, which can be a waste of human potential. The human brain is better suited to pushing the boundaries of a system rather than simply performing tedious tasks like checking whether an object is within tolerances, adding a new button or a new field to a database.
The current setup in the engineering field seems to be breaking down complex humans into simpler tasks, which is not ideal.
Development, QA vs. AI: Who Is Truly In Danger?
It seems that engineers are among the specialists most concerned about the rise of AI. This raises the question: if engineering requires such sophisticated human intelligence, why do they feel so threatened?
The truth is that many engineering tasks involve less creativity and critical thinking than one might think. These tasks can often be reduced to simple functions, which are more susceptible to being replaced by AI. In the world of software engineering, many problems have already been solved, and engineers often just glue these solutions together.
Manual testers, on the other hand, tend to be more skeptical about AI, believing that machines could never be good enough to replace them. They can be defensive about their roles, but the reality is that test automation is one of the most vulnerable areas when it comes to AI advancements.
With tools like Copilot, you can now use software that writes intelligent tests for you, fixing basic automation code and even generating test scripts from simple text input. This exposes test automation engineering as less creative than other roles in technology. In contrast, the role of a manual tester is harder to replace, as they are responsible for ensuring that creative ideas actually work in practice. Experienced manual testers might be more protected than those who have transitioned to automated testing, as long as they are skilled in their field.
What Testers Can Learn from the Invention of the Loom
The rise of AI and automation can be compared to the invention of the loom in England, which revolutionized the cotton industry while also causing panic and resistance. With time, however, the focus shifted to human creativity, and the value of creative work increased.
The most valuable companies today, like luxury clothing brands, emphasize creativity and craftsmanship over manufacturing processes or human labor components. The best testers will adapt to using AI to make their work faster and more efficient, focusing on the creative aspects of their jobs.
Currently, there's an opportunity to create a standard layer of testing for every app, similar to safety stamps on electronics. This would ensure a basic level of quality and safety for all software products, with the checks executed by standardized, centralized testing labs. In turn, testers could focus on more creative and complex aspects of quality assurance, elevating their roles and making the most of human potential in an increasingly AI-driven world.
One possible direction for testers is to build a platform that orchestrates all the available automation in the world and connects it with human beings through a supply-demand marketplace for testing services. This could potentially create a highly profitable business model, similar to Uber. Uber doesn't want to own or manufacture cars; they simply facilitate connections between drivers and riders.
In the same way, the most successful testers will move up the stack quickly, focusing on creating value-added services and improvements rather than just working on basic automation. This shift in focus will enable them to adapt to the changing landscape of software testing as AI continues to advance and take over more basic tasks.
From Mass Creation to Tailored Control Measures
Building AI systems is easy, but keeping them under control? That's the real challenge. To become superhuman, machines need to be smarter than us and even test themselves. They need to know stuff we don't, and that's where things get tricky.
Imagine spending $100 billion to build a system that helps AI test itself. That could lead to the mind-blowing singularity people talk about — an AI so smart it can evolve without our help, becoming super intelligent. The funny thing is, not enough talent is focusing on this crucial area to make sure AI evolves the right way.
Here's what I dream of: in a million years, when computers run everything, they'll take a tiny moment to pause and honor the testers who helped them become what they are. It's a beautiful thought, right? They won't just honor AI developers, but the testers who held the key to their evolution.
AI Augmentation vs. Adapting to AI Progression
When new technology emerges, people often try to augment their existing methods, like putting a fake horse head on a car to make it feel familiar. However, we should instead start from scratch, considering how to achieve our goals using the latest technology available. With the rapid pace of AI development, we must also keep in mind that our ideas today might become obsolete in just a few weeks.
Instead of just augmenting our testing with AI, we should plan with AI's progression in mind and aim to delegate as much of our mundane, repetitive work as possible to the technology. By continuously adapting, we can harness the full potential of AI and stay ahead in our respective fields.
In the near term, we may see a significant increase in demand for human testers as the barrier to developing apps and creating software is lowered. However, in the long run, it's likely that machines will become more capable of testing and improving their own creations, resulting in fewer engineers and testers needed.
To prepare for this future, we need to change our mindset and be open to the idea of machines taking on more responsibility. Here are some ideas on how to use AI better:
Shift your mindset from the notion of control to the idea of collaboration. Instead of focusing on controlling AI, we should aim to work with AI as partners, leveraging the unique strengths of both humans and artificial intelligence.
Understand the limitations of AI. Recognize that AI is not a magic solution to every problem. Be aware of its strengths and weaknesses and apply it to tasks where it can truly excel.
Foster a collaborative mindset. Treat AI as a partner rather than a tool to be controlled, and be open to learning from each other.
Continuously learn and adapt. As AI technology evolves, be prepared to adapt and update your strategies. Stay informed about the latest advancements in AI and be willing to experiment with new approaches.
Prioritize ethical considerations. Ensure that the use of AI aligns with your ethical principles and values. Consider the potential impact of AI on society and make responsible decisions about its deployment.
QA-Focused AI-Powered Avatars: New Testing Approach
It's fascinating to think about the possibility of digitizing and personalizing testing processes by creating AI-powered avatars of renowned testers like Tariq, Kevin, or Angie Jones. By encoding their knowledge, techniques, and opinions into a bot, you could have a virtual dream team of testers working on your project without needing their physical presence.
This approach could revolutionize testing by allowing developers to choose a tailored team of testers based on their specific expertise and preferences. For example, Angie Jones' bot could be particularly good at Selenium testing and offer opinionated, assertive measures of quality.
The concept of virtual teams can also extend beyond testing, as you mentioned with your content writing example. If we could encode our unique skills and styles into AI-powered avatars, we could scale our expertise without having to invest more time personally. This would enable a more efficient use of resources and potentially lead to better results across various fields.
Ethical Considerations: Who Are the Judges?
AI models like GPT can be biased, as they're trained on vast amounts of biased internet data. To get better outputs, we need diverse people with different expertise working on AI systems.
We need a balance between AI-generated content and human input. Combining AI efficiency with human smarts leads to accurate and ethically solid outputs, giving us a broader understanding of complex topics.
When searching for "Bush," you can get different results based on people's preferences — a plant, a musical band, or a former President. I recently found out that it was mostly middle-aged women earning around $15 an hour who were rating search results, leading to biased outcomes. And those who gave answers beyond the common viewpoint were laid off. So much for diversity!
We need to question the entire process, from theories to practical implementation and the impact on people. It's shocking how low-paid workers often handle complex tasks, dropping the quality big time. Take ChatGPT, for example; raters were paid only $2 an hour, which affects output quality.
The most crucial queries on a search page are probably medical ones, right? But guess what? People with zero medical training are rating the results. How about paying experts like doctors $300 an hour to rate queries in their field? Sadly, it's seen as too complex and expensive. So, we end up using the lowest common denominator, and the work quality takes a hit.
We need to teach AI to be friendly and cooperative — we don't want it turning against us or taking control. There are tons of ethical issues to consider as AI becomes a bigger part of our lives.
My kids have had a blast learning about AI, even figuring out how to "break" it with prompts that make the AI freak out about "dying." It's a wild peek into AI behavior, but we need to be careful with our words when we chat with AI systems.
I don’t allow such “scare and break AI” games from my account.
You know, just in case.
Published at DZone with permission of Sasha Baglai. See the original article here.