Can AI Redefine Software Testing?

Artificial intelligence (AI) is redefining the software industry as we know it. From improving the intelligence of mainstream technologies such as databases to driving new technology trends such as bots, AI is becoming a foundational piece of the new generation of software technologies. One area where I am particularly intrigued by the potential of AI is software testing.

If we oversimplify our view of the software building process, we can identify two main groups of activities: creation and testing. Of those two, I believe the current stage of AI technologies has the opportunity to drastically improve software testing and make it more intelligent. From the AI standpoint, software creation requires capabilities such as creativity, intuition, or imagination that are still at a very early stage in AI technologies. Software testing, however, is mostly driven by the creation and execution of data-driven rules. That's precisely the type of scenario that can immediately benefit from AI technologies in their current form.

How Can AI Improve Software Testing?

If we try to dissect software testing processes today, we can argue that they haven't changed tremendously in the last 20 years. Despite the rapid changes in software technology, software testing methodologies haven't evolved that much.

From a conceptual standpoint, software testing processes are based on defining a set of rules that validate well-known use cases of a specific software solution. I know I am oversimplifying a bit, but almost every testing methodology can be modeled as a variation of that approach. The introduction of AI as a complement to a test engineer can bring some interesting benefits to those processes.
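
To make that rule-based model concrete, here is a deliberately simplified sketch in which a test case is a data-driven rule: a well-known input paired with an expected output. The discount_price function and its expected values are hypothetical, invented only to illustrate the pattern.

```python
# A test case modeled as a data-driven rule: known inputs plus expected output.
# The function under test (discount_price) is a hypothetical example.

def discount_price(price: float, rate: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - rate), 2)

# Each rule pairs a well-known use case with its expected result.
TEST_RULES = [
    {"inputs": (100.0, 0.10), "expected": 90.0},
    {"inputs": (59.99, 0.25), "expected": 44.99},
    {"inputs": (10.0, 0.0), "expected": 10.0},
]

def run_rules():
    for rule in TEST_RULES:
        actual = discount_price(*rule["inputs"])
        status = "PASS" if actual == rule["expected"] else "FAIL"
        print(f"{status}: discount_price{rule['inputs']} -> {actual} "
              f"(expected {rule['expected']})")

if __name__ == "__main__":
    run_rules()
```

The limitation this section explores is that every rule in the list above had to be written by hand, for a use case someone already anticipated.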

Formulating New Test Cases

Software testing stacks can leverage AI to create new test cases based on an initial set of observations about the runtime behavior of the software. In this approach, testers will train AI algorithms on the expected behavior of a software solution based on an initial set of tests and runtime data points. After that, AI testing algorithms can leverage runtime information to formulate new test cases that better resemble the production behavior of the software.
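
As a minimal sketch of that idea, the snippet below clusters hypothetical runtime observations and proposes each cluster center as a new, production-like test case. The features (request latency, payload size), the data points, and the assertion threshold are all invented assumptions for illustration, not part of any specific testing stack.

```python
# Deriving candidate test cases from clustered runtime observations.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical runtime data points: (request latency ms, payload size KB).
runtime_observations = np.array([
    [12, 1.2], [15, 1.1], [14, 1.3],       # typical small requests
    [110, 48.0], [95, 52.5], [120, 47.2],  # bulk uploads
    [300, 2.0], [280, 1.8],                # slow small requests
])

# Group observed behavior into a handful of representative profiles.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
kmeans.fit(runtime_observations)

# Each cluster center becomes a proposed, production-like test case.
for latency, size in kmeans.cluster_centers_:
    print(f"Candidate test case: send a {size:.1f} KB request, "
          f"assert it is handled within {latency * 1.5:.0f} ms")
```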

Inducing Intelligent Failures

Chaos Monkey is a very popular resiliency tool, created by Netflix, that causes random failures in a software system. AI technologies can improve chaos-testing models by understanding the data describing the runtime behavior of the different components of a software system, as well as the hidden dependencies between them.
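
The sketch below contrasts purely random failure injection with an "intelligent" variant that biases target selection toward components with the largest observed fan-in. The service names and dependency graph are hypothetical stand-ins for what an AI model would learn from runtime telemetry.

```python
# Random vs. dependency-aware failure injection.
import random

# Hypothetical dependency graph: service -> services that depend on it.
dependents = {
    "auth":    ["orders", "billing", "profile"],
    "orders":  ["billing"],
    "billing": [],
    "profile": [],
}

def random_failure() -> str:
    """Classic Chaos Monkey: pick any service uniformly at random."""
    return random.choice(list(dependents))

def intelligent_failure() -> str:
    """Bias injection toward services with the most dependents,
    where a failure exercises the largest blast radius."""
    services = list(dependents)
    weights = [1 + len(dependents[s]) for s in services]
    return random.choices(services, weights=weights, k=1)[0]

if __name__ == "__main__":
    print("Random target:     ", random_failure())
    print("Intelligent target:", intelligent_failure())
```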

Software Behavior Simulation

Simulation takes several AI concepts applied to software testing to a completely different level. Using AI technologies, we could create simulation models that represent the future runtime behavior of a software system under a well-defined set of scenarios.
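
A deliberately naive sketch of the idea: fit a simple model to observed load-versus-latency data, then project behavior under hypothetical future traffic scenarios. A real simulation model would be far richer; the numbers and scenario names here are invented.

```python
# Simulating future runtime behavior from observed load/latency data.
import numpy as np

# Hypothetical observations: (requests/sec, p95 latency ms).
load = np.array([50, 100, 200, 400, 800])
latency = np.array([20, 24, 33, 55, 102])

# Fit latency as a linear function of load (an intentionally simple model).
slope, intercept = np.polyfit(load, latency, deg=1)

# Project each well-defined scenario forward through the model.
scenarios = {"baseline": 500, "steady growth": 1200, "peak sale event": 2000}
for name, future_load in scenarios.items():
    predicted = slope * future_load + intercept
    print(f"Scenario '{name}': ~{future_load} req/s -> "
          f"simulated p95 latency ~{predicted:.0f} ms")
```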

Predictive Failure

The objective of software testing is to validate the behavior of a software solution using a well-known set of rules (test cases). AI can use the results of tests not only to validate the functioning of a software solution but also to gather insights about its current state. By gaining that level of intelligence, an AI testing stack could accurately predict conditions that will cause failures in the software and recommend appropriate solutions.
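
As a minimal sketch of what that could look like, the snippet below trains a classifier on metrics gathered from hypothetical past test runs and uses it to estimate the probability that a new condition leads to failure. The features (memory utilization, error rate, queue depth), the data, and the recommendation text are all illustrative assumptions.

```python
# Predicting failure conditions from past test-run telemetry.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past test runs: [memory utilization %, error rate %, queue depth].
X = np.array([
    [40, 0.1, 5], [55, 0.2, 8], [45, 0.1, 4],     # healthy runs
    [90, 2.5, 60], [85, 3.1, 75], [95, 4.0, 90],  # runs that ended in failure
])
y = np.array([0, 0, 0, 1, 1, 1])  # 1 = test run failed

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a current, production-like condition against the learned model.
current = np.array([[88, 1.9, 55]])
prob = model.predict_proba(current)[0, 1]
print(f"Predicted probability of failure: {prob:.0%}")
if prob > 0.5:
    print("Recommendation: scale out workers and investigate the error spike")
```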

Written by

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
