The Testing Cycle

The testing cycle, or software testing life cycle, is a set of processes that testing teams use to deliver continuous quality feedback to the development teams. It helps uncover functional and non-functional mismatches between what the business asked for and what developers delivered. Stages of the testing cycle are based on the agile testing pyramid.


Goal of the Software Testing Life Cycle

The goal of the software testing life cycle is to provide just-in-time feedback to development teams on the quality of the application. The software testing life cycle is designed to expose defects as early as possible for two reasons:

  • To maximize the efficiency of the software development life cycle by providing early feedback to the developers and other stakeholders (shifting testing to the left). The developers can then address problems while the code is still top-of-mind—and before the organization invests significant time and resources in lengthy testing processes. 
  • To minimize the risk of failure when the application eventually goes into production. It also gives the organization valuable insight once the application is running in production, as the business gathers feedback through A/B testing and application monitoring from the end-user perspective (shifting testing to the right).

Teams can minimize test cycle time by using automation in the CI/CD pipeline while extensive continuous quality checks run in parallel. The result is greater testing efficiency and higher build quality—which can lead to an agile testing approach.


Build upon the Agile Testing Pyramid

[Figure: The agile testing pyramid]

The testing cycle is based upon the agile testing pyramid. Each testing stage has its own testing scope, and each stage is designed to find defects as early as possible in increasingly integrated environments.

The testing stage at the base of the pyramid has the largest number of test cases. These test cases execute quickly, but each one covers only a narrow slice of functionality. They are easy to automate and very resilient.

The testing stage at the top of the pyramid has the smallest number of test cases. These test cases execute more slowly than the test cases at any other stage, but they cover the broadest, most integrated functionality. They are the hardest to automate and the least resilient.

[Figure: The software testing life cycle]

When we take a look at how the testing pyramid fits into the testing cycle, we can break it down into the following testing stages:

  • Unit Testing: Designed to validate the behavior of individual functions, methods, and objects—and the interaction between them—within the application (see the sketch after this list).
  • System Testing: Designed to validate the functionality of the application or system in an isolated environment.
  • System Integration Testing: Designed to validate the integration between systems.
  • End-to-End Testing: Designed to validate complete business processes in a production-like environment.
  • User Acceptance Testing: Designed to receive feedback and acceptance from the business on functionality as well as user experience.
  • Production Testing: Designed to validate proper behavior in production during release.

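To make the base of the pyramid concrete, a unit test for a single method might look like the following minimal sketch. The PriceCalculator class and its discount rule are hypothetical examples, and JUnit 5 is assumed to be on the classpath:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical class under test: applies a percentage discount to an order total.
class PriceCalculator {
    double applyDiscount(double total, double discountPercent) {
        return total - (total * discountPercent / 100.0);
    }
}

class PriceCalculatorTest {

    // Validates the behavior of a single method in isolation:
    // fast to run, easy to automate, narrow in scope.
    @Test
    void appliesTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0, 10.0), 0.001);
    }
}
```
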
The testing cycle as illustrated above reflects the ideal scenario for a large enterprise organization. In smaller organizations, the system integration and end-to-end testing stages may be combined into a single testing stage.


Transition From Stage to Stage With Automated Quality Gates

Quality gates are checkpoints that help ensure the software’s quality as it moves through the testing cycle. These checkpoints break down a full testing process that may take a long time to complete into smaller, incremental portions that can provide feedback much more quickly. Using quality gates in the software development life cycle is not a new concept. The key to implementing quality gates is to balance speed and test coverage at each of the testing stages.  

To keep the test cycle time as short as possible, only the test cases that cover the highest business risk are executed as part of so-called smoke tests. If tests are successful, the testing cycle continues. If tests fail, immediate feedback is delivered to the development teams to address the defect and the build is rolled back. Resilience of the automated test cases is essential for this process to succeed; test cases that report false positives (in other words, fail for reasons other than a defect) will undermine the credibility of the results.
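
One way to carve out such a smoke suite is to tag the highest-risk test cases so the pipeline runs only that subset at the quality gate. A minimal sketch using JUnit 5 tags, where the tag name, test class, and scenario are illustrative assumptions:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CheckoutSmokeTest {

    // Tagged so the CI/CD pipeline can select only the highest-business-risk
    // tests at the quality gate, e.g. with Maven Surefire: mvn test -Dgroups=smoke
    @Tag("smoke")
    @Test
    void checkoutCompletesForStandardOrder() {
        // Placeholder for the highest business impact scenario of this system;
        // a real test would drive the actual checkout flow.
        boolean orderConfirmed = placeStandardOrder();
        assertTrue(orderConfirmed);
    }

    // Hypothetical helper standing in for the real checkout call.
    private boolean placeStandardOrder() {
        return true;
    }
}
```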

Putting quality gates at every stage of development may sound as if it would increase the length of the cycle, but it will actually save time in the long run by enabling organizations to find faulty builds quickly. It reduces the wait time for teams that are blocked from continuing work and it doesn’t waste the time of teams that are testing a faulty build. In fact, because the feedback to the development teams is so fast, the process shouldn’t even deter developers from checking in their code frequently.


The Agile Testing Approach to Continuous Delivery

[Figure: The agile testing approach to continuous delivery]

Testing within the teams

Within development teams, developers and automation specialists may be designing and redesigning unit tests using concepts such as test-driven development and static code analysis. Developers implement and refactor the code of the application to meet new and revised business requirements. Upon every check-in of the code to the central repository, unit tests of the affected areas will be executed on the local machine of the developer or in a containerized environment. Upon successful results of the unit tests—after every check-in or on a scheduled interval—the continuous delivery cycle builds and deploys the application to an isolated development environment.

Automation specialists collaborate with test analysts to define and design automated system regression tests for functional and non-functional verifications. Test analysts perform risk analysis to constantly redefine the smoke test, regression, and progression portfolio. On top of that, they extend the coverage with exploratory tests.

Upon deployment of a new build to the development environment, the testing cycle triggers automated system smoke tests. These smoke tests contain the highest business impact scenarios of the application and act as an automated quality gate to determine the success of the deployment.

  • On failure, the rollback mechanism recovers the last stable build of the application.
  • On success, the core regression functionality of the system under test has been verified.
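
Such a system smoke test typically drives the freshly deployed build from the outside. A minimal sketch using JUnit 5 and the JDK's built-in HttpClient, where the base URL and endpoint are assumptions and the pipeline would supply the real values:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SystemSmokeTest {

    // Base URL of the isolated development environment; assumed to be
    // injected by the pipeline after deployment.
    private static final String BASE_URL =
            System.getProperty("app.baseUrl", "http://dev.example.internal");

    // Highest business impact check: the application answers on its
    // core endpoint right after deployment.
    @Tag("smoke")
    @Test
    void applicationRespondsAfterDeployment() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/health"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}
```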

Automation specialists and test analysts continue to execute extensive functional and non-functional system tests on the application. At the same time, the application is deployed to the integration environment as a manual or automated next step of the delivery cycle so that system integration testing can begin.


Testing across the teams

Automation specialists and test analysts from the individual teams come together as a system team and work on the integration of systems. The system team designs, maintains, and executes automated system integration test cases for functional and non-functional verification of the integration between systems. The smoke, regression, and progression test portfolios are again optimized through business risk analysis, while additional exploratory tests are performed to identify any gaps. The testing cycle continues as soon as a new build is deployed to the integration environment.

Upon deployment, system integration smoke tests are executed to verify that the highest impact integrations between systems are working as intended.

  • On failure, the rollback mechanism recovers the last stable build of the application.
  • On success, the core regression functionality of the integration between systems has been verified.
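
A system integration test verifies that data flowing out of one system arrives correctly in the next. A hedged sketch along the same lines, where both service URLs, endpoints, and payloads are illustrative assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderToBillingIntegrationTest {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Assumed environment URLs for the two systems under integration.
    private static final String ORDER_SERVICE = "http://int.example.internal/orders";
    private static final String BILLING_SERVICE = "http://int.example.internal/billing/invoices";

    // Verifies the integration between systems: an order created in the
    // order service should result in an invoice in the billing service.
    @Test
    void orderCreationProducesInvoice() throws Exception {
        HttpRequest createOrder = HttpRequest.newBuilder()
                .uri(URI.create(ORDER_SERVICE))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"A-100\",\"quantity\":1}"))
                .build();
        HttpResponse<String> created =
                CLIENT.send(createOrder, HttpResponse.BodyHandlers.ofString());
        assertEquals(201, created.statusCode());

        // Naive lookup of the downstream system; a real suite would extract
        // the order id from the response and wait for propagation.
        HttpRequest findInvoice = HttpRequest.newBuilder()
                .uri(URI.create(BILLING_SERVICE + "?sku=A-100"))
                .GET()
                .build();
        HttpResponse<String> invoices =
                CLIENT.send(findInvoice, HttpResponse.BodyHandlers.ofString());
        assertTrue(invoices.body().contains("A-100"));
    }
}
```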

The system team continues extensive functional and non-functional verification of the integrations. At the same time, the application is deployed to the acceptance environment as a manual or automated next step of the delivery cycle so that end-to-end testing can begin.


Testing across the enterprise

The system team also designs, maintains, and executes automated end-to-end test cases for the functional and non-functional verification of entire business processes. Smoke, regression, and progression test portfolios are optimized based on business risk. Exploratory tests cover any remaining gaps. The testing cycle continues upon deployment of the build to the acceptance environment.

End-to-end smoke tests determine the success of the build and cover the riskiest business processes.

  • On failure, the rollback mechanism restores the last stable build.
  • On success, the system team performs extensive functional and non-functional automated and exploratory tests to further verify the business processes.
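
An end-to-end test drives a complete business process through the user interface of the acceptance environment. A minimal sketch with Selenium WebDriver, where the URL, element locators, and flow are assumptions and a local ChromeDriver is presumed available:

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CheckoutEndToEndTest {

    // Target environment, assumed to be injected by the pipeline.
    private static final String BASE_URL =
            System.getProperty("app.baseUrl", "https://acceptance.example.internal");

    // Walks one complete business process (search, add to cart, check out)
    // against the production-like acceptance environment.
    @Test
    void customerCanOrderAProduct() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get(BASE_URL + "/shop");
            driver.findElement(By.id("search")).sendKeys("coffee");
            driver.findElement(By.id("search-button")).click();
            driver.findElement(By.cssSelector(".product .add-to-cart")).click();
            driver.findElement(By.id("checkout")).click();
            driver.findElement(By.id("confirm-order")).click();

            String confirmation = driver.findElement(By.id("order-confirmation")).getText();
            assertTrue(confirmation.contains("Thank you"));
        } finally {
            driver.quit();
        }
    }
}
```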

At the same time, the business gets involved in the user acceptance testing of the newly delivered functionality and the user experience. The business accepts or rejects the latest changes. This is always a manual process.

In continuous deployment, after successful end-to-end testing, the new build of the application gets deployed automatically to the production environment. In most enterprise organizations, however, the last step in the process is always a manual step. Deployment to production occurs after user acceptance, after extensive testing in all stages, and on set intervals. For production verification, the automated end-to-end smoke tests are reused to verify the successful deployment to production. Feedback is gathered on the user experience of the newly delivered functionality through A/B testing and application monitoring. The feedback is then used as an input for the next iteration of the software development and testing cycle.
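
If the end-to-end smoke tests read their target environment from the outside, as in the earlier sketches, reusing them for production verification can be as simple as pointing that parameter (for example, the assumed app.baseUrl property) at the production URL during the release pipeline run.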

The business is constantly involved to provide feedback on newly developed functionality. Meanwhile, operations and automation engineers work on enabling smooth integration of the testing cycle—including its quality gates—into the continuous delivery cycle.

These teams should not remain blind to the efforts of their colleagues in other teams. To prevent redundant efforts, program and enterprise test architects must orchestrate the communication and collaboration between the teams to open communication channels and break down silos. These test architects should work with every team that executes tests to make sure each team reuses the work of the others and to limit redundancy in testing scope within the testing cycle.

The continuous collaboration and coordination between developers, test analysts, automation specialists, operations, the business, automation engineers, and test architects is crucial to keep the testing cycle from breaking.