As I've mentioned in previous posts, this year I'm reading one testing-related book a month and reviewing it in my blog. This month I read Enterprise Continuous Testing, by Wolfgang Platz with Cynthia Dunlop.
This book aims to solve the problems often found in continuous testing. Continuous testing is defined by the author as "the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release as rapidly as possible". Platz writes that there are two main problems that companies encounter when they try to implement continuous testing:
1. The speed problem
- Testing is a bottleneck because most of it is still done manually
- Automated tests are redundant and don't provide value
- Automated tests are flaky and require significant maintenance

2. The business problem
- The business hasn't performed a risk analysis on their software
- The business can't distinguish between a test failure that is due to a trivial issue and a failure that reveals a critical issue
I have often encountered the first set of problems, but I never really thought about the second set. While I have knowledge of the applications I test and I know which failures indicate serious problems, it never occurred to me to make sure that product managers and other stakeholders could look at our automated test results and tell whether our software is ready to be released.
Fortunately, Platz suggests a four-step solution to help ensure that the right things are tested, and that those tests are stable and provide value to the business.
Step One: Use risk prioritization
Risk prioritization involves calculating the risk of each business requirement of the software. First, the software team, including the product managers, should make a list of each component of their software. Then, they should rank the components twice: first by how frequently the component is used, and second by how bad the damage would be if the component didn't work. The two rankings should be multiplied together to determine the risk prioritization. The higher the number is, the higher the risk; higher risk items should be automated first, and those tests should have priority.
An example of a lower-risk component in an e-commerce platform might be the product rating system: not all of the customers who use the online store will rate the products, and if the rating system is broken, it won't keep customers from purchasing what's in their cart. But a higher-risk component would be the ability to pay for items with a credit card: most customers pay by credit card, and if customers can't purchase their items, they'll be frustrated and the store will lose revenue.
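The ranking-and-multiplication step is easy to sketch in code. The component names and the 1–5 scales below are my own illustration (the book leaves the scale up to the team), not something prescribed by Platz:

```python
# Risk prioritization sketch: rank each component by usage frequency and
# by the damage caused if it breaks (both on a 1-5 scale here), then
# multiply the two rankings. Names and scores are illustrative only.
components = {
    "credit card payment": {"frequency": 5, "damage": 5},
    "product search":      {"frequency": 5, "damage": 4},
    "product ratings":     {"frequency": 2, "damage": 1},
}

def risk_score(component):
    return component["frequency"] * component["damage"]

# Automate the highest-risk components first.
by_risk = sorted(components, key=lambda name: risk_score(components[name]),
                 reverse=True)
print(by_risk)  # credit card payment first, product ratings last
```

The exact scale matters less than doing the exercise with the whole team, product managers included, so that everyone agrees on which components carry the most risk.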
Step Two: Design tests for efficient test coverage
Once you've determined which components should be tested with automation, it's time to figure out the most efficient way to test those components. You'll want to use the fewest tests possible while still ensuring good risk coverage, because the fewer tests you have, the faster your team will get feedback on the quality of a new build. It's also important that each test makes clear why it failed. For example, if you have a single test that checks both that a password has been reset and that the user can log in with it, a failure won't immediately tell you whether the password reset or the login is broken. It would be better to have two separate tests in this case.
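The split can be illustrated with a minimal sketch. The helpers `reset_password` and `log_in` here are hypothetical stand-ins for whatever your application actually provides; the point is that each test now fails for exactly one reason:

```python
# Hypothetical stand-ins for real application calls; in practice these
# would hit your app's API or UI.
users = {"alice": "old-secret"}

def reset_password(user, new_password):
    users[user] = new_password
    return True

def log_in(user, password):
    return users.get(user) == password

# One concern per test: a failure points directly at the broken step.
def test_password_reset():
    assert reset_password("alice", "new-secret")

def test_login_with_new_password():
    reset_password("alice", "new-secret")
    assert log_in("alice", "new-secret")

test_password_reset()
test_login_with_new_password()
```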
Platz advocates the use of equivalence classes: ranges of inputs that will all produce the same result in the application. He uses the example of a car insurance application: if an insurance company won't give a quote to a driver who is under eighteen, it's not necessary to write one test with a sixteen-year-old driver and another with a seventeen-year-old, because both tests exercise the same code path.
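A toy version of the insurance example makes the idea concrete. Only the under-eighteen cutoff comes from the book; the quote amounts and the second age boundary are invented for illustration. Every age below eighteen falls into one equivalence class, so a single representative value covers it:

```python
# Toy car-insurance quote: drivers under 18 are refused.
# The rates and the 25-year-old boundary are invented for this sketch.
def quote(age):
    if age < 18:
        return None              # class 1: under 18 -> no quote
    if age < 25:
        return 500               # class 2: young-driver rate
    return 300                   # class 3: standard rate

# Sixteen and seventeen exercise the same code path, so one
# representative per class (plus the boundaries) is enough:
assert quote(16) is None         # representative of the "under 18" class
assert quote(18) == 500          # boundary of the next class
assert quote(30) == 300          # standard-rate class
```

Picking one representative per class, plus the boundary values between classes, keeps the suite small without sacrificing risk coverage.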
Step Three: Create automated tests that provide fast feedback
Platz believes that the best type of automated test is the API test, for two reasons: one, while unit tests are very important, developers often neglect to update them as a feature changes, and two, UI tests are slow and flaky. API tests are more likely to be kept current because they are usually written by the software testers, and they are fast and reliable. I definitely agree with this assessment!
The author advises that UI tests should be used only in cases where you want to check the presence of or location of elements on a webpage, or when you want to check functionality that will vary by browser or device.
Step Four: Make sure that your tests are robust
This step involves making sure that your tests won't be flaky due to changing test data or unreliable environments. Platz suggests that synthetic test data is best for most automated tests, because you have control over the creation of the data. In the few cases where it's not possible to craft synthetic data that matches an important test scenario, masked production data can be used.
In situations where environments might be unreliable, such as a component that your team has no control over that is often unavailable, he suggests using service virtualization, where responses from the other environment are simulated. This way you have more control over the stability of your tests.
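Real service-virtualization tools are more elaborate than this, but the principle can be sketched with a simple stub. The `InventoryClient` class below is a hypothetical client for a third-party service, not anything from the book; `unittest.mock` stands in for a virtualization layer by returning a canned response:

```python
from unittest import mock

# Hypothetical client for a third-party service your team doesn't control.
class InventoryClient:
    def stock_level(self, sku):
        # In real life this network call is the unreliable part.
        raise ConnectionError("upstream service unavailable")

def can_ship(client, sku):
    return client.stock_level(sku) > 0

# Virtualize the dependency: replace the real call with a simulated
# response, so the test is stable regardless of the upstream environment.
virtual_client = mock.Mock(spec=InventoryClient)
virtual_client.stock_level.return_value = 3

assert can_ship(virtual_client, "SKU-123")
```

The test now exercises your own logic deterministically; the upstream service's availability no longer affects whether the build goes red.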
Enterprise Continuous Testing is a short book, but it is packed with valuable information! There are many features of the book that I didn't touch on here, such as metrics and calculations that can help your team determine the business value of your automation. I highly recommend this book for anyone who wants to create an effective test automation strategy for their team.