Saturday, October 13, 2018

The Power of Pretesting

Having been in the software testing business for a few years now, I've become accustomed to various types of testing: Acceptance Testing, Regression Testing, Exploratory Testing, Smoke Testing, etc.  But in the last few weeks, I've been introduced to a type of testing I hadn't thought of before: Pretesting.

On our team, we are working to switch some automatically delivered emails from an old system to a new system.  When we first started testing, we were mainly focused on whether the emails in the new system were being delivered.  It didn't occur to us to look at the content of the new emails until later, and then we realized we had never really looked at the old emails.  Moreover, because the emails contained a lot of detail, we found that we kept missing things: some extra text here, a missing date there.  We discovered that the best way to prevent these mistakes was to start testing before the new feature was delivered, and thus, Pretesting was born.

Now whenever an email is about to be converted to the new system, we first test it in the old system.  We take screenshots, and we document any needed configuration or unusual behavior.  Then when the email is ready for the new system, it's easy to go back and compare with what we had before.  This is now a valuable tool in our testing arsenal, and we've used it in a number of other areas of our application.



When should you use Pretesting?
  • when you are testing a feature you have never tested before
  • when no one in your company seems to know how a feature works
  • when you suspect that a feature is currently buggy
  • when you are revamping an existing feature
  • when you are testing a feature that has a lot of detail

Why should you Pretest?

Pretesting will save you the headache of trying to remember how something "used to work" or "used to look".  If customers will notice the change you are about to make, it's good to make note of how extensive the change will be.  If there are bugs in the existing feature, it's helpful to know that before the development work starts, because the developer can fix those bugs while they're already in the code.  Pretesting is also helpful for documenting how the feature works, so you can share those details with others who might be working on or testing the feature.

How to Pretest:

1. Conduct exploratory testing on the old feature.  Figure out what the Happy Path is.
2. Document how the Happy Path works, and include any necessary configuration steps.
3. Continue to test, exploring the boundaries and edge cases of the feature.
4. Document any "gotchas" you find - these are not bugs, but areas of the feature that might not work the way someone would expect.
5. Log any bugs you find, and discuss them with your team to determine whether they should be fixed with the new feature or left as is.
6. Take screenshots of any complicated screens, such as emails, messages, or screens with a lot of text, images, or buttons.  Save these screenshots in an easily accessible place, such as a wiki page.

When the New Feature is Ready:

1. Run through the same Happy Path scenario with the new feature, and verify that it behaves the same way as the old feature
2. Test the boundaries and edge cases of the feature, and verify that it behaves in the same way as the old feature
3. Verify that any bugs the team has decided to fix have indeed been fixed
4. Compare the screenshots of the old feature with the screenshots of the new feature, and verify that they are the same (with the exception of anything the team agreed to change; see the sketch after this list for one way to automate the comparison)
5. Do any additional testing needed, such as testing new functionality that the old feature didn't have or verifying that the new feature integrates with other parts of the application
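
One way to cut down on the manual comparison work is to write the happy-path checks once and point them at both systems.  Here's a rough sketch in Java, assuming JUnit 5 and hypothetical OldEmailSystem and NewEmailSystem clients that can each return a rendered email body as a String - the class and method names are illustrative, not from a real library.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class WelcomeEmailComparisonTest {

    // Hypothetical clients for the old and new email systems
    private final OldEmailSystem oldSystem = new OldEmailSystem();
    private final NewEmailSystem newSystem = new NewEmailSystem();

    @Test
    public void newSystemMatchesPretestedBaseline() {
        String oldBody = oldSystem.renderWelcomeEmail("test@example.com");
        String newBody = newSystem.renderWelcomeEmail("test@example.com");

        // Anything the team agreed to change would be stripped or
        // normalized here before the comparison
        assertEquals(oldBody, newBody);
    }
}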

The power of Pretesting is that it helps you notice details you might otherwise miss in a new feature, and as a bonus, find existing bugs in the old feature.  Moreover, testing the new feature will be easy, because you will have already created a test plan.  Your work will help the developer do a better job, and your end users will appreciate it!


Saturday, October 6, 2018

Automating Tests for a Complicated Feature

Two weeks ago, I introduced a hypothetical software feature called the Superball Sorter, which would sort out different colors and sizes of Superballs among four children.  I discussed how to create a test plan for the feature, and then last week I followed up with a post about how to organize a test plan using a simple spreadsheet.  This week I'll be describing how to automate tests for the feature.

In case you don't have time to view the post where I introduced the feature, here's how it works:

  • Superballs can be sorted among four children: Amy, Bob, Carol, and Doug
  • The balls come in two sizes: large and small
  • The balls come in six colors: red, orange, yellow, green, blue, and purple
  • The children can be assigned one or more rules for sorting: for example, Amy could have a rule that says that she only accepts large balls, or Bob could have a rule that says he only accepts red or orange balls
  • Distribution of the balls begins with Amy, proceeds through the other children in alphabetical order, and continues in the same manner as if one were dealing a deck of cards
  • Each time a new ball is sorted, distribution continues with the next child in the list
  • The rules used must result in all the balls being sortable; if they do not, an error will be returned
  • Your friendly developer has created a ball distribution engine that will create balls of various sizes and colors for you to use in testing


When automating tests, we want to keep our tests as simple as possible.  This means sharing code whenever we can.  In examining the manual test plan, we can see that there are three types of tests here:
  • A test where none of the children have any rules
  • Tests where all the children have rules, but the rules result in some balls not being sortable
  • Tests where one or more children have rules, and all of the balls are sortable

We can create a separate test class for each of these types, which will make it easy for us to share code inside each class.  We'll also have methods that will be shared among the classes:

child.deleteBalls()- this will clear all the distributed balls, getting ready for the next test

child.deleteRules()- this will clear out all the existing rules, getting ready for the next test

distributeBalls(numberOfBalls)- this will randomly generate a set number of balls of various sizes and colors and distribute them one at a time to the children, according to the rules

verifyEvenDistribution(numberOfBalls)- this is for scenarios where none of the children have rules; it will take the number of balls distributed and verify that each child has one-fourth of the balls

child.addRule(Size size, Color color)- this will set a rule for a child; each child can have more than one rule, and either the size or color (but not both) can be null

child.verifyRulesRespected()- for the specified child, this will iterate through each ball and each rule and verify that each ball has respected each rule

child.addFourthRuleAndVerifyError(Size size, Color color)- for the tests in the InvalidRules class, it will always be the fourth child's rules that will trigger the error, because it's only with the fourth child that the Sorter realizes that there will be balls that can't be sorted.  So this method will assert that an error is returned.
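
To make these descriptions concrete, here's a rough sketch in Java of how the shared pieces might look.  The enums, the method names, and the "every ball respects every rule" check come straight from the list above; the rest (the Ball and Rule classes, null meaning "no restriction") is my assumption about the implementation.  The two distribution helpers appear in the base class sketch a little further down.

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;

enum Size { LARGE, SMALL }
enum Color { RED, ORANGE, YELLOW, GREEN, BLUE, PURPLE }

class Ball {
    final Size size;
    final Color color;
    Ball(Size size, Color color) { this.size = size; this.color = color; }
}

class Rule {
    final Size size;   // null means any size is accepted
    final Color color; // null means any color is accepted
    Rule(Size size, Color color) { this.size = size; this.color = color; }

    boolean accepts(Ball ball) {
        return (size == null || ball.size == size)
            && (color == null || ball.color == color);
    }
}

class Child {
    final List<Rule> rules = new ArrayList<>();
    final List<Ball> balls = new ArrayList<>();

    void deleteBalls() { balls.clear(); }
    void deleteRules() { rules.clear(); }
    void addRule(Size size, Color color) { rules.add(new Rule(size, color)); }

    // Iterate through each ball and each rule, verifying that every
    // ball this child holds respects every one of the child's rules
    void verifyRulesRespected() {
        for (Ball ball : balls) {
            for (Rule rule : rules) {
                assertTrue(rule.accepts(ball));
            }
        }
    }

    // Adds the rule that makes the configuration unsatisfiable, then
    // asserts that the Sorter returns an error (engine call omitted)
    void addFourthRuleAndVerifyError(Size size, Color color) {
        addRule(size, color);
        // assertion against the engine's error response goes here
    }
}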

Each test class should have setup and cleanup steps to avoid repetitive code.  However, I would call child.deleteBalls() and child.deleteRules() for each child in both my setup and cleanup steps, just in case an error in a test causes the cleanup step to be missed.
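
Here's what that could look like as a shared base class, again as a sketch assuming JUnit 5; distributeBalls is assumed to wrap the developer's ball distribution engine.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;

public class DistributionTestBase {
    protected final Child amy = new Child();
    protected final Child bob = new Child();
    protected final Child carol = new Child();
    protected final Child doug = new Child();
    protected final List<Child> children = List.of(amy, bob, carol, doug);

    @BeforeEach
    public void setUp() { resetChildren(); }

    @AfterEach
    public void cleanUp() { resetChildren(); }

    // Reset both before and after each test, so a test that errors
    // out can't leave stale balls or rules behind for the next one
    private void resetChildren() {
        for (Child child : children) {
            child.deleteBalls();
            child.deleteRules();
        }
    }

    // Assumed to call the developer's ball distribution engine, which
    // generates random balls and deals them out according to the rules
    protected void distributeBalls(int numberOfBalls) {
        // engine call goes here
    }

    // With no rules in play, every child should end up with exactly
    // one-fourth of the balls
    protected void verifyEvenDistribution(int numberOfBalls) {
        for (Child child : children) {
            assertEquals(numberOfBalls / 4, child.balls.size());
        }
    }
}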

For the DistributionWithNoRules test class, each test would consist of:

distributeBalls(numberOfBalls);
verifyEvenDistribution(numberOfBalls);

All that would vary in each test would be the number of balls passed in.
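
As a complete class, that might look like this, building on the sketches above (the ball count is arbitrary, as long as it's a multiple of four):

import org.junit.jupiter.api.Test;

public class DistributionWithNoRules extends DistributionTestBase {

    @Test
    public void fortyBallsAreDistributedEvenly() {
        distributeBalls(40);
        verifyEvenDistribution(40);
    }
}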

For the DistributionWithRules test class, each test would consist of:

child.addRule(Size size, Color color); --repeated as many times as needed for each child
distributeBalls(numberOfBalls);
child.verifyRulesRespected(); --repeated for each child that has one or more rules
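
Using the example rules from the feature description (Amy only accepts large balls; here Bob gets a single rule that he only accepts red balls), a sketch of one such test:

import org.junit.jupiter.api.Test;

public class DistributionWithRules extends DistributionTestBase {

    @Test
    public void sizeAndColorRulesAreRespected() {
        amy.addRule(Size.LARGE, null);  // Amy: large balls only
        bob.addRule(null, Color.RED);   // Bob: red balls only

        distributeBalls(40);

        amy.verifyRulesRespected();
        bob.verifyRulesRespected();
    }
}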

Finally, for the InvalidRules test class, each test would consist of:

child.addRule(Size size, Color color); --repeated as many times as needed for the first three children
child.addFourthRuleAndVerifyError(Size size, Color color); --verifying that the fourth rule triggers an error
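
For example, if every child will only accept large balls, the small balls can never be sorted, and the Sorter can only discover this once the fourth child's rule is added.  A sketch:

import org.junit.jupiter.api.Test;

public class InvalidRules extends DistributionTestBase {

    @Test
    public void allLargeOnlyRulesTriggerAnError() {
        amy.addRule(Size.LARGE, null);
        bob.addRule(Size.LARGE, null);
        carol.addRule(Size.LARGE, null);

        // The fourth rule makes the configuration unsatisfiable,
        // so this helper asserts that an error is returned
        doug.addFourthRuleAndVerifyError(Size.LARGE, null);
    }
}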

The nice thing about organizing the tests like this is that it makes them easy to vary.  For example, you could have the DistributionWithNoRules test class run with 40, 400, and 800 balls; or you could set up a random number generator that would generate any multiple of four for each test.

You could also set up your DistributionWithRules and InvalidRules test classes to take in rule settings from a separate table, varying the table occasionally for greater test coverage.
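
JUnit 5's parameterized tests are one way to do this; here's a sketch reusing the classes above (with @CsvFileSource, the rule settings could come from an external file instead):

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

public class DataDrivenDistributionTests extends DistributionTestBase {

    // Run the no-rules scenario with several different ball counts
    @ParameterizedTest
    @ValueSource(ints = {40, 400, 800})
    public void noRulesDistributesEvenly(int numberOfBalls) {
        distributeBalls(numberOfBalls);
        verifyEvenDistribution(numberOfBalls);
    }
}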

Astute readers may have noticed that there are a few holes in my test plan:

  • how would I assert even distribution in a test scenario where no children have rules and the number of balls is not evenly divisible by four?
  • how can I show that a child who doesn't have a rule is still getting the correct number of balls, when there are one or more children with a rule?  For example, if Amy has a rule that she only gets red balls, and Bob, Carol, and Doug have no rules, how can I prove that Bob, Carol and Doug get an even distribution of balls?
  • will the Superball Sorter work with more or fewer than four children?  How would I adjust my test plan for this?
I'll leave it to you to think about handling these things, and perhaps I will tackle them in future posts.

I've had a lot of fun over the last few weeks creating the idea of the Superball Sorter and thinking of ways to test it!  I've started to actually write code for this, and someday when it's finished I will share it with you.
