Saturday, November 10, 2018

Five Strategies for Managing Test Automation Data

Has this ever happened to you?  You arrive at work in the morning to find that many of your nightly automated tests have failed.  Upon investigation, you discover that your test user has been edited or deleted.  Your automation didn't find a bug, and your test isn't flaky; it simply didn't work because the data you were expecting wasn't there.  In this week's post, I'll take a look at five different strategies for managing test data, and when you might use each.


Strategy One: Using data that is already present in the system

This is the easiest strategy, since there's nothing to do for setup, but it is also the riskiest.  Even if you label your user with "DO NOT REMOVE", there's always a chance that some absent-minded person will delete it.

However, this strategy can work well if you are just making simple requests.  For example, if you are testing a request that gets a list of contacts, you can assert that contacts were returned.  For the purposes of your test, it doesn't matter which contacts were returned; you just need to know that some contacts were returned.
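
As a minimal sketch of that kind of check, here's what it might look like in Python with pytest and the requests library; the base URL and the /contacts endpoint are hypothetical stand-ins for your own API:

import requests

BASE_URL = "https://qa.example.com/api"  # hypothetical QA environment URL

def test_get_contacts_returns_results():
    # It doesn't matter which contacts come back, only that some do
    response = requests.get(f"{BASE_URL}/contacts", timeout=10)
    assert response.status_code == 200
    assert len(response.json()) > 0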

Strategy Two: Updating or creating data as a setup step

Most automated test platforms offer the ability to create a setup step that either runs before each test or before a suite of tests.  This strategy works well if it's easy to create or update the record you want to use.  I have a suite of automated API tests that test adding and updating a user's contact information.  Before the tests begin, I run requests that delete the user's email addresses and phone numbers.  
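
A setup step like that can live in a suite-level fixture.  The sketch below assumes pytest and requests, and the user ID and delete endpoints are placeholders for whatever your own API actually exposes:

import pytest
import requests

BASE_URL = "https://qa.example.com/api"   # hypothetical QA environment URL
USER_ID = "automation-test-user"          # hypothetical dedicated test user

@pytest.fixture(scope="session", autouse=True)
def clear_contact_info():
    # Runs once before the suite: delete any email addresses and phone numbers
    # left over from earlier runs so the add/update tests start from a known state
    for resource in ("emails", "phone-numbers"):
        response = requests.delete(f"{BASE_URL}/users/{USER_ID}/{resource}", timeout=10)
        response.raise_for_status()   # if the cleanup fails, the tests will fail too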

The downside to this strategy is that sometimes my requests to delete the user's contact information fail.  When this happens, my tests fail.  Also, updating data as a setup step adds more time to the test suite, which is something to consider when you need fast results.

Strategy Three:  Using test steps to create and delete data

This is a good strategy when you are testing CRUD (Create, Read, Update, Delete) operations, because you can use the actual tests to create and delete your test data.  If I were testing an API for a contact list, for example, I would have my first test create the contact and assert that the contact was created.  Then I would update the contact and assert that the contact was updated.  Finally, I would delete the contact and assert that the contact was deleted.  There is no impact to the database, because I am both creating and destroying the data.
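
Here's a rough sketch of that flow.  The three tests run in file order and share the ID of the record they create; the endpoints, payloads, and status codes are assumptions about a hypothetical contacts API:

import requests

BASE_URL = "https://qa.example.com/api"   # hypothetical QA environment URL
state = {}                                # shared by the tests below, which run in file order

def test_create_contact():
    response = requests.post(f"{BASE_URL}/contacts",
                             json={"name": "Test Contact", "email": "test@example.com"},
                             timeout=10)
    assert response.status_code == 201
    state["id"] = response.json()["id"]

def test_update_contact():
    response = requests.put(f"{BASE_URL}/contacts/{state['id']}",
                            json={"name": "Updated Contact"}, timeout=10)
    assert response.status_code == 200

def test_delete_contact():
    response = requests.delete(f"{BASE_URL}/contacts/{state['id']}", timeout=10)
    assert response.status_code == 204
    # Verify the record really is gone
    response = requests.get(f"{BASE_URL}/contacts/{state['id']}", timeout=10)
    assert response.status_code == 404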

However, if one of the tests fails, it's likely the others will as well.  If for some reason the application was unable to create the contact, the second test would fail, because there would be nothing to update.  And the third test would fail because the record would not exist to be deleted.  So even though there was only one bug, you'd have three test failures.

Strategy Four:  Taking a snapshot of the database and restoring it after the tests

This strategy is helpful when your tests are doing a lot of data manipulation.  You take a snapshot of the database as a setup step for the test suite.  Then you can manipulate all the data you want, and as a cleanup step, you restore the database to its original state.  The advantage to this method is that you don't need to write a lot of steps to undo all the changes to your data.  
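
As one possible sketch, the session-level fixture below assumes a PostgreSQL database and shells out to pg_dump and pg_restore; the database name and snapshot path are placeholders:

import subprocess
import pytest

DB_NAME = "qa_contacts_db"                # hypothetical QA database name
SNAPSHOT_FILE = "/tmp/qa_snapshot.dump"   # where the snapshot is written

@pytest.fixture(scope="session", autouse=True)
def database_snapshot():
    # Setup: capture the current state of the database before any tests run
    subprocess.run(["pg_dump", "-Fc", "-f", SNAPSHOT_FILE, DB_NAME], check=True)
    yield   # the tests are free to manipulate data here
    # Cleanup: restore the database to the state it was in before the suite
    subprocess.run(["pg_restore", "--clean", "-d", DB_NAME, SNAPSHOT_FILE], check=True)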

But this method relies on having the right data there to begin with.  For instance, if you are planning to do a lot of processing on John Smith's records, and someone happened to delete John Smith before you ran your tests, taking a snapshot of the database won't help; John Smith simply won't be there to test on.  It's also possible that taking a snapshot will be time-consuming, depending on the size of your database.

Strategy Five: Creating a mini-database with the data you need for your tests

In this strategy, you spin up your own database with only the data you need for testing, and when your tests have finished, you destroy the database.  If you are using Microsoft technologies, you could do this with their DACPAC functionality; or if you are using Docker, you could create your own database as part of your Docker instance.  With this strategy, there is no possibility of your data ever being incorrect, because it is always brand-new and exactly how you configured it.  Also, because your database will be smaller than your real QA environment database, your tests will likely execute more quickly.
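
Here's one rough way the Docker version might look, assuming Docker is installed and using the official postgres image; the container name, port mapping, and seed script path are all placeholders for your own setup:

import subprocess
import time
import pytest

CONTAINER_NAME = "test-contacts-db"   # hypothetical throwaway container

@pytest.fixture(scope="session", autouse=True)
def mini_database():
    # Start a brand-new database containing only the data the tests need;
    # any .sql file mounted into docker-entrypoint-initdb.d runs at startup
    subprocess.run(
        ["docker", "run", "-d", "--rm", "--name", CONTAINER_NAME,
         "-e", "POSTGRES_PASSWORD=test",
         "-p", "5433:5432",
         "-v", "/path/to/seed.sql:/docker-entrypoint-initdb.d/seed.sql",
         "postgres:11"],
        check=True,
    )
    time.sleep(10)   # crude wait for the database to finish initializing
    yield
    # Destroy the database when the suite is done (--rm removes the container on stop)
    subprocess.run(["docker", "stop", CONTAINER_NAME], check=True)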

The downside to this strategy is that it requires a lot of preparation.  You may have to do a lot of research on how your data tables relate to each other in order to determine what data you need.  And you'll need to do a fair amount of coding or configuration to set up the creation and destruction steps.  But in a situation where you want to be sure that your data is right for testing, such as when a developer has just committed new code, this solution is ideal.

All of these strategies can be useful, depending on your testing needs.  When you evaluate how accurate you need your data to be, how likely it is that it will be altered by someone else, how quickly you need the tests to run, and how much you can tolerate the occasional failure, it will be clear which strategy to choose.  


Saturday, November 3, 2018

What to Put in a Smoke Test

The term "smoke test" is usually used to describe a suite of basic tests that verify that all the major features of an application are working.  Some use the smoke test to determine whether a build is stable and ready for further testing.  I usually use a smoke test as the final check in a deploy to production.  In today's post, I'll share a cautionary tale about what can happen if you don't have a smoke test.  Then I'll continue that tale and talk about how smoke tests can go wrong.



Early in my testing career, I worked for a company that had a large suite of manual regression tests, but no smoke test.  Each software release was difficult, because it was impossible to run all the regression tests in a timely fashion.  With each release, we picked which tests we thought would be most relevant to the software changes and executed those tests.

One day, in between releases, we heard that there had been a customer complaint that our Global Search feature wasn't working.  We investigated and found that the customer was correct.  We investigated further and discovered that the feature hadn't worked in weeks, and none of us had noticed.  This was quite embarrassing for our QA team!

To make sure that this kind of embarrassment never happened again, one of our senior QA engineers created a smoke test to run whenever there was a release to production.  It included all the major features, and could be run fairly quickly.  We felt a lot better about our releases after that.

However, the tester who created the test kept adding test steps to the smoke test.  Every time a new feature was created, a step was added to the smoke test.  If we found a new bug in a feature, even if it was a small one, a step checking for the bug was added to the smoke test.  As the months went on, the smoke test took longer and longer to execute and became more and more complicated.  Eventually the smoke test itself took so much time that we didn't have time to run our other regression tests.

Clearly there needs to be a happy medium between having no smoke test at all, and having one that takes so long to run that it's no longer a smoke test.  In order to decide what goes in a smoke test, I suggest asking these three questions:

1. What would absolutely embarrass us if it were broken in this application?

Let's use an example of an e-commerce website to consider this question.  For this type of website, it would be embarrassing or even catastrophic if a customer couldn't:
  • search for an item they were looking for
  • add an item to their cart
  • log in to their account
  • edit their information
So at the very least, a smoke test for this site should include a test for each of these features.

2. Is this a main feature of the application?

Examples of features in an e-commerce website that would be main features, but less crucial ones, might be:
  • wish list functionality
  • product reviews
  • recommendations for the user
If these features were broken, it wouldn't be catastrophic, but they are features that customers expect.  So a test for each one should be added.

3. If there were a bug here, would it stop the application from functioning?

No one wants to have bugs in their application!  But some bugs are more important than others.  If the e-commerce website had an issue where its "Add to Cart" button was off-center, it might look funny, but it wouldn't stop customers from shopping.

But a bug where a customer couldn't remove an item from their cart might keep them from checking out with the items they want, which would affect sales.  So a test to check that items can be removed from a cart would be important in a smoke test.

With these questions in mind, here is an example of a smoke test that could be created for an e-commerce site:

1. Log in
2. Verify product recommendations are present
3. Do a search for a product
4. Read a review of a product
5. Add an item to the cart
6. Add a second item to the cart and then delete it
7. Edit customer information
8. Check out
9. Write a review

A smoke test like this wouldn't take very long to execute manually, and it would also be easy to automate.  
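
For example, a couple of the steps above could be automated with a quick UI check like the sketch below, which uses Python with Selenium WebDriver; the URL, credentials, and element IDs are all placeholders for the real site's locators:

from selenium import webdriver
from selenium.webdriver.common.by import By

SITE_URL = "https://shop.example.com"   # hypothetical e-commerce site

def test_smoke_login_and_search():
    # Covers smoke test steps 1 and 3: log in, then search for a product
    driver = webdriver.Chrome()
    try:
        driver.get(SITE_URL)
        driver.find_element(By.ID, "username").send_keys("smoke-test-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "login-button").click()
        assert "My Account" in driver.page_source

        driver.find_element(By.ID, "search-box").send_keys("blue widget")
        driver.find_element(By.ID, "search-button").click()
        assert "results" in driver.page_source.lower()
    finally:
        driver.quit()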

Whenever new features are added to the application, you should ask yourself the first two questions to determine whether a test for the feature should be added to the smoke test.  And whenever a bug is found in the product, you should ask yourself the third question to determine whether a test for that issue should be added to the smoke test.

Because we want our applications to be of high quality, it's easy to fall into the trap of wanting to test everything, all the time.  But that can create a test burden that keeps us so busy that we don't have time for anything else.  Creating a simple, reliable smoke test can free us up for other activities, such as doing exploratory testing on new features or creating nightly automated tests.  
