
Saturday, November 24, 2018

What to Test When There's Not Enough Time to Test

In last week's post, I discussed the various things we should remember to test before we consider our testing "done".  This prompted a question from a reader: "How can I test all these things when there is very limited time for testing?"  In today's agile world, we often don't have as much time as we feel we need to fully test our software's features.  Gone are the days when testers had weeks or months to test the upcoming release.  Because software projects usually take longer than estimated, we may be asked to test things at the last minute, just a day or two before the release.  Today I'll discuss what to test when there's not enough time to test, and I'll also suggest some tips to avoid this problem in the first place.



The Bare Minimum: What to Test When There's Almost No Time

Let's use our hypothetical Superball Sorter as an example.  For those who haven't read my series of posts on this feature, it takes a number of superballs and sorts them among four children using a set of defined rules. What would I do if I were asked to test this feature for the first time, and it was due to be released tomorrow?

1. Test the most basic case

The first thing I would do would be to test the most basic use case of the feature.  In this case, it would be running the Superball Sorter with no rules at all.  I would test this first because it would give me a very clear indication whether the feature was working at all.  If it wasn't, I could raise the alarm right away, giving the developer more time to fix it.
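To make this concrete, here's a minimal pytest sketch of that first test.  The Superball Sorter is hypothetical, so every name here (the superball_sorter module, the SuperballSorter class, and its methods) is an assumption made purely for illustration:

```python
# A sketch of the "most basic case" test: no rules at all, so the balls
# should be dealt out evenly among the four children.
from superball_sorter import SuperballSorter  # hypothetical module and class

CHILDREN = ["Amy", "Bob", "Carol", "Dan"]

def test_no_rules_distributes_balls_evenly():
    sorter = SuperballSorter(children=CHILDREN)  # hypothetical constructor
    sorter.sort(ball_count=20)                   # no rules have been set

    counts = [len(sorter.balls_for(child)) for child in CHILDREN]
    assert counts == [5, 5, 5, 5]
```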

2. Test the most typical customer scenario

In the case of the Superball Sorter, let's say that we've been told by the product owner that in the most typical scenario, two of the children will be assigned a rule, and the rule will be by size rather than color.  So the next test I would run would be to assign one child a rule that she only accepts large balls, and another child a rule that he only accepts small balls.  I would run the sorter with these rules and make sure that the rules were respected.
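Using the same hypothetical interface as above (the Rule helper is also made up), that typical-scenario test might be sketched like this:

```python
# A sketch of the "most typical customer scenario": two children have size
# rules, and we verify that the rules were respected after sorting.
from superball_sorter import Rule, SuperballSorter  # hypothetical module

def test_size_rules_are_respected():
    sorter = SuperballSorter(children=["Amy", "Bob", "Carol", "Dan"])
    sorter.add_rule(Rule(child="Amy", accepts="large"))
    sorter.add_rule(Rule(child="Bob", accepts="small"))
    sorter.sort(ball_count=20)

    assert all(ball.size == "large" for ball in sorter.balls_for("Amy"))
    assert all(ball.size == "small" for ball in sorter.balls_for("Bob"))
```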

3. Run a basic negative test

We all know how frustrating it can be to make a mistake in an online activity, such as filling out a form, and get no clear message about what went wrong.  So the next thing I would test would be a common mistake a user might make, to ensure that I got an appropriate error message.  For the Superball Sorter, I would set four rules that resulted in some balls not being able to be sorted, and I would verify that I got an error message telling me this was the case.
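Here's a sketch of that negative test, again using the made-up interface; the exception type and message are assumptions about how the feature might report the problem:

```python
# A sketch of a basic negative test: rules that leave some balls unsortable
# should produce a clear error rather than failing silently.
import pytest
from superball_sorter import Rule, SuperballSorter, UnsortableBallsError  # hypothetical

def test_unsortable_balls_produce_a_clear_error():
    sorter = SuperballSorter(children=["Amy", "Bob", "Carol", "Dan"])
    for child in ["Amy", "Bob", "Carol", "Dan"]:
        # Every child accepts only large balls, so the small balls can't be sorted.
        sorter.add_rule(Rule(child=child, accepts="large"))

    with pytest.raises(UnsortableBallsError) as excinfo:
        sorter.sort(ball_count=20)
    assert "could not be sorted" in str(excinfo.value)
```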

4. Test with different users or accounts

Just because something is working correctly for one user or account doesn't mean it's going to work correctly for everyone!  Developers sometimes check their work with only one test user if they are in a big hurry to deliver their feature.  So I would make sure to run the Superball Sorter with at least two users, and I would make sure that those users were different from the one the developer used.

After running these four tests, I would be able to say with some certainty that:

  • the feature works at its most basic level
  • a typical customer scenario will work correctly
  • the customer will be notified if there is an error
  • the feature works for more than one user or account

If I had time left over after this testing, I would move on to test more Happy Path scenarios, and then to the other tests I mentioned in last week's post.  

Remember that it will never be perfect, and things will never be completely done

When software is released monthly, weekly, or even daily, there's no way to test everything you want to test.  Even if you could get to everything, there will always be some sneaky bug that slips through.  This is just a fact of life in software development.  The good news is that because software is released so frequently, a bug fix can be released very shortly after the bug is found.  So relax, and don't expect things to be perfect.

Speak up, in person and in writing, if disaster is about to strike

Early in my testing career, I was on a team where we were asked to test a large number of new features for a release in a short amount of time.  When we were asked whether we felt confident in the new release, every single one of us said no.  We each delineated the things we hadn't been able to test yet, and why we were concerned about the risks in those areas.  Unfortunately, management went ahead and released the software anyway, because there was a key customer who was waiting for one of the features.  As a result, the release was a failure and had to be recalled after many customer complaints.  

If you believe that your upcoming software release is a huge mistake, speak up!  Outline the things you haven't tested and some of the worst-case scenarios you can envision.  Document what wasn't tested, so that the key decision-makers in your company can see where the risks are.  If something goes wrong after the release, your documentation can serve as evidence that you had concerns.

Enlist the help of developers and others in your testing

While testers possess a valuable set of skills that help them find bugs quickly, remember that all kinds of other people can run through simple test scenarios.  If everyone on your team understands that you have been given too short an amount of time in which to test, they will be happy to help you out.  If I were asking my teammates to test the Superball Sorter, I might ask one person to test scenarios with just one rule, one person to test scenarios with three rules, and one person to test scenarios with four rules, while I continued to test scenarios with two rules.  In this way, we could test four times as many Happy Path scenarios as I could test by myself.  

Talk with your team to find out how you can start testing earlier

To prevent last-minute testing, try to get involved with feature development sooner in the process.  Attend meetings about how the feature will work, and ask questions about integration with other features and possible feature limitations.  Start putting together a test plan before the feature is ready. Work with your developer to write some automated tests that he or she can use while in development.  Ask your developer to commit and push some of their code so you can test basic scenarios, with the understanding that the feature isn't completely done.  In the case of the Superball Sorter, I could ask the dev to push some code once the sorter was capable of sorting without any rules, just to verify that the balls were being passed to each child evenly.  

Automate as much as possible

In sprint-based development, there's often a lull for testers at the beginning of a sprint while the developers are still working on their assigned features.  This is the perfect time to automate features that you have already tested.  When release day looms, much or all of your regression testing can run automatically, freeing you up to do more exploratory testing on the new features.

As testers, we want our users to have a completely bug-free experience.  Because of that, we always want more time for testing than we are given.  With the strategies above, we can ensure that the most important things are tested and that with each sprint we are automating more tests, freeing up our valuable time.  

Saturday, November 17, 2018

The One Question to Ask to Improve Your Testing Skills

We've all been in this situation: we've tested something, we think it's working great, and after it goes to Production a customer finds something obvious that we missed.  We can't find all the bugs 100% of the time, but we can increase the number of bugs we find with this one simple question:

"What haven't I tested yet?"  

I have asked this question of myself many times; I make a habit of asking it before I move any feature to Done.  It almost always results in my finding a bug.  The conversation with myself usually goes like this:

Good Tester Me:  "What haven't we tested yet?"  "Well, we haven't tested with an Admin user."
Lazy Tester Me: "Why should that make a difference?  This feature doesn't have anything to do with user privileges."
Good Tester Me: "That may be the case, but we should really test it anyway, to be thorough."
Lazy Tester Me: "But I've been testing this feature ALL DAY!  I want to move on to something else."
Good Tester Me: "You know that we always find the bugs in the last things we think of to test.  TEST IT!"

And I'm always happy I did.  Even if I don't find a bug, I have the peace of mind that I tested everything I could think of, and I've gained valuable product knowledge that I can share with others.




When I ask myself this question, here are thirteen follow-up questions I ask:

Did I test with more than one user? 
It seems so obvious, but we are often so embroiled in testing a complicated feature that we don't think to test it with more than our favorite test user.  Even something as simple as the first letter of a last name could be enough to trigger different behavior in a feature.
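One inexpensive way to avoid the favorite-test-user trap is to parametrize the same test over several users.  This is just a sketch; the login helper, dashboard call, and user names are placeholders for whatever your application actually provides:

```python
# Run the same check for several different users, including ones whose last
# names start with different letters.
import pytest
from app_client import log_in, get_dashboard  # hypothetical test helpers

@pytest.mark.parametrize("username", ["amy.adams", "bob.baker", "zoe.zhang"])
def test_dashboard_loads_for_each_user(username):
    session = log_in(username, password="test-password")  # placeholder credentials
    response = get_dashboard(session)
    assert response.status_code == 200
```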

Did I test with different types of users?
Users often come with different privileges.  When I was first starting out in testing, I would often test with an admin user, because it was the easiest thing to do.  I learned a valuable lesson when I missed a bug where a regular user didn't have access to a feature they should have had!

Did I test with more than one account/company? 
For those of us testing B2B applications, we often have customers from different accounts or companies. I missed a bug once where the company ID started with a 0, and the new feature hadn't been coded to handle that.

Did I test this on mobile?
Anyone who has ever tested an application on mobile or tablet knows that it can behave very differently from what is seen on a laptop.  You don't want your users to be unable to click a "Submit" button because it's off-screen and can't be accessed.

Did I test this on more than one browser? 
Browsers have more parity in behavior than they did a few years ago, but even so, you will occasionally be surprised by a link that works in some browsers but not others.

Did I try resizing the browser?
I often forget to do this.  One thing I've discovered when resizing is that the scroll bar can disappear, making it impossible for users to scroll through records.

Did I test with the Back button? 
This seems so simple, but a lot of bugs can crop up here!  Also be sure to test the Cancel button on a form.

Is this feature on any other pages, and have we tested on those pages? 
This one recently tripped up my team.  We forgot to test our feature on a new page that's currently in beta.  Be sure to mentally run through all the pages in your application and ask yourself if your feature will be on those pages.  If you have a really large application, you may want to ask testers from other teams in your organization.

Did I test to make sure that this feature works with other features? 
Always think about combining your features.  Will your search feature work with your notification feature?  Will your edit feature work with your sorting feature? And so on.

Have I run negative tests on this feature? 
This is one that's easy to forget when you are testing a complicated feature.  You may be so focused on getting your application configured correctly for testing that you don't think about what happens when bad data is passed in.  For UI tests, be sure to test the limits of every text field, and verify that the user gets appropriate error messages.  For API tests, be sure to pass invalid data in the request body, and try using bad query parameters.  Verify that you get 400-level responses for invalid requests rather than a generic 500 response.
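For the API side, a negative test might look something like this sketch, which uses Python's requests library; the endpoint, payload, and error-message shape are placeholders:

```python
# Invalid input should produce a specific 400-level response, not a generic 500.
import requests

BASE_URL = "https://example.com/api/contacts"  # placeholder endpoint

def test_invalid_email_returns_400_not_500():
    bad_contact = {"name": "Test User", "email": "not-an-email"}
    response = requests.post(BASE_URL, json=bad_contact, timeout=10)

    assert response.status_code == 400, f"Expected 400, got {response.status_code}"
    # Assumes the API returns an "error" field describing the bad input.
    assert "email" in response.json().get("error", "").lower()

def test_bad_query_parameter_returns_a_4xx():
    response = requests.get(BASE_URL, params={"page": "-1"}, timeout=10)
    assert 400 <= response.status_code < 500
```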

Have I run security tests on this feature?
It's a sad fact of life that not all of our end users will be legitimate users of our application.  There will be bad actors looking for security flaws to exploit.  This is especially true for financial applications and ones with a lot of personally identifiable information (PII).  Protect your customers by running security scans on your features.

Have I checked the back-end database to make sure that data is being saved as I expected?
When you fill out and submit a form in your application, a success message is not necessarily an indication that the data's been saved.  There could be a bug in your software that causes an error when writing to the database.  Even if the data has been saved, it could have been saved inaccurately, or there may be an error when retrieving the data.  For example, a phone number might be saved with parentheses and dashes, but when the data is retrieved the front-end doesn't know how to parse those symbols, so the phone number isn't displayed.  Always check your back-end data for accuracy.
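Here's a minimal sketch of that kind of back-end check, using SQLite as a stand-in for the real database; the table name, column name, and expected format are all assumptions:

```python
# After submitting the form, read the value straight from the database and
# confirm it was saved in the format the front end expects to display.
import sqlite3

def get_saved_phone(customer_id: int) -> str:
    conn = sqlite3.connect("app.db")  # stand-in for your real database connection
    try:
        row = conn.execute(
            "SELECT phone FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return row[0] if row else ""
    finally:
        conn.close()

def test_phone_number_is_saved_in_expected_format():
    # ...submit the form through the UI or API here...
    saved = get_saved_phone(customer_id=42)
    # The expected format is an assumption for this example.
    assert saved == "5555551234", f"Unexpected format in the database: {saved!r}"
```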

How is the end user going to use this feature?  Have I run through that scenario?
It's so easy to get wrapped up in our day-to-day tasks of testing, writing automation, and working with our team that we forget about the end user of our application.  You should ALWAYS understand how your user will be using your feature.  Think about what journey they will take.  For example, in an e-commerce app, if you're testing that you can pay with PayPal, make sure you also run through a complete journey where you add a product to your cart, go to the checkout page, and then pay with PayPal.

Missing a bug that then makes it to Production can be humbling!  But it happens to everyone.  The good news is that every time this happens, we learn a new question to ask ourselves before we stop testing, making it more likely that we'll catch that bug next time.

What questions do you ask yourself before you call a feature Done?  Let me know in the comments section!  

Saturday, November 10, 2018

Five Strategies for Managing Test Automation Data

Has this ever happened to you?  You arrive at work in the morning to find that many of your nightly automated tests have failed.  Upon investigation, you discover that your test user has been edited or deleted.  Your automation didn't find a bug, and your test isn't flaky; it simply didn't work because the data you were expecting wasn't there.  In this week's post, I'll take a look at five different strategies for managing test data, and when you might use each.


Strategy One: Using data that is already present in the system

This is the easiest strategy, because there's nothing to do for setup, but it is also the riskiest.  Even if you label your user with "DO NOT REMOVE", there's always a chance that some absent-minded person will delete it.  

However, this strategy can work well if you are just making simple requests.  For example, if you are testing getting a list of contacts, you can assert that contacts were returned.  For the purposes of your test, it doesn't matter what contacts were returned; you just need to know that some contacts were returned.  
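A test like that might be as short as this sketch, using Python's requests library (the endpoint is a placeholder):

```python
# Strategy One: rely on whatever data is already in the system and assert
# only on its shape, not its exact contents.
import requests

def test_get_contacts_returns_at_least_one_contact():
    response = requests.get("https://example.com/api/contacts", timeout=10)
    assert response.status_code == 200
    assert len(response.json()) > 0  # we only care that *some* contacts came back
```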

Strategy Two: Updating or creating data as a setup step

Most automated test platforms offer the ability to create a setup step that either runs before each test or before a suite of tests.  This strategy works well if it's easy to create or update the record you want to use.  I have a suite of automated API tests that test adding and updating a user's contact information.  Before the tests begin, I run requests that delete the user's email addresses and phone numbers.  
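Here's a minimal sketch of that kind of setup step as a pytest fixture; the endpoints are placeholders for whatever your API actually exposes:

```python
# Strategy Two: before each test, delete the test user's email addresses and
# phone numbers so every test starts from a known, empty state.
import pytest
import requests

USER_URL = "https://example.com/api/users/test-user"  # placeholder

@pytest.fixture(autouse=True)
def clean_contact_info():
    for resource in ("emails", "phone-numbers"):
        response = requests.delete(f"{USER_URL}/{resource}", timeout=10)
        assert response.status_code in (200, 204), "setup failed, so the test can't run"
    yield  # the test itself runs here
```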

The downside to this strategy is that sometimes my requests to delete the user's contact information fail.  When this happens, my tests fail.  Also, updating data as a setup step adds more time to the test suite, which is something to consider when you need fast results.

Strategy Three:  Using test steps to create and delete data

This is a good strategy when you are testing CRUD (Create, Read, Update, Delete) operations, because you can use the actual tests to create and delete your test data.  If I were testing an API for a contact list, for example, I would have my first test create the contact and assert that the contact was created.  Then I would update the contact and assert that the contact was updated.  Finally, I would delete the contact and assert that the contact was deleted.  There is no impact to the database, because I am both creating and destroying the data.  

However, if one of the tests fails, it's likely the others will as well.  If for some reason the application was unable to create the contact, the second test would fail, because there would be nothing to update.  And the third test would fail because the record would not exist to be deleted.  So even though there was only one bug, you'd have three test failures.
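To make both the pattern and the caveat concrete, here's a sketch of three ordered CRUD tests against a hypothetical contacts API; the contact's id is passed between tests through a module-level variable, so if the create step fails, the update and delete steps fail too:

```python
# Strategy Three: the tests themselves create, update, and delete the data,
# leaving the database the way they found it.
import requests

BASE_URL = "https://example.com/api/contacts"  # placeholder endpoint
contact_id = None  # shared between the ordered tests below

def test_create_contact():
    global contact_id
    response = requests.post(BASE_URL, json={"name": "Pat Doe"}, timeout=10)
    assert response.status_code == 201
    contact_id = response.json()["id"]

def test_update_contact():
    response = requests.put(f"{BASE_URL}/{contact_id}", json={"name": "Pat Q. Doe"}, timeout=10)
    assert response.status_code == 200

def test_delete_contact():
    response = requests.delete(f"{BASE_URL}/{contact_id}", timeout=10)
    assert response.status_code in (200, 204)
```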

Strategy Four:  Taking a snapshot of the database and restoring it after the tests

This strategy is helpful when your tests are doing a lot of data manipulation.  You take a snapshot of the database as a setup step for the test suite.  Then you can manipulate all the data you want, and as a cleanup step, you restore the database to its original state.  The advantage to this method is that you don't need to write a lot of steps to undo all the changes to your data.  

But this method relies on having the right data there to begin with.  For instance, if you are planning to do a lot of processing on John Smith's records, and someone happened to delete John Smith before you ran your tests, taking a snapshot of the database won't help; John Smith simply won't be there to test on.  It's also possible that taking a snapshot will be time-consuming, depending on the size of your database.
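Here's one way the snapshot and restore could be wired into a suite-level fixture.  This sketch assumes a PostgreSQL database and that the pg_dump and pg_restore command-line tools are available; the database name and snapshot path are placeholders:

```python
# Strategy Four: dump the database once before the suite, restore it afterwards.
import subprocess
import pytest

DB_NAME = "qa_database"             # placeholder
SNAPSHOT = "/tmp/qa_snapshot.dump"  # placeholder

@pytest.fixture(scope="session", autouse=True)
def database_snapshot():
    # Setup: take a snapshot of the entire database in custom format.
    subprocess.run(["pg_dump", "-Fc", "-f", SNAPSHOT, DB_NAME], check=True)
    yield  # every test in the session runs here
    # Teardown: put the database back exactly the way it was.
    subprocess.run(["pg_restore", "--clean", "-d", DB_NAME, SNAPSHOT], check=True)
```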

Strategy Five: Creating a mini-database with the data you need for your tests

In this strategy, you spin up your own database with only the data you need for testing, and when your tests have finished, you destroy the database.  If you are using Microsoft technologies, you could do this with their DACPAC functionality; or if you are using Docker, you could create your own database as part of your Docker instance.  With this strategy, there is no possibility of your data ever being incorrect, because it is always brand-new and exactly how you configured it.  Also, because your database will be smaller than your real QA environment database, your tests will likely execute more quickly.  

The downside to this strategy is that it requires a lot of preparation.  You may have to do a lot of research on how your data tables relate to each other in order to determine what data you need.  And you'll need to do a fair amount of coding or configuration to set up the creation and destruction steps.  But in a situation where you want to be sure that your data is right for testing, such as when a developer has just committed new code, this solution is ideal.
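As a rough illustration of the Docker approach, a throwaway database could be started and torn down from a session-level pytest fixture.  The image, port, and seeding step here are placeholders, and in a real suite you would wait for the database to be ready and load your seed data before yielding:

```python
# Strategy Five: spin up a brand-new database in a container just for this
# test run, then throw it away afterwards.
import subprocess
import pytest

@pytest.fixture(scope="session", autouse=True)
def mini_database():
    container_id = subprocess.run(
        ["docker", "run", "-d", "--rm",
         "-e", "POSTGRES_PASSWORD=test",
         "-p", "5433:5432", "postgres:15"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    # (Wait for the database to accept connections and load seed data here.)
    yield
    # Stopping the container also removes it, because of the --rm flag.
    subprocess.run(["docker", "stop", container_id], check=True)
```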

All of these strategies can be useful, depending on your testing needs.  When you evaluate how accurate you need your data to be, how likely it is that it will be altered by someone else, how quickly you need the tests to run, and how much you can tolerate the occasional failure, it will be clear which strategy to choose.  


Saturday, November 3, 2018

What to Put in a Smoke Test

The term "smoke test" is usually used to describe a suite of basic tests that verify that all the major features of an application are working.  Some use the smoke test to determine whether a build is stable and ready for further testing.  I usually use a smoke test as the final check in a deploy to production.  In today's post, I'll share a cautionary tale about what can happen if you don't have a smoke test.  Then I'll continue that tale and talk about how smoke tests can go wrong.



Early in my testing career, I worked for a company that had a large suite of manual regression tests, but no smoke test.  Each software release was difficult, because it was impossible to run all the regression tests in a timely fashion.  With each release, we picked which tests we thought would be most relevant to the software changes and executed those tests.

One day, in between releases, we heard that there had been a customer complaint that our Global Search feature wasn't working.  We investigated and found that the customer was correct.  We investigated further and discovered that the feature hadn't worked in weeks, and none of us had noticed.  This was quite embarrassing for our QA team!

To make sure that this kind of embarrassment never happened again, one of our senior QA engineers created a smoke test to run whenever there was a release to production.  It included all the major features, and could be run fairly quickly.  We felt a lot better about our releases after that.

However, the tester who created the test kept adding test steps to the smoke test.  Every time a new feature was created, a step was added to the smoke test.  If we found a new bug in a feature, even if it was a small one, a step checking for the bug was added to the smoke test.  As the months went on, the smoke test took longer and longer to execute and became more and more complicated.  Eventually the smoke test itself took so much time that we didn't have time to run our other regression tests.

Clearly there needs to be a happy medium between having no smoke test at all, and having one that takes so long to run that it's no longer a smoke test.  In order to decide what goes in a smoke test, I suggest asking these three questions:

1. What would absolutely embarrass us if it were broken in this application?

Let's use an example of an e-commerce website to consider this question.  For this type of website, it would be embarrassing or even catastrophic if a customer couldn't:
  • search for an item they were looking for
  • add an item to their cart
  • log in to their account
  • edit their information
So at the very least, a smoke test for this site should include a test for each of these features.

2. Is this a main feature of the application?

Examples of features in an e-commerce website that would be main features, but less crucial ones, might be:
  • wish list functionality
  • product reviews
  • recommendations for the user
If these features were broken, it wouldn't be catastrophic, but they are features that customers expect.  So a test for each one should be added.

3. If there was a bug here, would it stop the application from functioning?

No one wants to have bugs in their application!  But some bugs are more important than others.  If the e-commerce website had an issue where their "Add to Cart" button was off-center, it might look funny, but it wouldn't stop customers from shopping.  

But a bug where a customer couldn't remove an item from their cart might keep them from checking out with the items they want, which would affect sales.  So a test to check that items can be removed from a cart would be important in a smoke test.

With these questions in mind, here is an example of a smoke test that could be created for an e-commerce site:

1. Log in
2. Verify product recommendations are present
3. Do a search for a product
4. Read a review of a product
5. Add an item to the cart
6. Add a second item to the cart and then delete it
7. Edit customer information
8. Check out
9. Write a review

A smoke test like this wouldn't take very long to execute manually, and it would also be easy to automate.  
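For example, an automated version might look something like this sketch; the page-object helpers (Site and its methods) are hypothetical stand-ins for whatever UI automation framework you use:

```python
# A smoke test journey that walks through the nine steps above in order.
from ecommerce_pages import Site  # hypothetical page-object layer

def test_smoke_checkout_journey():
    site = Site.open()
    site.log_in("smoke-test-user", "not-a-real-password")  # 1. Log in
    assert site.home.recommendations_visible()             # 2. Recommendations present
    results = site.search("blue widget")                   # 3. Search for a product
    assert results.count > 0
    assert results.first.reviews                           # 4. Read a review
    site.cart.add(results.first)                           # 5. Add an item to the cart
    site.cart.add(results.second)                          # 6. Add a second item, then delete it
    site.cart.remove(results.second)
    site.account.update_phone("555-555-1234")              # 7. Edit customer information
    order = site.checkout()                                # 8. Check out
    assert order.confirmed
    site.write_review(results.first, "Works great!")       # 9. Write a review
```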

Whenever new features are added to the application, you should ask yourself the first two questions to determine whether a test for the feature should be added to the smoke test.  And whenever a bug is found in the product, you should ask yourself the third question to determine whether a test for that issue should be added to the smoke test.

Because we want our applications to be of high quality, it's easy to fall into the trap of wanting to test everything, all the time.  But that can create a test burden that keeps us so busy that we don't have time for anything else.  Creating a simple, reliable smoke test can free us up for other activities, such as doing exploratory testing on new features or creating nightly automated tests.  
