
Saturday, November 17, 2018

The One Question to Ask to Improve Your Testing Skills

We've all been in this situation: we've tested something, we think it's working great, and after it goes to Production a customer finds something obvious that we missed.  We can't find all the bugs 100% of the time, but we can increase the number of bugs we find with this one simple question:

"What haven't I tested yet?"  

I have asked this question of myself many times; I make a habit of asking it before I move any feature to Done.  It almost always results in my finding a bug.  The conversation with myself usually goes like this:

Good Tester Me:  "What haven't we tested yet?"  "Well, we haven't tested with an Admin user."
Lazy Tester Me: "Why should that make a difference?  This feature doesn't have anything to do with user privileges."
Good Tester Me: "That may be the case, but we should really test it anyway, to be thorough."
Lazy Tester Me: "But I've been testing this feature ALL DAY!  I want to move on to something else."
Good Tester Me: "You know that we always find the bugs in the last things we think of to test.  TEST IT!"

And I'm always happy I did.  Even if I don't find a bug, I have the peace of mind that I tested everything I could think of, and I've gained valuable product knowledge that I can share with others.

When I ask myself this question, here are thirteen follow-up questions I ask:

Did I test with more than one user? 
It seems so obvious, but we often get so absorbed in testing a complicated feature that we don't think to test it with more than our favorite test user.  Even something as simple as the first letter of a last name could be enough to trigger different behavior in a feature.

Did I test with different types of users?
Users often come with different privileges.  When I was first starting out in testing, I would often test with an admin user, because it was the easiest thing to do.  Discovering that I'd missed a bug where a regular user couldn't access a feature they should have been able to use taught me a valuable lesson!
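
To make this concrete, here's a rough pytest sketch that covers both of the first two questions by parametrizing over a few users with different roles and last names.  Every URL, username, password, and endpoint below is made up for illustration; your own app's login flow will look different.

    # test_user_roles.py -- run with: pytest test_user_roles.py
    import pytest
    import requests

    BASE_URL = "https://example.test"  # hypothetical application URL

    # Each tuple: (username, password, role, should_see_admin_reports)
    USERS = [
        ("alice.admin", "pw-alice", "admin", True),
        ("bob.regular", "pw-bob", "regular", False),
        ("zoe.regular", "pw-zoe", "regular", False),  # last name starts with Z, not A
    ]

    @pytest.mark.parametrize("username,password,role,can_see_reports", USERS)
    def test_reports_page_respects_role(username, password, role, can_see_reports):
        session = requests.Session()
        # Hypothetical login endpoint; replace with your app's real auth flow.
        login = session.post(f"{BASE_URL}/api/login",
                             json={"username": username, "password": password})
        assert login.status_code == 200, f"{role} user could not log in"

        response = session.get(f"{BASE_URL}/admin/reports")
        if can_see_reports:
            assert response.status_code == 200
        else:
            # Regular users should be blocked, not shown a server error.
            assert response.status_code in (401, 403)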

Did I test with more than one account/company? 
For those of us testing B2B applications, we often have customers from different accounts or companies. I missed a bug once where the company ID started with a 0, and the new feature hadn't been coded to handle that.
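
A tiny sketch of a check for this, assuming a made-up companies endpoint and field name; the point is simply to include IDs with leading zeros in your test data.

    # test_company_ids.py -- the endpoint and field names are invented.
    import pytest
    import requests

    BASE_URL = "https://example.test"  # hypothetical application URL

    @pytest.mark.parametrize("company_id", ["1234", "0042", "0000017"])
    def test_company_lookup_keeps_leading_zeros(company_id):
        # If the back end parses the ID as an integer, "0042" can silently
        # become "42" and the lookup (or the stored record) will be wrong.
        response = requests.get(f"{BASE_URL}/api/companies/{company_id}")
        assert response.status_code == 200
        assert response.json()["companyId"] == company_id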

Did I test this on mobile?
Anyone who has ever tested an application on a mobile device or tablet knows that it can behave very differently than it does on a laptop.  You don't want your users to be unable to click a "Submit" button because it has been pushed off-screen.
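
Here's one way to sketch that check with Selenium, using Chrome's mobile emulation to shrink the viewport to phone size.  The page URL and element ID are invented for illustration.

    # test_mobile_submit.py -- requires Chrome and the selenium package.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_submit_button_visible_on_small_screen():
        options = webdriver.ChromeOptions()
        # Emulate a 375x812 phone-sized viewport (the device metrics are arbitrary).
        options.add_experimental_option(
            "mobileEmulation",
            {"deviceMetrics": {"width": 375, "height": 812, "pixelRatio": 3.0}},
        )
        driver = webdriver.Chrome(options=options)
        try:
            driver.get("https://example.test/signup")        # hypothetical form page
            button = driver.find_element(By.ID, "submit")    # hypothetical element ID
            # The button should be on screen, not pushed out of view.
            assert button.is_displayed()
        finally:
            driver.quit()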

Did I test this on more than one browser?
Browsers have more parity in behavior than they did a few years ago, but even so, you will occasionally be surprised by a link that works in some browsers but not others.
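
Here's a sketch of how I'd parametrize a quick Happy Path check over two browsers with pytest and Selenium.  It assumes Chrome and Firefox are installed locally, and the "Help" link is just a stand-in for whatever link you care about.

    # test_cross_browser.py -- assumes Chrome and Firefox are available locally.
    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    @pytest.fixture(params=["chrome", "firefox"])
    def driver(request):
        # Start a browser per test, one run for each parametrized browser.
        if request.param == "chrome":
            drv = webdriver.Chrome()
        else:
            drv = webdriver.Firefox()
        yield drv
        drv.quit()

    def test_help_link_works_in_every_browser(driver):
        driver.get("https://example.test")                       # hypothetical page
        driver.find_element(By.LINK_TEXT, "Help").click()        # hypothetical link
        assert "/help" in driver.current_url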

Did I try resizing the browser?
I often forget to do this.  One thing I've discovered when resizing is that the scroll bar can disappear, making it impossible for users to scroll through records.
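
Here's a rough Selenium sketch of that check.  It assumes a hypothetical records page whose list is long enough to need a scroll bar; if the list can't actually be scrolled after the resize, the assertion fails.

    # test_resize_scroll.py -- a rough check that a records list stays scrollable.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_records_list_scrolls_after_resize():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.test/records")               # hypothetical page
            driver.set_window_size(500, 400)                          # shrink the window
            container = driver.find_element(By.ID, "records-list")    # hypothetical ID
            # Scroll the container with JavaScript and confirm it actually moved;
            # if scrolling is broken, scrollTop will stay at 0.
            scrolled = driver.execute_script(
                "arguments[0].scrollTop = 500; return arguments[0].scrollTop;",
                container,
            )
            assert scrolled > 0, "records list did not scroll after resize"
        finally:
            driver.quit()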

Did I test with the Back button?
This seems so simple, but a lot of bugs can crop up here!  Also be sure to test the Cancel button on a form.
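
A minimal sketch, assuming a made-up two-step order form; the URL and element IDs are invented.

    # test_back_button.py -- sketch of a Back-button check on a two-step form.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_back_button_returns_to_step_one():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.test/order")                    # hypothetical page
            driver.find_element(By.ID, "quantity").send_keys("3")       # hypothetical field
            driver.find_element(By.ID, "next").click()                  # go to step two
            driver.back()  # the browser Back button
            # The app should land back on step one without an error page
            # (and ideally without losing what the user already typed).
            assert driver.find_element(By.ID, "quantity").is_displayed()
        finally:
            driver.quit()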

Is this feature on any other pages, and have we tested on those pages? 
This one recently tripped up my team.  We forgot to test our feature on a new page that's currently in beta.  Be sure to mentally run through all the pages in your application and ask yourself if your feature will be on those pages.  If you have a really large application, you may want to ask testers from other teams in your organization.
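
One cheap way to guard against this is a parametrized check that the feature shows up on every page where it belongs.  The page paths and element ID below are placeholders.

    # test_feature_on_all_pages.py -- check the feature appears everywhere it should.
    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Every page in the app that is supposed to show the new export button.
    PAGES = ["/dashboard", "/reports", "/reports-beta"]   # hypothetical paths

    @pytest.mark.parametrize("path", PAGES)
    def test_export_button_present(path):
        driver = webdriver.Chrome()
        try:
            driver.get(f"https://example.test{path}")
            assert driver.find_element(By.ID, "export-button").is_displayed()
        finally:
            driver.quit()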

Did I test to make sure that this feature works with other features? 
Always think about combining your features.  Will your search feature work with your notification feature?  Will your edit feature work with your sorting feature? And so on.
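
For example, here's a quick API-level sketch that combines a search feature with a sorting feature in a single request; the endpoint and query parameters are invented for illustration.

    # test_feature_combinations.py -- combine search and sorting in one request.
    import requests

    BASE_URL = "https://example.test"   # hypothetical API

    def test_search_results_can_be_sorted():
        # The query parameters "q" and "sort" are made up for this sketch.
        response = requests.get(f"{BASE_URL}/api/items",
                                params={"q": "widget", "sort": "price"})
        assert response.status_code == 200
        items = response.json()["items"]
        # Every result should match the search term...
        assert all("widget" in item["name"].lower() for item in items)
        # ...and the results should come back sorted by price.
        prices = [item["price"] for item in items]
        assert prices == sorted(prices)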

Have I run negative tests on this feature? 
This is one that's easy to forget when you are testing a complicated feature.  You may be so focused on getting your application configured correctly for testing that you don't think about what happens when bad data is passed in.  For UI tests, be sure to test the limits of every text field, and verify that the user gets appropriate error messages.  For API tests, be sure to pass in invalid data in the test body, and try using bad query parameters.  Verify that you get 400-level responses for invalid requests rather than a generic 500 response.
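
Here's a hedged sketch of what those negative API tests might look like with pytest and requests, using a made-up users endpoint; the bad payloads are just examples of the kinds of invalid data to try.

    # test_negative_api.py -- invalid input should get a 4xx, never a generic 500.
    import pytest
    import requests

    BASE_URL = "https://example.test"   # hypothetical API

    BAD_BODIES = [
        {},                                            # missing required fields
        {"email": "not-an-email"},                     # malformed value
        {"email": "a@b.com", "age": -1},               # out-of-range value
        {"email": "a@b.com", "name": "x" * 10_000},    # absurdly long field
    ]

    @pytest.mark.parametrize("body", BAD_BODIES)
    def test_invalid_user_payload_is_rejected_cleanly(body):
        response = requests.post(f"{BASE_URL}/api/users", json=body)
        assert 400 <= response.status_code < 500, (
            f"expected a 4xx for bad input, got {response.status_code}"
        )

    def test_bad_query_parameter_is_rejected_cleanly():
        response = requests.get(f"{BASE_URL}/api/users", params={"page": "banana"})
        assert 400 <= response.status_code < 500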

Have I run security tests on this feature?
It's a sad fact of life that not all of our end users will be legitimate users of our application.  There will be bad actors looking for security flaws to exploit.  This is especially true for financial applications and ones with a lot of personally identifiable information (PII).  Protect your customers by running security scans on your features.

Have I checked the back-end database to make sure that data is being saved as I expected?
When you fill out and submit a form in your application, a success message is not necessarily an indication that the data's been saved.  There could be a bug in your software that causes an error when writing to the database.  Even if the data has been saved, it could have been saved inaccurately, or there may be an error when retrieving the data.  For example, a phone number might be saved with parentheses and dashes, but when the data is retrieved the front-end doesn't know how to parse those symbols, so the phone number isn't displayed.  Always check your back-end data for accuracy.
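
As a sketch, here's what that kind of check could look like, assuming (purely for illustration) a SQLite database with a contacts table and an app that is supposed to store phone numbers as digits only.

    # test_phone_is_saved.py -- submit a form, then verify the stored value.
    import sqlite3
    import requests

    BASE_URL = "https://example.test"          # hypothetical app
    DB_PATH = "/path/to/app.db"                # hypothetical SQLite database file

    def test_phone_number_is_stored_in_expected_format():
        # Submit the contact form through the API the form posts to.
        response = requests.post(f"{BASE_URL}/api/contacts",
                                 json={"name": "Pat Tester",
                                       "phone": "(555) 123-4567"})
        assert response.status_code == 201

        # A success response isn't proof the data was saved correctly --
        # go look at the row itself.
        with sqlite3.connect(DB_PATH) as conn:
            row = conn.execute(
                "SELECT phone FROM contacts WHERE name = ?", ("Pat Tester",)
            ).fetchone()
        assert row is not None, "contact was never written to the database"
        # Here we assume the app is supposed to store digits only.
        assert row[0] == "5551234567"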

How is the end user going to use this feature?  Have I run through that scenario?
It's so easy to get wrapped up in our day-to-day tasks of testing, writing automation, and working with our team that we forget about the end user of our application.  You should ALWAYS understand how your user will be using your feature.  Think about what journey they will take.  For example, in an e-commerce app, if you're testing that you can pay with PayPal, make sure you also run through a complete journey where you add a product to your cart, go to the checkout page, and then pay with PayPal.
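
Here's a rough Selenium sketch of that journey, with made-up URLs and element IDs.  In a real test environment the last step would typically hand off to the PayPal sandbox, so this sketch only confirms the hand-off happens.

    # test_checkout_journey.py -- walk the full path a shopper would take.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_can_reach_paypal_from_a_full_shopping_journey():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.test/products/blue-widget")   # hypothetical product
            driver.find_element(By.ID, "add-to-cart").click()         # add it to the cart
            driver.get("https://example.test/checkout")               # go to checkout
            driver.find_element(By.ID, "pay-with-paypal").click()     # choose PayPal
            # Confirm we were handed off to PayPal (sandbox or otherwise).
            assert "paypal" in driver.current_url.lower()
        finally:
            driver.quit()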

Missing a bug that then makes it to Production can be humbling!  But it happens to everyone.  The good news is that every time this happens, we learn a new question to ask ourselves before we stop testing, making it more likely that we'll catch that bug next time.

What questions do you ask yourself before you call a feature Done?  Let me know in the comments section!  

4 comments:

  1. How can I accomplish all of these testing activities, like cross-browser testing, database testing, and security testing, when there is very limited time?

    Replies
    1. Thank you so much for your question, Bharath! You have given me an idea for my next post! To begin with, you don't have to test every test case in every scenario. To answer the question "Have I tested this on more than one browser?", you could simply run a Happy Path scenario on a different browser than you used before. If everything looks and works fine there, you move on to the next question. To check the back-end data when you are testing a form, you can do one POST request with all possible form fields, and if the data is written to the database, you can move on. I'll have more ideas for you in this weekend's blog post!

  2. Thank you for your response. I appreciated your post "What to Test When There's Not Enough Time to Test"; it was a good read.

