Many of the test cases were poorly written, leaving those of us who were new to the team confused about how to execute them. The solution was to have everyone revise the tests as they ran them. This helped a bit, but slowed us down tremendously. Compounding the slowdown, every time we had a software release our manager had to comb through all the tests and decide which ones should be run. Then there was the added confusion of deciding which mobile devices should be used for each test.
We were trying to transition to an Agile development style, but the number of test cases and the amount of overhead needed to select, run, and update the tests meant that we just couldn't adapt to the pace of Agile testing.
You might be thinking at this point, "Why didn't they automate their testing?" Keep in mind that this was back when mobile test automation was in its infancy. One member of our team had developed a prototype for an automated test framework, but we didn't have the resources to implement it because we were so busy trying to keep up with our gigantic manual test case library.
Even when you have a robust set of automated tests in place, you'll still want to do some manual testing. Having a pair of eyes and hands on an application is a great way to discover odd behavior that you'll want to investigate further. But trying to maintain a vast library of manual tests is so time-consuming that you may find you don't have time to do anything else!
In my opinion, the easiest and most efficient way to keep a record of which manual tests should be executed is to use simple spreadsheets. If I were to go back in time to that mobile app company, I would toss out the test case management system and set up some spreadsheets. I would have one smoke test spreadsheet, and one regression test spreadsheet for each major feature of the application. Each time a new feature was added, I'd create a test plan on a spreadsheet. Once the feature was released, I'd either add a few test cases to an existing regression test spreadsheet (if the feature was minor), or adapt my test plan into a new regression test spreadsheet for that feature.
This is probably a bit hard to imagine, so I'll illustrate with an example. Let's say we have a mobile application called OrganizeIt! Its major features are a To-Do List and a Calendar. Currently the smoke test for the app looks like this:
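As a hypothetical sketch (these specific tests, steps, and results are invented for illustration), a smoke test sheet for OrganizeIt! might contain:

| Test | Steps | Expected Result |
| --- | --- | --- |
| Log in | Launch OrganizeIt! and log in with a valid account | User lands on the To-Do List |
| Add a to-do item | Tap Add, enter an item, and save | Item appears in the list |
| Add a calendar event | Open the Calendar and add an event for today | Event appears on today's date |
| Log out | Tap Log Out | User is returned to the login screen |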
And then we also have a regression test for the major features: Login, Calendar, and To-Do List. Here's an example of what the regression test for the To-Do List might look like:
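Again as a purely hypothetical illustration, a To-Do List regression sheet might include rows such as:

| Test | Steps | Expected Result |
| --- | --- | --- |
| Add an item | Tap Add, enter text, and save | Item appears in the list |
| Edit an item | Tap an existing item, change its text, and save | Updated text is shown |
| Delete an item | Swipe an item and confirm deletion | Item is removed from the list |
| Mark an item complete | Tap the item's checkbox | Item is shown as completed |
| Add an item with no text | Tap Add, leave the field blank, and save | An error message is shown |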
This test would also be run on a variety of devices, but I've left that off the chart to make it more readable in this post.
Now let's imagine that our developers have created a new feature for the To-Do List, which is that items on the list can now be marked as Important, and Important items will move to the top of the list. In the interest of simplicity, let's not worry about the order of the items other than the fact that the Important items will be on the top of the list. We'll want to create a test plan for that feature, and it might look like this:
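A hypothetical version of that test plan (the individual cases are invented for illustration) could look like:

| Test | Steps | Expected Result |
| --- | --- | --- |
| Mark an item Important | Tap an item and select Important | Item moves to the top of the list |
| Mark two items Important | Mark two different items Important | Both items appear above all non-Important items |
| Un-mark an Important item | Remove the Important flag from an item | Item returns to the non-Important section of the list |
| Delete an Important item | Delete an item marked Important | Item is removed from the list |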
We would again test this on a variety of devices, but I've left that off the chart to save space.
Once the feature is released, we won't need to test it as extensively, unless there's some change to the feature. So we can add a few test cases to our To-Do List regression test, like this:
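As an invented example, the rows added to the To-Do List regression sheet might be:

| Test | Steps | Expected Result |
| --- | --- | --- |
| Mark an item Important | Tap an item and select Important | Item moves to the top of the list |
| Un-mark an Important item | Remove the Important flag from an item | Item returns to the non-Important section of the list |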
In a real spreadsheet, the newly added test cases could be highlighted in red so reviewers can spot them; the highlighting wouldn't remain in the actual test plan.
Finally, we'd want to add one test to the smoke test to check for this new functionality:
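For instance, the added smoke test row might be (again, hypothetical):

| Test | Steps | Expected Result |
| --- | --- | --- |
| Mark an item Important | Mark an existing to-do item Important | Item moves to the top of the list |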
With spreadsheets like these, you can see how easy it is to keep track of a huge number of tests in a small amount of space. Adding or removing tests is also easy, because it's just a matter of adding or removing a row in the table.
Spreadsheets like this can be shared among a team, using a product like Google Sheets or Confluence. Each time a smoke or regression test needs to be run, the test can be copied and named with a new date or release number (for example, "1.5 Release" or "September 2019"), and the individual tests can be divided among the test team. For example, each team member could do a complete test pass with a different mobile device. Passing tests can be marked with a check mark or filled in green, and failing tests can be marked with an X or filled in red.
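As a rough illustration of the kind of record this produces, a test pass exported from such a sheet could be summarized in a few lines of Python. The test names, devices, and results below are entirely hypothetical:

```python
# Hypothetical results from one smoke test pass of a "1.5 Release" sheet.
# Each row mirrors a spreadsheet row: (test name, device, "pass" or "fail").
results = [
    ("Log in", "Pixel 3", "pass"),
    ("Add to-do item", "Pixel 3", "pass"),
    ("Add calendar event", "Pixel 3", "fail"),
    ("Log in", "iPhone X", "pass"),
    ("Add to-do item", "iPhone X", "pass"),
    ("Add calendar event", "iPhone X", "pass"),
]

# Collect the failing tests so they can be investigated or filed as bugs.
failures = [(test, device) for test, device, result in results if result == "fail"]
print(failures)  # [('Add calendar event', 'Pixel 3')]
```

A summary like this makes it easy to see at a glance which test-and-device combinations need a second look before sign-off.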
And there you have it! An easy-to-read, easy-to-maintain manual test case management system. Instead of spending hours maintaining test cases, you can use your time to automate most of your tests, freeing up even more time for manual exploratory testing.