A few years ago, I wrote a blog post detailing why I thought toggles were a bad idea. It made a clever analogy between toggles and the tribbles on Star Trek's U.S.S. Enterprise. I think it's a fun read, so you may want to check it out; but since I wrote it, my opinion has changed a bit. In this post I'll explain why I now think toggles can be helpful, and I'll propose some rules for their use.
About a year ago, my team was working on a new notification service that would send out emails and messages more efficiently than the current service. When the new service was ready, we migrated one notification type to the new service to see how it would work. We tested the notification extensively and we were sure that we had accounted for all scenarios, so we took the new service to Production.
A couple of weeks later, we discovered that there was an odd case that we hadn't tested. If two users in the same company had the same id, the wrong user was getting the notification. We had no idea that it was possible for two users in the same company to have the same id, so we hadn't thought to test this.
Fortunately, our new service was behind a toggle. Since we certainly didn't want the wrong people to get notifications, we quickly toggled off the new service. There was no impact to any other customers, because they were still getting their notifications; they were just being notified through the old service. We were able to quickly fix the bug, get the fix into Production, and toggle the service back on.
If we hadn't had the toggle, the users with the same id would have continued to get the wrong notifications until we were able to fix the bug. We would have had to rush to get a code patch into Production, and it's possible that we would have made mistakes along the way. Because we had the toggle, we could take the time to make sure that the fix was good, and we could do all the regression testing we wanted.
So, I've changed my mind about toggles. I think they can be useful in situations where there's a significant risk that accompanies a change. But if you are going to use toggles, please observe the following rules:
1. Toggles are NOT a substitute for high-quality testing. Being able to toggle something off at the first sign of trouble does not mean that you can skip testing your new feature thoroughly. Ideally you should have tested so well that you never need to turn your toggle off.
2. Make sure to test your feature with the toggle on AND with the toggle off. You don't want to discover in the middle of dealing with a problem in Production that the toggle doesn't actually work!
3. When the feature has gone to Production and a certain amount of time has passed, remove the toggle so that the feature is on permanently. Otherwise you could get into a situation where months from now someone inadvertently toggles the feature off. And the fewer toggles you have in your application, the fewer combinations of toggles you need to test.
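One more thought on rule 3: keeping the toggle check in a single place makes the toggle easy to remove later. Here's a minimal sketch, assuming a simple config-driven flag; the service names are hypothetical, not our team's actual code:

```javascript
// A config-driven toggle with a single guard point. All names here are
// hypothetical; this is a sketch, not a real notification service.
const config = { useNewNotificationService: true }; // set to false to toggle off

const newService = { send: (user, msg) => console.log(`[new] to ${user}: ${msg}`) };
const oldService = { send: (user, msg) => console.log(`[old] to ${user}: ${msg}`) };

function sendNotification(user, msg) {
  // One branch in one place: removing the toggle later is a one-line cleanup.
  const service = config.useNewNotificationService ? newService : oldService;
  service.send(user, msg);
}

sendNotification("amy", "Your weekly report is ready");
```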
As with many things in software development, the best strategies are those that ensure the best possible outcome for our end users. When they are used wisely, toggles can help mitigate any unexpected issues found in Production.
Thursday, September 26, 2019
Saturday, September 21, 2019
What I Learned at POST/CON Part II: Assertions and Scripts Everywhere!
Last week, I wrote about how I had just returned from the annual Postman users' conference, and how I was so excited about everything I had learned there! I'm still talking to anyone who will listen about all the great things Postman can do. In this week's post, I'm going to show you how you can create variables, assertions, and headers for collections and folders.
Those of you who are familiar with Postman or who have read my previous blog posts on the subject know that a Postman collection is simply a group of requests. Requests in a collection can also be grouped into folders. Here's an example of a collection with more than one folder:
The name of the collection is "Contact List", and it has three folders in it: "Happy Path", "Required and Null Fields", and "Sad Path". Each of the folders has requests in it, but currently only the "Happy Path" folder is open so you can view the requests.
If I hover over the Contact List collection name, I'll see a three-dot menu. I can click on this menu icon and choose Edit. When the Contact List editor window appears, it looks like this:
Notice that there are tabs for Authentication, Pre-request Scripts, Tests, and Variables. If I want to add a collection-level variable, I can simply click on the Variables tab and enter my variable name and value. I can do something similar to add an authorization token, a pre-request script, or a test.
We can do the same thing at the folder level. There is also a three-dot menu to the right of the "Happy Path" folder, and if I hover over either of the two other folders I'll see the three-dot menu there as well. If I click on the three-dot menu next to the "Happy Path" folder, and choose "Edit", I'll be presented with this window:
Looks familiar, doesn't it? The only difference between this folder window and the collection window is that there is no place to add variables. Here I can add authentication, pre-request scripts, and tests, just as I could at the collection level or request level.
Why is this so helpful?
Putting your authentication, pre-request scripts, and tests at the collection or folder level is helpful because it keeps you from having to type the same things again and again!
Here are four examples of how you can use this feature:
1. Assert on response time at the collection level
You may have a service-level agreement (SLA) on your API that states that the consumers of your API should get a response within a certain number of milliseconds. Even if you don't have an SLA, you probably want to be alerted if requests that used to take two milliseconds are now taking ten seconds to run. But copying and pasting this assertion into every request is time-consuming! Instead, you can put the assertion at the collection level, like this:
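(The screenshot from the original post didn't survive, but the test itself is short. Here's a sketch of what goes in the collection's Tests tab; the 200-millisecond threshold is just an example value, so substitute your own SLA.)

```javascript
// Collection-level test: runs after every request in the collection.
// The 200 ms threshold is an assumption; use your own SLA value.
pm.test("Response time is within SLA", function () {
    pm.expect(pm.response.responseTime).to.be.below(200);
});
```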
Now this response-time assertion will run with every single request in your collection.
2. Move your variables out of your environments and into your collections
You probably test your APIs in more than one environment, such as Dev, QA, Staging, and Production. Each environment probably has some variables whose values differ from one environment to the next, such as a URL. But there are probably many variables that stay the same in every environment, and these can be put at the collection level to avoid repetition. Let's look at an example. Let's say I have a set of variables for my QA environment:
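(The screenshot is missing here; the variable names below are the ones used in this post, and the values are purely illustrative.)

```
url:       https://qa-api.example.com
firstName: Amy
lastName:  Jones
email:     amy.jones@example.com
phone:     555-0123
```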
And I have another set of variables for my Prod environment:
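(Again, illustrative values; everything matches the QA environment except the url.)

```
url:       https://api.example.com
firstName: Amy
lastName:  Jones
email:     amy.jones@example.com
phone:     555-0123
```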
When you examine the two environments, you can see that the only variable that is different between the two is the URL. So why not take the firstName, lastName, email, and phone variables and put them in the Collection variables instead?
Now you can remove all the repetitive variables from your environments, making them much easier to maintain.
IMPORTANT NOTE! When you move your variables from an environment to your collection, you will need to reference them differently in your assertions. Instead of:
pm.expect(jsonData.firstName).to.eql(environment.firstName);
You will need to use:
pm.expect(jsonData.firstName).to.eql(pm.variables.get("firstName"));
3. Set authentication at the collection level
Much of what I test with APIs requires an authentication token. It's a pain to have to add authentication to the header on every request. If the token you are using will be the same throughout your collection, you can set the authentication at the collection level instead.
Here's an example, using Mark Winteringham's awesome Restful-Booker API. Some of the requests in this API require a token, using this format:
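(The screenshot is missing; per the Restful-Booker documentation, the token is passed in a Cookie header, like this:)

```
Cookie: token={{cookie}}
```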
Where {{cookie}} is the token that I've saved as a variable. I can set the authentication at the collection level like this:
And that header will be sent with every request I make. Note that there are many different types of authentication, so you'll need to modify your collection settings to use the right type for your API.
4. Use a pre-request script to create a variable at the folder level
Suppose you have a folder with requests that will all require a randomly-generated GUID, and you want the GUID to be different for each request. Rather than put instructions for generating a GUID in the pre-request script section of every single request, you can put the instructions at the folder level, like this:
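(The screenshot is gone, but a folder-level pre-request script along these lines does the job. This is a sketch using Postman's built-in uuid sandbox module; the variable name "id" is the one this post assumes.)

```javascript
// Folder-level pre-request script: runs before every request in this folder.
// Generates a fresh GUID and stores it in the "id" variable.
var uuid = require('uuid'); // uuid is one of Postman's built-in sandbox modules
pm.variables.set("id", uuid.v4());
```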
This script will run before every request in the folder and will assign a randomly-generated GUID to the variable "id", ensuring that the id will be different for each request.
These examples are just some of the things you can do at the collection and folder levels. I hope you will use these as a starting point to making your Postman tests more efficient and maintainable!
Saturday, September 14, 2019
What I Learned at POST/CON Part I: Examples and Mocking
I've just returned from POST/CON, the annual Postman users' conference, and I am so excited about everything I learned there! So excited, in fact, that I'm going to devote not one, but TWO blog posts to sharing my findings.
If you aren't already using Postman for your API testing, why on earth not? It's the best API testing tool out there! My opinion was reinforced this week, when I learned just how easy it is to create API examples and mock responses. I'll be teaching you how to do both things in today's post.
The instructions in this post are for the free version of Postman, and I'll be using version 7. Your results will look slightly different in version 6, and the Pro version of Postman has more functionality for documentation and mocks than is described here.
First, examples: why would you want to create an example of a request? Because it's a great way to show other people on your team how the request is supposed to work. It can also be used to create documentation for your API.
The request I'll be using for today's post is one of the GET requests from Mark Winteringham's wonderful API, Restful-Booker. Here is the request and the response:
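(The screenshot didn't survive, so here's a sketch of the request and a representative response. The field names follow Restful-Booker's documentation; the booking values are illustrative, not real data.)

```
GET https://restful-booker.herokuapp.com/booking/1

{
    "firstname": "Sally",
    "lastname": "Brown",
    "totalprice": 111,
    "depositpaid": true,
    "bookingdates": {
        "checkin": "2019-09-01",
        "checkout": "2019-09-05"
    },
    "additionalneeds": "Breakfast"
}
```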
You can see that this is a GET request that is asking for the booking with the ID of 1, and that the response returns the booking with the first and last name, checkin and checkout dates, and other details.
If we wanted to teach someone else on our team how this request works, we could simply share this request with them. But what if the API is still being created and doesn't work yet? Or what if our teammate doesn't have the correct authentication access for the request and can't run it? We can show them exactly what's supposed to happen with the request by creating an example. In the upper right corner of the Postman window, just underneath the environment dropdown, is a link that says "Examples":
Click on this link, then click "Add Example". A new request tab will open up with the same name as the original GET request, prefaced by "e.g":
The request will already have the HTTP verb, the URL, and any headers or request parameters set. All you need to do to finish the example is to paste in an example of what the response should be (for this request, the booking JSON shown earlier) and set the appropriate response code (200 OK in this case).
Save the example, and return to the original GET request. You'll see that now there is an example request listed in the top right:
Now anyone who sees the request can click on the Examples link to see exactly how the request is supposed to behave.
Examples are also great for showing how an API request should behave in negative scenarios, such as when an id is not found, or when a user is not authenticated.
Another great feature of examples is that you can use them in your documentation. You don't need to have examples to create documentation, but it makes your documentation much easier to understand. To create documentation in Postman, you need a collection with requests. Click on the three-dot menu beside the collection, and choose "Publish Docs":
A "Publish Collection" web page will open. Select an environment if you like, and click the Publish Collection button. Once published, you'll see a URL that you can go to view your documentation. Go to that URL, and you'll see your request with your example response!
You can also use your example requests to create a mock server. This can be used whenever you don't have access to the actual API server, such as when a feature is still being developed or when you are doing contract testing with another API.
To set up a mock server, simply click on the three-dot menu beside the collection name and choose "Mock Collection". You'll be presented with a pop-up window like this:
Give your mock collection a name, and click the "Create mock server" button. You'll be assigned a special mock server URL that looks like this: https://<some guid>.mock.pstmn.io.
This mock server is designed to return the response you created in your GET example whenever you make a request with an endpoint that matches the GET example. Copy the URL for the mock server and paste it into a GET request, and then add the appropriate endpoint for your example: /booking/1. Click Send, and you should get back the example response you saved earlier.
Now you can save this request, naming it something like MOCK Get Booking, and you can save it to a collection called something like MOCK Restful Booker.
You can create examples and mock requests like this for every request in an API, and when you have finished, you will have complete documentation of your API as well as a mock server that will allow you to call the API and get an appropriate response without actually connecting to the API's server!
I hope you find this helpful in your work with APIs. Next week, I'll have more great knowledge from POST/CON to share!
Saturday, September 7, 2019
Your Test Cases Are Slowing You Down
One of the first QA jobs I had was at a company that made software for creating mobile applications. The product was very complex, with so many features that it was often hard to keep track of them all. Shortly before I started working there, the company had adopted a test tracking system to keep track of all of the possible manual tests the team might want to run. This amounted to thousands of test cases.
Many of the test cases weren't written well, leaving those of us who were new to the team confused about how to execute them. The solution to this problem was to assign everyone the task of revising the tests as they were run. This helped a bit, but slowed us down tremendously. Adding to the slowdown was the fact that every time we had a software release, our manager had to comb through all the tests and decide which ones should be run. Then there was the added confusion of deciding which mobile devices should be used for each test.
We were trying to transition to an Agile development style, but the number of test cases and the amount of overhead needed to select, run, and update the tests meant that we just couldn't adapt to the pace of Agile testing.
You might be thinking at this point, "Why didn't they automate their testing?" Keep in mind that this was back when mobile test automation was in its infancy. One of our team members had developed a prototype for an automated test framework, but we didn't have the resources to implement it because we were so busy trying to keep up with our gigantic manual test case library.
Even when you have a robust set of automated tests in place, you'll still want to do some manual testing. Having a pair of eyes and hands on an application is a great way to discover odd behavior that you'll want to investigate further. But trying to maintain a vast library of manual tests is so time consuming that you may find that you don't have time to do anything else!
In my opinion, the easiest and most efficient way to keep a record of which manual tests should be executed is to use simple spreadsheets. If I could go back in time to that mobile app company, I would toss out the test case management system and set up some spreadsheets: one smoke test spreadsheet, and one regression test spreadsheet for each major feature of the application. Each time a new feature was added, I'd create a test plan on a spreadsheet, and once the feature was released, I'd either add a few test cases to a regression test spreadsheet (if the feature was minor), or I'd adapt my test plan into a new regression test spreadsheet for that feature.
This is probably a bit hard to imagine, so I'll illustrate with an example. Let's say we have a mobile application called OrganizeIt! Its major features are a To-Do List and a Calendar. Currently the smoke test for the app looks like this:
| Test | iOS phone | iOS tablet | Android phone | Android tablet |
| --- | --- | --- | --- | --- |
| Log in with incorrect credentials |  |  |  |  |
| Log in with correct credentials |  |  |  |  |
| Add an event |  |  |  |  |
| Edit an event |  |  |  |  |
| Delete an event |  |  |  |  |
| Add a To-Do item |  |  |  |  |
| Edit a To-Do item |  |  |  |  |
| Complete a To-Do item |  |  |  |  |
| Mark a complete item as incomplete |  |  |  |  |
| Delete a To-Do item |  |  |  |  |
| Log out |  |  |  |  |
And then we also have a regression test for the major features: Login, Calendar, and To-Do List. Here's an example of what the regression test for the To-Do List might look like:
| Test | Expected result |
| --- | --- |
| Add an item to the list with too many characters | Error message |
| Add an item to the list with invalid characters | Error message |
| Add a blank item to the list | Error message |
| Add an item to the list with a correct number of valid characters | Item is added |
| Close and reopen the application | Item still exists |
| Edit the item with too many characters | Error message, and original item still exists |
| Edit the item with invalid characters | Error message, and original item still exists |
| Edit the item so it is blank | Error message, and original item still exists |
| Mark an item as completed | Item appears checked off |
| Close and reopen the application | Item still appears checked off |
| Mark a completed item as completed again | No change |
| Mark a completed item as incomplete | Item appears unchecked |
| Mark an incomplete item as incomplete again | No change |
| Close and reopen the application | Item still appears unchecked |
| Delete the item | Item disappears |
| Close and reopen the application | Item is still gone |
This test would also be run on a variety of devices, but I've left that off the chart to make it more readable in this post.
Now let's imagine that our developers have created a new feature for the To-Do List, which is that items on the list can now be marked as Important, and Important items will move to the top of the list. In the interest of simplicity, let's not worry about the order of the items other than the fact that the Important items will be on the top of the list. We'll want to create a test plan for that feature, and it might look like this:
| Test | Expected result |
| --- | --- |
| Item at the top of the list is marked Important | Item is now in bold, and remains at the top of the list |
| Close and reopen the application | The item is still in bold and at the top of the list |
| Item at the middle of the list is marked Important | Item is now in bold, and moves to the top of the list |
| Item at the bottom of the list is marked Important | Item is now in bold, and moves to the top of the list |
| Close and reopen the application | All important items are still in bold and at the top of the list |
| Every item in the list is marked Important | All items are in bold |
| Close and reopen the application | All items are still in bold |
| Item at the top of the list is marked as normal | The item returns to plain text, and moves below the Important items |
| Close and reopen the application | The item is still in plain text, and below the Important items |
| Item in the middle of the Important list is marked as normal | The item returns to plain text and moves below the Important items |
| Item at the bottom of the Important list is marked as normal | The item returns to plain text and is below the Important items |
| Close and reopen the application | All important items are still in bold, and normal items are still in plain text |
| Delete an important item | Item is deleted |
| Close and reopen the application | Item is still gone |
| Add an item and mark it as important | The item is added as important, and is added to the top of the list |
| Add an item and mark it as normal | The item is added as normal, and is added to the bottom of the list |
| Close and reopen the application | The added items appear correctly in the list |
| Mark an important item as completed | The item is checked, and remains in bold and at the top of the list |
| Close and reopen the application | The item remains checked, in bold, and at the top of the list |
| Mark an important completed item as incomplete | The item is unchecked, and remains in bold and at the top of the list |
We would again test this on a variety of devices, but I've left that off the chart to save space.
Once the feature is released, we won't need to test it as extensively, unless there's some change to the feature. So we can add a few test cases to our To-Do List regression test, like this:
| Test | Expected result |
| --- | --- |
| Add an item to the list with too many characters | Error message |
| Add an item to the list with invalid characters | Error message |
| Add a blank item to the list | Error message |
| Add an item to the list with a correct number of valid characters | Item is added |
| Close and reopen the application | Item still exists |
| **Add an important item to the list** | **Item is in bold, and is added to the top of the list** |
| Edit the item with too many characters | Error message, and original item still exists |
| Edit the item with invalid characters | Error message, and original item still exists |
| Edit the item so it is blank | Error message, and original item still exists |
| **Mark an important item as normal** | **Item returns to plain text and is moved to the bottom of the list** |
| Mark an item as completed | Item appears checked off |
| **Mark an important item as completed** | **Item remains in bold text and appears checked off** |
| Close and reopen the application | Item still appears checked off |
| Mark a completed item as completed again | No change |
| Mark a completed item as incomplete | Item appears unchecked |
| Mark an incomplete item as incomplete again | No change |
| Close and reopen the application | Item still appears unchecked |
| Delete the item | Item disappears |
| Close and reopen the application | Item is still gone |
| **Delete an important item** | **Item disappears** |
The new test cases are shown in bold here, but they wouldn't be highlighted in the actual test plan.
Finally, we'd want to add one test to the smoke test to check for this new functionality:
| Test | iOS phone | iOS tablet | Android phone | Android tablet |
| --- | --- | --- | --- | --- |
| Log in with incorrect credentials |  |  |  |  |
| Log in with correct credentials |  |  |  |  |
| Add an event |  |  |  |  |
| Edit an event |  |  |  |  |
| Delete an event |  |  |  |  |
| Add a To-Do item |  |  |  |  |
| Add an important To-Do item |  |  |  |  |
| Edit a To-Do item |  |  |  |  |
| Complete a To-Do item |  |  |  |  |
| Mark a complete item as incomplete |  |  |  |  |
| Delete a To-Do item |  |  |  |  |
| Log out |  |  |  |  |
With spreadsheets like these, you can see how easy it is to keep track of a huge number of tests in a small amount of space. Adding or removing tests is also easy, because it's just a matter of adding or removing a line in the table.
Spreadsheets like this can be shared among a team, using a product like Google Sheets or Confluence. Each time a smoke or regression test needs to be run, the test can be copied and named with a new date or release number (for example, "1.5 Release" or "September 2019"), and the individual tests can be divided among the test team. For example, each team member could do a complete test pass with a different mobile device. Passing tests can be marked with a check mark or filled in green, and failing tests can be marked with an X or filled in red.
And there you have it! An easy-to-read, easy-to-maintain manual test case management system. Instead of spending hours maintaining test cases, you can use your time to automate most of your tests, freeing up even more time for manual exploratory testing.