Saturday, January 5, 2019

The Automation Test Wheel in Practice

Last week's blog post, "Rethinking the Pyramid: The Automation Test Wheel", sparked many interesting discussions on LinkedIn, Twitter, and in the comments section of this blog!  The general consensus was that the Test Pyramid is still useful because it reminds us that tests closest to the code are the fastest and most reliable to run, and that the Automation Test Wheel reminds us to make sure to include categories such as security, accessibility, and performance testing.  Also, a reader pointed us to Abstracta's Software Testing Wheel, which looks at the definition of quality from a number of different perspectives.

This week I'm talking about how to put the Automation Test Wheel into practice.  Let's imagine that I have a simple web app called Contact List.  It allows a user to log in, view a list of their contacts, and add new contacts.  I want to design a complete automation strategy for this application that will enable my team to deploy all the way up to production confidently.  In order to feel confident about the quality of my application, I'll want to be sure to include tests from every segment of the Automation Test Wheel.


Unit Tests: I will make sure that every function of my code has at least one unit test.  I'll run these tests using mock objects.  For example, I will create a list of mock contacts and a mock new contact, add the new contact, and verify that the new contact has been added to the list of mock contacts.  I'll update a contact with new data and verify that the contact has been updated in the list.  I'll create a mock contact with invalid data and verify that attempting to add the contact results in an appropriate error.  These are just some examples; for each function in my app, I'll want to have several tests which exercise all possible code paths.
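As a rough sketch of what those unit tests might look like, here is some illustrative Python. The `add_contact` function and its validation rule are assumptions made up for this example, not the Contact List app's real code:

```python
# Hypothetical contact-list logic plus unit tests that use mock data,
# mirroring the add-a-contact and invalid-contact checks described above.

def add_contact(contacts, new_contact):
    """Add a contact dict to the list, rejecting invalid data."""
    if not new_contact.get("name"):
        raise ValueError("contact must have a name")
    contacts.append(new_contact)
    return contacts

def test_add_valid_contact():
    mock_contacts = [{"name": "Amy"}, {"name": "Joe"}]
    add_contact(mock_contacts, {"name": "Prunella"})
    assert {"name": "Prunella"} in mock_contacts

def test_add_invalid_contact():
    raised = False
    try:
        add_contact([], {"name": ""})
    except ValueError:
        raised = True
    assert raised, "expected a ValueError for the invalid contact"

test_add_valid_contact()
test_add_invalid_contact()
```

In a real project these would live in a test file and be run by a test runner such as pytest, with one test exercising each code path.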

Component Tests:  My application is very simple and relies on just one database.  The database is used for both authentication and for retrieving the contact data.  I will include one test for each function; I'll send an authentication request for a valid user and verify that the user is authenticated, and I'll make one request to the database to retrieve a known contact, and verify that the contact is retrieved.
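Those two component checks could be sketched like this. `FakeDatabase` and its contents are stand-ins for illustration; a real component test would run against a test instance of the actual database:

```python
# Component-level checks at the database boundary: one authentication
# check and one contact-retrieval check, as described above.

class FakeDatabase:
    """Stand-in for the app's single database (illustrative only)."""
    USERS = {"kristin": "correct-horse"}
    CONTACTS = {1: {"name": "Amy"}}

    def authenticate(self, user, password):
        return self.USERS.get(user) == password

    def get_contact(self, contact_id):
        return self.CONTACTS.get(contact_id)

db = FakeDatabase()
assert db.authenticate("kristin", "correct-horse")   # valid user is authenticated
assert db.get_contact(1) == {"name": "Amy"}          # known contact is retrieved
```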

Services Tests: My application has an API which allows me to do CRUD operations (Create, Read, Update, Delete) on my contacts.  I have a GET endpoint which allows me to retrieve the list of contacts, and a GET endpoint which allows me to retrieve one specific contact.  I have a POST endpoint which allows me to add a contact to the contact list.  I have a PUT endpoint which allows me to update the data for an existing contact, and I have a DELETE endpoint which allows me to delete an existing contact.  For each one of these endpoints, I will have a series of tests.  The tests will include both happy paths and error paths.  For each request, I'll verify that the response code is correct and the response body is correct.  For example, with the GET endpoint where I retrieve one contact, I'll verify that a GET on an existing contact returns a 200 response and the correct data for the contact.  I'll also verify that a GET on a contact that doesn't exist returns a 404 Not Found response.
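To make the happy-path and error-path idea concrete, here is a sketch using a tiny in-memory stand-in for the API. `ContactApi` and its routes are hypothetical; real services tests would send actual HTTP requests (for example with the requests library) and assert on the responses the same way:

```python
# In-memory stand-in for the Contact List API; each method returns
# (status_code, body) the way the real endpoints would.

class ContactApi:
    def __init__(self):
        self._contacts = {}
        self._next_id = 1

    def post(self, body):                  # POST /contacts
        contact_id = self._next_id
        self._next_id += 1
        self._contacts[contact_id] = body
        return 201, {"id": contact_id, **body}

    def get(self, contact_id):             # GET /contacts/{id}
        if contact_id not in self._contacts:
            return 404, {"error": "Not Found"}
        return 200, self._contacts[contact_id]

    def delete(self, contact_id):          # DELETE /contacts/{id}
        if contact_id not in self._contacts:
            return 404, {"error": "Not Found"}
        del self._contacts[contact_id]
        return 204, None

api = ContactApi()

# Happy path: create a contact, then read it back.
status, created = api.post({"name": "Prunella"})
assert status == 201
status, body = api.get(created["id"])
assert status == 200 and body["name"] == "Prunella"

# Error path: a GET on a contact that doesn't exist returns 404.
status, _ = api.get(999)
assert status == 404
```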

User Interface (UI) Tests: This is where I will be testing in the browser, doing activities that a real user would do. A real user will want to fetch their list of contacts, add a new contact, update an existing contact, and delete a contact.  I will have one test for each of these activities, and each test will have a series of assertions.  To take one example, when I add a new contact, I will navigate to the new contact page, fill in all the form fields, and click the Save button.  Then I will navigate to the list page and verify that my new contact appears on the page.
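A real version of this test would drive a browser with a tool like Selenium; to keep the sketch self-contained, the add-contact flow below uses a page object with a fake driver, so every name here is an assumption for illustration:

```python
# Page-object sketch of the "add a new contact" UI test described above.

class FakeDriver:
    """Stands in for a Selenium WebDriver; records actions and state."""
    def __init__(self):
        self.url = None
        self.fields = {}
        self.saved_contacts = []

    def get(self, url):
        self.url = url

class AddContactPage:
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("/contacts/new")

    def fill_form(self, name, email):
        self.driver.fields = {"name": name, "email": email}

    def click_save(self):
        self.driver.saved_contacts.append(dict(self.driver.fields))

driver = FakeDriver()
page = AddContactPage(driver)
page.open()
page.fill_form("Prunella", "prunella@example.com")
page.click_save()

# The assertion a real test would make after navigating to the list page:
assert {"name": "Prunella", "email": "prunella@example.com"} in driver.saved_contacts
```

The page-object pattern keeps the test readable as a series of user actions, with the element-locating details hidden inside the page class.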

Visual Tests: This is where I will verify that elements are actually appearing on the page the way I want them to.  I will navigate to the list page and verify that all of the columns are appearing on the page.  I will navigate to the add contact page and verify that all of the form fields and their labels are appearing appropriately on the page.  I will trigger all possible error messages (such as the one I would receive if I entered an invalid zip code), and verify that the error appears correctly on the screen.  And I will verify that all of the buttons needed to use the application are rendering correctly.

Security Tests: I will run security tests at both the Services layer and the UI layer.  I will test the API operations relating to authenticating a user, verifying that only a user with the correct credentials will be authenticated.  I will test every request endpoint to make sure that only those requests with a valid token are executed; requests without a valid token should return a 401.  For the UI layer, I will conduct a series of login tests that validate that only a user with correct credentials is logged in, and I will verify that I cannot navigate to the list page or the add contact page without being logged in.
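The token check at the Services layer might be sketched like this. `require_token` and the token value are illustrative assumptions; a real test would send requests with and without a valid Authorization header and assert on the status codes:

```python
# Sketch of the valid-token / 401 check described above.

VALID_TOKENS = {"s3cret-token"}

def require_token(headers):
    """Return a 401 response unless a valid bearer token is supplied."""
    token = headers.get("Authorization", "")
    if token.startswith("Bearer "):
        token = token[len("Bearer "):]
    if token not in VALID_TOKENS:
        return 401, {"error": "Unauthorized"}
    return 200, {"ok": True}

assert require_token({})[0] == 401                                   # no token
assert require_token({"Authorization": "Bearer wrong"})[0] == 401    # bad token
assert require_token({"Authorization": "Bearer s3cret-token"})[0] == 200
```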

Performance Tests: I will set benchmarks for both the server response time and the web page load time.  To measure the server response, I will add assertions to my existing Services tests that will verify that the response was returned within that benchmark.  To measure the web page load time, I will run a UI test that will load each page and assert that the page was loaded within the benchmark time.
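The benchmark assertion could be as simple as timing the call and comparing against the budget. `fetch_contacts` and the half-second benchmark are stand-ins for illustration:

```python
# Timing a service call and asserting it beats the benchmark,
# as described above.
import time

RESPONSE_BENCHMARK_SECONDS = 0.5  # assumed budget for this example

def fetch_contacts():
    """Stand-in for the real GET /contacts call."""
    time.sleep(0.01)  # simulate a fast service response
    return [{"name": "Amy"}]

start = time.perf_counter()
fetch_contacts()
elapsed = time.perf_counter() - start
assert elapsed < RESPONSE_BENCHMARK_SECONDS, f"too slow: {elapsed:.3f}s"
```

In practice this assertion would be added to each existing Services test rather than written as a separate test.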

Accessibility Tests:  I want to make sure that my application can be used by those with visual difficulties.  So I will run a set of UI and Visual tests on each page where I validate that I can zoom in and out on the text and that scroll bars appear and disappear depending on whether they are needed.  For example, if I zoom in on the contact list I will now need a vertical scrollbar, because some of the contacts will now be off the page.

With this series of automated tests, I will feel confident that I'll be able to deploy changes to my application and discover any problems quickly.

I've received a few questions over the last week about what percentage of total tests each of the spokes in the Automation Test Wheel should have.  The answer will always be "It depends".  It will depend on these and many other considerations:

  • How many other services does your application depend on?  If it depends on many external services, you'll need more Component tests.
  • How complicated is your UI?  If it has just a page or two, you'll need fewer UI and Visual tests.  If it has several pages with many images, you'll need more UI and Visual tests.
  • How complicated is your data structure?  If you are dealing with large data objects, you'll need more Services tests to validate that CRUD operations are being handled correctly.
  • How secure does your application need to be?  An application that handles personal banking will need many more Security tests than an application that saves pictures of kittens.
  • How performant does your application need to be?  A solitaire game doesn't need to respond as quickly as a heart monitor.

The beauty of the Automation Test Wheel is that it can be tailored to all types of software applications!  By considering each spoke in the wheel, we'll be sure that we are creating great automated test coverage.

6 comments:

  1. Simple example, good test ideas. For me, your explanation is great because it is about how to think during testing. That's the most important thing.

    1. Thanks, Ilya! I'm so glad you liked it! Talking about how to think about testing is my goal for this blog. :-)

    2. I really like the blog too. Reading about process automation solutions is interesting.

  2. Hello Kristin,

    I had experience with more types of automation:
    - Integration / Data driven (a few units get exercised together to check predefined inputs and outputs)
    - "Monkey" automation, generating random inputs ( https://en.wikipedia.org/wiki/Monkey_testing ). Can reveal crashes, hangs or unexpected issues.

    I think the "wheel" model may have troubles with including more types of automation.

    1. Hi Podolyan! Data-driven and monkey tests are definitely methods of software testing. I would consider them methods rather than types, though. Integration/Data-driven testing could fit in the Component or Service test type, and Monkey automation could fit in the UI test type. The Automation Wheel focuses more on the "what" than the "how".
