In my opinion, there are two things wrong with the pyramid: it leaves out many types of automated tests, and it assumes that the number of tests is the best indicator of appropriate test coverage. I propose a new way of thinking about automated testing: the Automation Test Wheel.
Each of these test types can be thought of as a spoke in the wheel; none is more important than another, and all are necessary. The size of each section of the wheel does not indicate the number of tests to be automated; each test type should have however many tests are needed to verify quality in that area. Let's take a look at each test type.
Unit Tests: A unit test is the smallest automated test possible. It tests the behavior of just one function or method. For example, if I had a method that checked whether a number was zero, I could write these unit tests (sketched in code after the list):
- A test that passes a zero to the method and validates that it is identified as a zero
- A test that passes a one to the method and validates that it is identified as non-zero
- A test that passes a string to the method and validates that the appropriate exception is thrown
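Here is a minimal sketch of those three tests in Python with pytest; the `is_zero` method and its `TypeError` behavior are hypothetical stand-ins for whatever your real method does:

```python
import pytest

def is_zero(number):
    # Hypothetical method under test: raises TypeError for non-numeric input.
    if not isinstance(number, (int, float)):
        raise TypeError("is_zero expects a number")
    return number == 0

def test_zero_is_identified_as_zero():
    assert is_zero(0) is True

def test_one_is_identified_as_non_zero():
    assert is_zero(1) is False

def test_string_raises_an_exception():
    with pytest.raises(TypeError):
        is_zero("not a number")
```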
Because unit tests are independent of all other services and because they run so quickly, they are a very effective way of testing code. They are often written by the developer who wrote the method or function, but they can also be written by others. Each method or function should have at least one unit test associated with it.
Component Tests: These tests check the various services that the code is dependent on. For example, if we had code that called the GitHub API, we could write a component test that would make a call to the API and verify that the API was running. Other examples of component tests are pinging a server or making a call to a database and verifying that a response was received. There should be at least one component test for each service the code relies on.
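As a sketch, the GitHub API example could be as simple as the check below; it uses Python's requests library and treats any 200 response from the API's root endpoint as evidence the service is running:

```python
import requests

def test_github_api_is_running():
    # Component test: verify that a service the code depends on is reachable.
    response = requests.get("https://api.github.com", timeout=5)
    assert response.status_code == 200
```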
Services Tests: These are tests that check the web services used in our code. In today's applications, these are often REST API requests. For example, if we have an API with POST, GET, PUT, and DELETE requests, we will want automated tests that check each request type. We will want both "happy path" tests that verify that a valid request returns an appropriate response, and negative tests that verify that an invalid request returns an appropriate error code.
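A minimal sketch of a few such tests in Python with requests might look like this; the `/api/users` endpoint, payloads, and expected status codes are all hypothetical:

```python
import requests

BASE_URL = "https://example.com/api/users"  # hypothetical endpoint

def test_post_creates_a_user():
    # Happy path: a valid POST returns 201 Created.
    response = requests.post(BASE_URL, json={"name": "Prunella"}, timeout=5)
    assert response.status_code == 201

def test_get_returns_a_user():
    # Happy path: a valid GET returns 200 OK.
    response = requests.get(f"{BASE_URL}/1", timeout=5)
    assert response.status_code == 200

def test_post_with_missing_field_is_rejected():
    # Negative test: an invalid request returns an appropriate error code.
    response = requests.post(BASE_URL, json={}, timeout=5)
    assert response.status_code == 400
```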
User Interface (UI) Tests: UI tests verify that end-user activities work correctly. These are the tests that will fill out text fields and click on buttons. As a general rule, anything that can be tested with a unit, component, or service test should be tested by those methods instead. UI tests should focus solely on the user interface.
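Here is a sketch of a UI test using Selenium WebDriver in Python; the login page URL and element IDs are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_user_can_log_in():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # hypothetical page
        # Fill out text fields and click a button, as a user would.
        driver.find_element(By.ID, "username").send_keys("prunella")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```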
Visual Tests: Visual tests verify that elements are actually appearing on the screen. This is slightly different from UI testing, because UI tests focus on the functionality of the user interface rather than its appearance. Examples of visual tests would be verifying that a button's label is rendered correctly and verifying that the correct product image appears on the screen.
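Dedicated visual testing tools typically compare screenshots against approved baselines; as a rough approximation of the idea, element-level checks like the sketch below (with a hypothetical page and element IDs) verify the label and image directly:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_button_label_and_product_image_appear():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/product/42")  # hypothetical page
        # Verify the button's label is rendered correctly.
        button = driver.find_element(By.ID, "add-to-cart")
        assert button.text == "Add to Cart"
        # Verify the correct product image is appearing on the screen.
        image = driver.find_element(By.ID, "product-image")
        assert image.is_displayed()
        assert "product-42" in image.get_attribute("src")
    finally:
        driver.quit()
```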
Security Tests: These are tests that verify that security rules are being respected. These tests can overlap with services tests, but should still be considered separately. For example, a security test could check to make sure that an authorization token cannot be generated with an invalid username and password combination. Another security test would be to make a GET request with an authorization token for a user who should not have access to that resource, and verify that a 403 response is returned.
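Both of those security checks can be sketched as API tests with requests; the endpoints, credentials, and placeholder token below are hypothetical:

```python
import requests

BASE_URL = "https://example.com/api"  # hypothetical API

def test_invalid_credentials_do_not_yield_a_token():
    # An invalid username/password combination must not generate a token.
    response = requests.post(
        f"{BASE_URL}/token",
        json={"username": "prunella", "password": "wrong-password"},
        timeout=5,
    )
    assert response.status_code == 401
    assert "token" not in response.text

def test_unauthorized_user_gets_403():
    # Token for a user who should NOT have access to this resource.
    headers = {"Authorization": "Bearer token-for-unprivileged-user"}
    response = requests.get(f"{BASE_URL}/admin/reports", headers=headers, timeout=5)
    assert response.status_code == 403
```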
Performance Tests: Automated performance tests can verify that requests respond within an appropriate time period. For example, if your company has decided that GET requests should never take longer than two seconds, test requests can be set to return a failure state if the response takes longer than that. Web page load times can also be measured with performance tests.
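The two-second rule can be sketched with requests' built-in elapsed timer; the endpoint is hypothetical:

```python
import requests

def test_get_request_responds_within_two_seconds():
    response = requests.get("https://example.com/api/users", timeout=5)
    assert response.status_code == 200
    # response.elapsed measures the time from sending the request
    # to the arrival of the response headers.
    assert response.elapsed.total_seconds() < 2.0
```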
Accessibility Tests: Automated accessibility tests can check a variety of things. When combined with UI tests, they can verify that images have text descriptions for the visually impaired. Visual tests can be used to verify that the text on the screen is the correct size.
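As a sketch of the image-description check combined with a Selenium UI test, we can assert that every img element on a hypothetical page has a non-empty alt attribute:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_all_images_have_alt_text():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")  # hypothetical page
        images = driver.find_elements(By.TAG_NAME, "img")
        # Collect any images that lack a text description.
        missing = [img.get_attribute("src") for img in images
                   if not img.get_attribute("alt")]
        assert not missing, f"Images missing alt text: {missing}"
    finally:
        driver.quit()
```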
You may have noticed that the above descriptions often overlap each other. For example, security tests might be run through API testing, and visual tests might be run through UI testing. What is important here is that each area is tested thoroughly, efficiently, and accurately. If there is a spoke missing from the wheel, you will never be comfortable relying on your automation when you are doing continuous deployment.
Next week, I'll discuss how we can fit all these tests into a real-world application testing scenario!