In my opinion there are two things wrong with the pyramid: it leaves out many types of automated tests, and it assumes that the number of tests is the best indicator of appropriate test coverage. I propose a new way of thinking about automated testing: the Automation Test Wheel.
Each of these test types can be considered as spokes in a wheel; none is more important than another, and they are all necessary. The size of each section of the wheel does not indicate the quantity of the tests to be automated; each test type should have the number of tests that are needed in order to verify quality in that area. Let's take a look at each test type.
Unit Tests: A unit test is the smallest automated test possible. It tests the behavior of just one function or method. For example, if I had a method that checked whether a number was zero, I could write these unit tests:
- A test that passes a zero to the method and validates that it is identified as a zero
- A test that passes a one to the method and validates that it is identified as non-zero
- A test that passes a string to the method and validates that the appropriate exception is thrown
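To make this concrete, here is a rough sketch of those three tests in Python with pytest, assuming a hypothetical is_zero method that raises a TypeError for non-numeric input:

```python
import pytest


def is_zero(number):
    """Hypothetical method under test: returns True when the number is zero."""
    if not isinstance(number, (int, float)):
        raise TypeError("is_zero expects a number")
    return number == 0


def test_zero_is_identified_as_zero():
    assert is_zero(0) is True


def test_one_is_identified_as_non_zero():
    assert is_zero(1) is False


def test_string_raises_an_exception():
    with pytest.raises(TypeError):
        is_zero("not a number")
```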
Because unit tests are independent of all other services and because they run so quickly, they are a very effective way of testing code. They are often written by the developer who wrote the method or function, but they can also be written by others. Each method or function should have at least one unit test associated with it.
Component Tests: These tests check the various services that the code is dependent on. For example, if we had code that called the GitHub API, we could write a component test that would make a call to the API and verify that the API was running. Other examples of component tests are pinging a server or making a call to a database and verifying that a response was received. There should be at least one component test for each service the code relies on.
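As a minimal sketch (using Python's requests library), a component test that checks whether the GitHub API is up might look like this; the timeout value is just an illustrative choice:

```python
import requests


def test_github_api_is_running():
    """Component test: verify the service the code depends on is reachable."""
    response = requests.get("https://api.github.com", timeout=5)
    assert response.status_code == 200
```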
Services Tests: These are tests that check the web services that are used in our code. In today's applications, web services are usually accessed through API requests. For example, if we have an API with POST, GET, PUT, and DELETE requests, we will want to have automated tests that check each request type. We will want to have both "happy path" tests that check that a valid request returns an appropriate response, and also negative tests that verify that an invalid request returns an appropriate error code.
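Here is a hedged sketch of one happy-path test and one negative test; the base URL, the /users endpoint, the payloads, and the expected status codes are all assumptions for illustration, not a real API:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_post_valid_user_returns_201():
    """Happy path: a valid POST request returns a success response."""
    response = requests.post(
        f"{BASE_URL}/users",
        json={"name": "Prunella", "email": "prunella@example.com"},
    )
    assert response.status_code == 201


def test_post_user_missing_email_returns_400():
    """Negative test: an invalid POST request returns an appropriate error code."""
    response = requests.post(f"{BASE_URL}/users", json={"name": "Prunella"})
    assert response.status_code == 400
```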
User Interface (UI) Tests: UI tests verify that end-user activities work correctly. These are the tests that will fill out text fields and click on buttons. As a general rule, anything that can be tested with a unit, component, or service test should be tested by those methods instead. UI tests should focus solely on the user interface.
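As a sketch of what that looks like with Selenium WebDriver in Python (the URL and element IDs are made up for illustration):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_user_can_log_in():
    """UI test: fill out the login form, click the button, and check the result."""
    driver = webdriver.Chrome()
    try:
        driver.get("https://www.example.com/login")  # hypothetical login page
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "login-button").click()
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```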
Visual Tests: Visual tests verify that elements are actually appearing on the screen. This is slightly different from UI testing, because UI tests focus on the functionality of the user interface rather than its appearance. Examples of visual tests would be verifying that a button's label is rendered correctly and verifying that the correct product image appears on the screen.
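A dedicated visual testing tool does this far better, but a very simplistic sketch of the idea compares a saved screenshot against a baseline image with Pillow; the file paths are placeholders:

```python
from PIL import Image, ImageChops


def images_match(baseline_path, screenshot_path):
    """Return True when the screenshot is pixel-identical to the baseline image."""
    baseline = Image.open(baseline_path).convert("RGB")
    screenshot = Image.open(screenshot_path).convert("RGB")
    # getbbox() returns None when the difference image is completely black,
    # meaning the two images are identical.
    return ImageChops.difference(baseline, screenshot).getbbox() is None


def test_product_image_matches_baseline():
    # A UI test would capture screenshots/product_page.png before this check runs.
    assert images_match("baselines/product_page.png", "screenshots/product_page.png")
```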
Security Tests: These are tests that verify that security rules are being respected. These tests can overlap with services tests, but should still be considered separately. For example, a security test could check to make sure that an authorization token cannot be generated with an invalid username and password combination. Another security test would be to make a GET request with an authorization token for a user who should not have access to that resource, and verify that a 403 response is returned.
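Sketches of those two checks with Python's requests library, against a hypothetical API (the endpoints, payload, and token are illustrative assumptions):

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_invalid_credentials_cannot_get_token():
    """An invalid username/password combination should not yield an auth token."""
    response = requests.post(
        f"{BASE_URL}/token",
        json={"username": "baduser", "password": "wrongpassword"},
    )
    assert response.status_code == 401


def test_unauthorized_user_gets_403():
    """A token for a user without access to the resource should be rejected."""
    headers = {"Authorization": "Bearer LIMITED_USER_TOKEN"}  # placeholder token
    response = requests.get(f"{BASE_URL}/admin/reports", headers=headers)
    assert response.status_code == 403
```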
Performance Tests: Automated performance tests can verify that request response times happen within an appropriate time period. For example, if your company has decided that GET requests should never take longer than two seconds, test requests can be set to return a failure state if the request takes longer than that time. Web page load times can also be measured with performance tests.
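A simple sketch of the two-second rule, using the elapsed timer that requests records for each response (the endpoint is a placeholder):

```python
import requests


def test_get_request_completes_within_two_seconds():
    response = requests.get("https://api.example.com/products")  # hypothetical endpoint
    assert response.status_code == 200
    # response.elapsed measures the time from sending the request
    # until the response headers arrive.
    assert response.elapsed.total_seconds() < 2.0
```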
Accessibility Tests: Automated accessibility tests can check a variety of things. When combined with UI tests, they can verify that images have text descriptions for the visually impaired. Visual tests can be used to verify that the text on the screen is the correct size.
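For instance, an accessibility check layered onto a Selenium UI test might assert that every image on a page has alt text; the URL is a placeholder:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_all_images_have_alt_text():
    """Accessibility check: every image should have a text description."""
    driver = webdriver.Chrome()
    try:
        driver.get("https://www.example.com/products")  # hypothetical page
        images = driver.find_elements(By.TAG_NAME, "img")
        missing_alt = [img.get_attribute("src") for img in images if not img.get_attribute("alt")]
        assert not missing_alt, f"Images missing alt text: {missing_alt}"
    finally:
        driver.quit()
```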
You may have noticed that the above descriptions often overlap each other. For example, security tests might be run through API testing, and visual tests might be run through UI testing. What is important here is that each area is tested thoroughly, efficiently, and accurately. If there is a spoke missing from the wheel, you will never be comfortable relying on your automation when you are doing continuous deployment.
Next week, I'll discuss how we can fit all these tests into a real-world application testing scenario!
Kristin,
The reason the base of the pyramid (by natural geometry) for Unit Tests is so wide is due to the granularity of the tests themselves. There are more Unit Tests because of a tighter one-to-one relationship with the code. So if you build Unit Tests to exercise single or small groupings of lines of code, there should naturally be more of them as the code grows. The upper layers produce fewer tests (but at a coarser level of focus) because they test larger combined sections of the code. That is what a lot of people miss when viewing the Pyramid. So a finer granularity of test means more tests will be created, and conversely, the less granular the test, the fewer that need to be created.
But I do like the idea of your wheel concept; it causes us to consider and weigh all forms of testing equally. It helps us keep focus on what we need to do and keep the other types of tests on the radar.
Regards,
Jim Hazen
Hi Jim- Yes, I totally understand the reason for the pyramid shape, and I certainly don't mean to suggest that it's no longer a valid concept. I just believe that testers may focus solely on the pyramid and miss other important tests, or focus too much on the quantity rather than the quality of the tests.
Kristin,
I was just trying to help clarify things. As you have said, and as other people's posts on this have stated, what you present/propose is a good concept and changes the focus to all of the things (types of testing) we need to consider. I've met Mike Cohn (he is based in the same area as I am), and he stated that the reason for building a foundation on Unit Tests is to improve the stability and reliability of the software under test before it goes down the line. He basically proposed a Shift-Left mindset many years ago.
My conference presentation called "Demystifying the Test Automation Pyramid" (STPCon 2016) talked about things similar to what you describe, and I even presented a different view of concentric circles for the layers/levels of the Pyramid. But you did me one better by showing the pie slices of the different types of tests we need to perform and potentially automate. The pyramid only considers functional & regression type tests; you include things such as Security and Performance tests that are part of the overall testing effort for a project. I totally agree.
Jim
Hi Jim- I'd be interested in seeing the diagram you came up with! Can you share a link to your presentation or to the diagram?
Hi Kristin!
Nice idea. As you said, it's important not to forget different types of tests apart from the "pyramid ones".
But I have one concern with "automation" and "visual/accessibility" tests. It is complicated to automate these kinds of tests. Both types of tests should be done by manual testers.
Hi Arquillos- I totally agree that we will always need some level of manual testing. But automated visual testing is easier than you might think! Applitools integrates with all types of Selenium tests, and it's possible to focus on just one area of a screen (such as an image) for validation. You can even set the level of matching to less than an exact match, so if the pixels of an image match at a certain percentage level it is classified as a match.
Hi, very interesting article. Thanks for the reminder about the different types of testing. Just visit my blog https://softwaretestingboard.com/blogs/#axzz5bFeem8pn for more queries.
Nice! At my company, we like to follow the pyramid but also consider a software quality wheel that we created as a visualization that helps us to take into account all of the different things to test. We wrote about it here: https://abstracta.us/blog/software-testing/the-software-testing-wheel/
Wow, Kalei- it's so cool that your company thought to use a wheel as well! The blog post about it is very interesting. What a great way to think about the quality of software!
Wow Kristin, it's great that you are rethinking the test automation pyramid. However, I am not sure about not allocating a size to each "pie" slice in the circle. I reckon we should assign the size based upon the context and test strategy; otherwise we can easily go heavy on fragile GUI tests unknowingly. Awaiting your next in the series to see how it fits into a real-world application testing scenario!
Hi Ashish- I'm glad you liked my post! I see your point about making sure that we don't rely too heavily on GUI tests by assigning a size to each section of the wheel. But I think that every application will be different. For example, e-commerce applications might need more visual testing because of the number of product pictures, whereas a messaging application might need fewer GUI tests than an e-commerce app, but more component tests because of all the related services it uses. I think that the test pyramid still serves a function for reminding us to shift left in our testing, whereas the wheel helps us remember what to test.
I do really like your idea! The pyramid is usually read from a functional perspective, but I believe it should also be applied to non-functional testing. I think that where your wheel shows importance, the pyramid shows how to divide the effort (not the number of tests needed...). So in a way these are two different things to me. But it definitely makes me think about how the pyramid could be improved to reflect non-functional aspects more explicitly.
Yes, I totally agree that the pyramid and the wheel are looking at two different things. The pyramid is thinking about "how" to automate, whereas the wheel is thinking about "what" to test. Also interesting is the wheel that another reader shared above: https://abstracta.us/blog/software-testing/the-software-testing-wheel/. This wheel is more focused on what constitutes quality.
Your testing wheel is an interesting idea - it nicely represents the types of tests you can perform while testing a product. Nevertheless, I have some concerns about the imperatives you use in your article. "Each of these test types can be considered as spokes in a wheel; none is more important than another, and they are all necessary." What about a company that is just starting its coding journey and whose least concern is security? Is testing in this area really necessary? You say, "Each method or function should have at least one unit test associated with it." What about decoupling of production code and unit tests? If you stick to this rule you have a really strong, "concrete"-like connection. I think that when formulating such statements we should really describe the context we are in.
Hi JPTestIT, thanks for your comments! You are right that the spokes in the wheel will not always be necessary. For example, a couple of years ago my team was working on APIs that other teams would be using; we didn't have any UI. Therefore, we didn't need to do UI testing. But for finished apps, all of these test types are necessary.
I'm not sure what you mean by "decoupling of production code and unit tests". Can you explain that for me?
In a world of Agile and DevOps, with faster time to market, how do you justify the wheel, especially connecting the dots to a sprint?
Hi Akshaj- this is an interesting question. In a world of Agile and DevOps, teams really HAVE to adopt automated testing. Unless a company wants to employ several dozen manual testers who run through the same regression suites every sprint, setting up automated tests is crucial. It could be that a product owner or team manager doesn't think there is time for writing some types of automation, like security or performance tests. If that's the case, then testers can work on shifting some of their existing UI tests to Services tests so they will run faster and be far less flaky. Then the time saved by not having to rerun flaky UI tests can be used to set up security and performance tests. As for connecting this work to a sprint, what my team does is create automation stories that go on the backlog. Whenever we have free time (and this often happens at the beginning of a sprint while we are waiting for new features from the developers), we take an automation story and pull it into the sprint. I hope this helps!