Anyone who has ever written an automated test has experienced test flakiness. There are many reasons for flaky tests, including:
- Environmental issues, such as the application being unavailable
- Test data issues, where an expected value has been changed
- UI issues, such as a popup window taking too long to appear
All of these reasons are valid explanations for flaky tests. However, they are not excuses! It should be your mission to have all of your automated tests pass every single day, except of course when an actual bug is present.
This is important not just because you want your tests to be reliable; it's important because when you have flaky tests, trust in you and in your team is eroded. Here's why:
Flaky tests send the message that you don't care
Let's say you are the sole automation engineer on a team, and you have a bunch of flaky tests. It's your job to write test automation that verifies your product is working correctly, and because your tests are flaky, your automation isn't doing that. Your team may assume this is because you don't care whether your job is done properly.
Flaky tests make your team suspect your competence
An even worse situation than the previous example is one where your team simply assumes that you haven't fixed the flaky tests because you don't know how. This further erodes their trust in you, which may spill over into other testing. If you find a bug while doing exploratory testing, your colleagues might not believe it's a real bug, because they think you are technically incompetent.
Flaky tests waste everyone's time
If you are part of a large company where each team contributes one part of an application, other teams will rely on your automation to determine whether the code they committed works with your team's code. If your tests are failing for no reason, people on other teams will need to stop what they are doing and troubleshoot your tests. They won't be pleased if they discover that there's nothing wrong with the app and your tests are just being flaky.
Flaky tests breed distrust between teams
If your team has a bunch of flaky tests that fail for no good reason, and you aren't actively taking steps to fix them, other teams will ignore your tests, and may also doubt whether your team can be relied upon. In a situation like this, if Team B commits code and sees that Team A has failing tests, they may do nothing about it, and may not even ask Team A about the failures. If there are tests that fail because there are real issues, your teams might not discover them until days later.
Flaky tests send a bad message to your company's leadership
There's nothing worse for a test team than to have test automation where only 80% (or less) of the tests pass on a daily basis. This sends a message to management that either test automation is unreliable, or you are unreliable!
So, what can we do about flaky tests? I'd like to recommend these steps:
1. Make a commitment to having 100% of your tests pass every day. The only time a test should fail is if a legitimate bug is present. Some might argue that this is an impossible dream, but it is one to strive for. There is no such thing as perfect software, or perfect tests, but we can work as hard as we can to get as close as we can to that perfection.
2. Set up alerts that notify you of test failures. Having tests that detect problems in your software doesn't help if no one is alerted when test failures happen. Set up an alert system that will notify you via email or chat when a test is failing. Also, make sure that you test your alert. Don't assume that because the alert is in place it is automatically working. Make a change that will cause a test to fail and check to see if you got the notification.
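As a sketch, here is one way such an alert could be wired up with pytest: a `conftest.py` hook that posts every failure to a chat webhook. The webhook URL and payload shape are hypothetical stand-ins; adapt them to your chat tool's incoming-webhook API.

```python
# Sketch of a failure alert in a pytest conftest.py. The webhook URL
# and payload format are hypothetical -- substitute your chat tool's.
import json
import urllib.request

WEBHOOK_URL = "https://chat.example.com/hooks/test-alerts"  # hypothetical

def build_failure_alert(test_name, error_text):
    """Format a short, readable alert for a single test failure."""
    return {"text": f"FAILED: {test_name}\n{error_text.strip()[:500]}"}

def send_alert(payload, url=WEBHOOK_URL):
    """POST the alert as JSON; a broken alert channel must not crash the run."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass  # never let the alerting itself break the test run

def pytest_runtest_logreport(report):
    """pytest hook: called per test phase; alert only on real failures."""
    if report.when == "call" and report.failed:
        send_alert(build_failure_alert(report.nodeid, report.longreprtext))
```

Note the advice above about testing the alert itself: deliberately break one test and confirm the notification arrives before relying on this.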
3. Investigate every test failure and find out why it failed. If the failure wasn't due to a legitimate bug, what caused the failure? Will the test pass if you run it again, or does it fail every time? Will the test pass if you run it manually? Is your test data correct? Are there problems with the test environment?
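The "does it fail every time?" question can be answered mechanically. Here is a rough sketch of a flakiness probe that reruns a test command and reports its pass rate (the pytest invocation in the comment is a hypothetical example):

```python
# Sketch of a quick flakiness probe: run one test command several
# times and report the fraction of passing runs.
import subprocess
import sys

def pass_rate(command, runs=10):
    """Run `command` repeatedly; return the fraction of passing runs."""
    passes = 0
    for _ in range(runs):
        result = subprocess.run(command, capture_output=True)
        if result.returncode == 0:
            passes += 1
    return passes / runs

# Real usage would look something like (hypothetical test path):
#   pass_rate(["pytest", "tests/test_checkout.py::test_popup"], runs=20)
# Demo with a trivially stable command so the sketch is self-contained:
rate = pass_rate([sys.executable, "-c", "pass"], runs=3)
```

A pass rate of 1.0 over many runs suggests an environmental or data cause rather than a timing issue in the test itself; anything in between is flakiness worth isolating.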
4. Remove the flaky tests. Some might argue that this is a bad idea because you are losing test coverage, and the test passes sometimes. But this doesn't matter, because when people see that the test is flaky they won't trust it anyway. It's better to remove the flaky tests altogether so that you demonstrate that you have a 100% passing rate, and others will begin to trust your tests.
An alternative would be to set the flaky tests to be skipped, but this might also erode trust. People might see all the skipped tests and see them as a sign that you don't write good test automation. Furthermore, you might forget to fix the skipped tests.
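If you do choose to skip rather than remove, make the skip loud, with a reason and a tracking reference, so it can't be quietly forgotten. A minimal sketch using Python's built-in unittest (the ticket ID is hypothetical):

```python
import unittest

class CheckoutTests(unittest.TestCase):
    # Hypothetical ticket ID: every skipped test should carry a visible
    # reason and a tracking reference so it isn't quietly forgotten.
    @unittest.skip("Flaky: popup timing, tracked in TEST-123")
    def test_popup_appears(self):
        self.fail("flaky check, disabled until TEST-123 is fixed")

    def test_add_to_cart(self):
        self.assertTrue(True)  # stable tests keep running as normal
```

The skip reason shows up in the test report, which at least tells readers the flakiness is known and owned rather than ignored.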
5. Fix all the flaky tests you can. How you fix the flaky tests will depend on why they are flaky. If you have tests that are flaky because someone keeps changing your test data, change your tests so that the test data is set up in the test itself. If you have tests that are flaky because sometimes your test assets aren't deleted at the end of the test, do a data cleanup both before and after the test.
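The "set up the data in the test itself, and clean up both before and after" pattern might look like this sketch; the order store here is a fake stand-in for your real data layer, and the IDs are hypothetical:

```python
# Sketch of self-contained test data: the test creates the data it
# needs and cleans up both before and after, so leftovers from a
# crashed earlier run can't make it flaky.
import unittest

class FakeOrderStore:
    """Stand-in for the real data layer, just to make the sketch runnable."""
    def __init__(self):
        self._orders = {}
    def create(self, order_id, total):
        self._orders[order_id] = total
    def delete(self, order_id):
        self._orders.pop(order_id, None)  # no error if already gone
    def get(self, order_id):
        return self._orders.get(order_id)

store = FakeOrderStore()
TEST_ORDER = "order-automation-001"  # hypothetical, clearly test-owned ID

class OrderTests(unittest.TestCase):
    def setUp(self):
        store.delete(TEST_ORDER)        # clean up BEFORE: remove leftovers
        store.create(TEST_ORDER, 42.0)  # create exactly the data we expect

    def tearDown(self):
        store.delete(TEST_ORDER)        # clean up AFTER as well

    def test_order_total(self):
        self.assertEqual(store.get(TEST_ORDER), 42.0)
```

Because the test owns its data end to end, nobody else changing shared test records can break it.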
6. Ask for help. If your tests are flaky because the environment where they are running is unreliable, talk to the team that's responsible for maintaining the environment. See if there's something they can do to solve the problem. If they are unresponsive, find out if other teams are experiencing the issue, and lobby together to make a change.
7. Test your functionality in a different way. If your flaky test is failing because some element on the page isn't loading in time, don't try to solve the issue by making your waits longer. See if you can come up with a different way to test the feature. For example, you might be able to switch that test to an API test. Or you might be able to verify that a record was added in the database instead of going through the UI. Or you might be able to verify the data on a different page, instead of the one with the slow element.
Some might say that not testing the UI on that problematic page is dangerous. But having a flaky test on this page is even more dangerous, because people will just ignore the test. It would be better to stick with an automated test that works, and do an occasional manual test of that page.
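Verifying the record at the database layer instead of through the slow page might look like this sketch, using an in-memory SQLite database as a stand-in for the real application database (the table, column names, and sign-up action are all hypothetical):

```python
# Sketch of verifying a result at the database layer instead of
# through a slow UI page. An in-memory SQLite table stands in for
# the real application database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, status TEXT)")

def sign_up(email):
    """Stand-in for the action under test (normally driven via API or UI)."""
    conn.execute("INSERT INTO users VALUES (?, 'active')", (email,))
    conn.commit()

def assert_user_created(email):
    """Instead of waiting on a slow confirmation page, check the record."""
    row = conn.execute(
        "SELECT status FROM users WHERE email = ?", (email,)
    ).fetchone()
    assert row is not None, f"no user record for {email}"
    assert row[0] == "active"

sign_up("flaky-test@example.com")
assert_user_created("flaky-test@example.com")
```

The database check is deterministic where the page load is not, which is exactly the trade being argued for here.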
Quality Automation is Our Responsibility
We've all been in situations where we have been dismissed as irrelevant or incompetent because of the reputation of a few bad testers. Let's create a culture of excellence for testers everywhere by making sure that EVERY test we run is reliable and provides value!