
Saturday, October 28, 2017

Think Like a Tester

Beginning with this week's post, my blog will be taking on a new focus!

I have renamed it from Fearless Automation to Think Like a Tester (for the moment, the URL will remain the same). There were three recent events that made me decide to shift my focus:
  • I attended a large international computing conference where there was not a single workshop or presentation focused on software testing. 
  • At this conference, I met computer science students who asked me if there were any college classes to learn to be a tester.
  • I interviewed a QA engineer who was able to create a great automated testing solution for a website, but could not think of simple manual tests for the site. 
All of these things made me realize the following:
  • There aren't enough people talking about testing software
  • There aren't enough resources to learn about testing software
  • The testing community has been focused for so long on how to test software that we haven't been thinking about what to test and why we are testing it 
Testing is truly a craft, and one that requires a different skill set from software development:
  • Rather than thinking of ways to make software work, testers think of ways to make software break
  • Rather than designing things to go right, testers think of all the ways that things can go wrong
  • Rather than focusing deeply on one feature, testers focus on how all those features integrate
  • Rather than solving a problem and moving on, testers come up with ways to continually verify that features are working 
In the weeks and months to come, I will be getting back to basics and discussing all areas of software testing, manual and automated, that require thinking like a tester. Hopefully testing newbies and seasoned testers alike will find this knowledge helpful!

Thursday, October 19, 2017

API Testing vs. UI Testing

Recently someone asked me “If you have API testing, you don’t need UI testing, right?”  I said “No, because you need to have tests that make sure that elements such as buttons are present and working correctly.”  

Then he asked, “Then if you have UI testing, you don’t need API testing?”  I said, “No, because UI tests tend to be slow and flaky.  You can get more tested in less time with API testing.”

Inspired by that conversation, I thought I’d share my thoughts on when you should do API testing and when you should do UI testing.

First, test as much as you can with API testing.  Take a look at all of your possible endpoints and create a suite of tests for each.  Be sure to test both the happy path and the possible error paths.  On every test, assert that you are getting the correct response code.
  • For GET requests, assert that you receive the correct results.  If there are filtering parameters you can pass in with the request, be sure to test scenarios with and without those parameters.
  • For POST, PUT, and PATCH requests, test that the changes you made have been written to the database; you can do this with a GET.  Be sure to test scenarios where you are entering invalid data, and assert that any message returned in the body of the response is the correct message.
  • For DELETE requests, test that the resource has been deleted from the database; this can also be verified with a GET.
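
To make that concrete, here is a minimal sketch of what a few of those tests might look like in Python, using pytest and the requests library.  The /addresses endpoint, the field names, the expected error message, and the base URL are all hypothetical stand-ins for whatever your API actually exposes.

    # Minimal API test sketch using pytest and requests.
    # The base URL, endpoint, fields, and messages are hypothetical examples.
    import requests

    BASE_URL = "http://localhost:8080/api"

    def test_get_addresses_returns_200():
        response = requests.get(f"{BASE_URL}/addresses")
        assert response.status_code == 200

    def test_post_address_is_written_to_database():
        new_address = {"street": "123 Main St", "city": "Boston"}
        post_response = requests.post(f"{BASE_URL}/addresses", json=new_address)
        assert post_response.status_code == 201
        # Verify the write with a follow-up GET
        address_id = post_response.json()["id"]
        get_response = requests.get(f"{BASE_URL}/addresses/{address_id}")
        assert get_response.json()["street"] == "123 Main St"

    def test_post_invalid_address_returns_error_message():
        response = requests.post(f"{BASE_URL}/addresses", json={"street": ""})
        assert response.status_code == 400
        assert response.json()["message"] == "Street is required"

    def test_delete_address_removes_resource():
        new_address = {"street": "456 Oak Ave", "city": "Salem"}
        address_id = requests.post(f"{BASE_URL}/addresses", json=new_address).json()["id"]
        assert requests.delete(f"{BASE_URL}/addresses/{address_id}").status_code == 204
        # Verify the delete with a GET; the resource should be gone
        assert requests.get(f"{BASE_URL}/addresses/{address_id}").status_code == 404

Because tests like these skip the browser entirely, they tend to be much faster and more stable than their UI equivalents.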

Once you have tested all the scenarios you can think of with API testing, it’s time to think about UI testing.  First, consider your most common user story.  For example, if you are testing an address book, the most likely scenario for a user would be adding a new address.  You could create a UI test that would navigate to the address book, click a button to add a new address, enter the address, save it, and then search the address book to verify that it has been saved.
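
A rough sketch of that test, using Selenium WebDriver in Python, might look like the following.  The URL and every element ID here are hypothetical placeholders; your page will have its own locators.

    # Rough UI test sketch using Selenium WebDriver.
    # The URL and all element IDs are hypothetical placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_add_new_address():
        driver = webdriver.Chrome()
        try:
            driver.get("http://localhost:8080/address-book")
            # Click the button to add a new address
            driver.find_element(By.ID, "add-address-button").click()
            # Fill in and save the new address
            driver.find_element(By.ID, "street").send_keys("123 Main St")
            driver.find_element(By.ID, "city").send_keys("Boston")
            driver.find_element(By.ID, "save-button").click()
            # Search the address book to verify the address was saved
            driver.find_element(By.ID, "search-box").send_keys("123 Main St")
            driver.find_element(By.ID, "search-button").click()
            results = driver.find_element(By.ID, "search-results").text
            assert "123 Main St" in results
        finally:
            driver.quit()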

Now that your most common user story is covered, you have probably touched a number of the elements that you would want to verify in your UI.  Next, think about other elements on the page that you might want to check.  For instance, there may be a cancel button on the page where you are adding a new address.  A cancel button cannot be tested with an API test; therefore, you should add a UI test for it.  Another example would be an error message that appears to the user; you may want to add a test where you try to enter an invalid address, and verify that the correct error message is displayed.
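
Continuing the hypothetical address-book example above, that error-message check might look something like this sketch (again, all locators and the expected message text are assumptions):

    # Sketch of a UI test for the error message; locators are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_invalid_address_shows_error_message():
        driver = webdriver.Chrome()
        try:
            driver.get("http://localhost:8080/address-book")
            driver.find_element(By.ID, "add-address-button").click()
            # Leave the street blank so the address is invalid
            driver.find_element(By.ID, "city").send_keys("Boston")
            driver.find_element(By.ID, "save-button").click()
            # Assert that the correct error message is displayed
            error_text = driver.find_element(By.ID, "error-message").text
            assert error_text == "Street is required"
        finally:
            driver.quit()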

Once you have tests that verify all of the important elements on your page, you can stop writing UI tests.  It’s not necessary to create lots of scenarios where each field is validated for various incorrect entries, because a) you already created those scenarios in your API tests, and b) you already have one UI test that verifies that the error message is displayed.  

If you already have an automated suite of UI tests, it may be a good idea to take a look at your tests and see which scenarios could be covered by API testing.  Converting your UI tests to API tests will make your regression suites faster and more reliable!

Thursday, October 12, 2017

What the Sinking of the Vasa Can Teach Us About Software Development

In Stockholm, Sweden, there is a museum that displays the ship called the Vasa, which sank on its maiden voyage in 1628.  I’ve never been there, but I’ve heard that the museum is fascinating for both architectural and historical reasons.  The Vasa took three years to build, and was supposed to be the flagship for Sweden’s growing navy.  The ship was built with 72 guns on two decks, and was adorned with elaborately painted carvings to reflect its majesty. 

On the day of its maiden voyage, in full view of thousands of people, including ambassadors from other countries, the ship sailed only 1400 yards before tilting, capsizing, and sinking.  It was a calm day, but a simple gust of wind caused the ship to list too much to one side, and water began pouring in through the gunports.  The primary reason for the loss of the Vasa was the simple fact that the ship’s center of gravity was too high.  How did this crucial error happen?  The answers can be helpful to us nearly 400 years later!

Make sure you have solid, updated plans

The shipwright in charge of building the Vasa became seriously ill (and eventually died) in the beginning stages of the project.  His assistant was in charge of completing the project, which had changed significantly since its inception.  After the initial plans were drawn, the number of guns it was expected to carry doubled, and the length of the ship was increased from 111 feet to 135 feet.  Yet the shipwright’s assistant never created a new set of plans to account for these changes.
Our lesson today: Working in an agile environment means the specifications for our software projects will frequently change.  We need to be mindful of this, and remember to re-evaluate our plans and communicate them clearly to the entire team. 

Communicate with other teams

Archeologists who have studied the Vasa and the remains of the wreckage discovered that one team of builders was using Swedish rulers, which had the modern-day 12 inches in a foot, while another team was using Amsterdam rulers, which only had 11 inches in a foot.  This resulted in the ship’s mass being distributed unevenly, compounding the problem of the high center of gravity.
Our lesson today: Most of us don’t enjoy having meetings and writing documentation, but they can be crucial in making sure that we are all on the same page.  We don’t want to waste time accidentally duplicating another team’s work, or using the wrong version of our tools.

Pay attention to your test results

Shortly before the Vasa’s first and final voyage, the captain supervising construction of the ship arranged for a demonstration of the ship’s stability.  He had thirty men run back and forth across the deck.  He stopped the test after the men had crossed the deck just three times, because the ship was rocking so much he feared it would capsize!  Rather than conduct further tests, plans continued for the launch. 
Our lesson today: Test results that don’t show us what we want to see can be disheartening, but to see a software release launch and fail feels even worse!  It’s important that testers keep digging when we see results that are different from what we expected, and it’s important that we listen to what our testers are telling us, even when it’s bad news. 

Learning about the Vasa made me marvel at just how much engineering principles have remained the same over hundreds of years.  Even though our projects are built from code rather than timber, the fundamental principles of having solid plans, communicating with everyone in the project, and getting valuable feedback through testing are still crucial to creating a great product. 

Friday, October 6, 2017

What "Passengers" Can Teach Us About Quality Assurance

Last weekend, I watched the movie Passengers. The basic plot of the movie is that two passengers in hibernation on a flight from Earth to another planet are awakened ninety years too early. As a QA engineer, I came away from the movie thinking about two valuable lessons for developing and testing software.

Lesson One: “And Yet It Did”
In Passengers, when Jim’s hibernation pod fails, he tells the ship’s computer, the android bartender, and even another human what has happened. The response of all three is “Well, that’s impossible. The hibernation pods never fail.” Jim’s response is “Then how do you explain the fact that I’m here?” Many times in my testing career I have been told by developers that the behavior I am observing in our software is impossible. And I always respond with “And yet, here is the behavior that I’m seeing”. In one particular instance at a previous company, I was testing that information entered into the third-party software we integrated with was making it into our software. This testing was going well, until one entry didn’t travel to our software. I told the developer about it. He said, “That’s impossible. I’ve tested this, and you’ve been testing this for days.” I said, “Yes, and yet, this record wasn’t saved.” He said, “Look at the logs; you can see that the information was sent.” I said, “Yes, and yet, it wasn’t saved.” He said, “I entered more information just now, and you can see that it was saved.” I said, “Yes, and yet, the information I entered was not saved.” After much investigation, it was finally discovered that there was a bug where the database was not saving any record after the 199th record. Because I was testing in a different table than he was, and he didn’t have as many records, he didn’t see the error. The moral of the story: Even if something is impossible, it might still happen.
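
In the spirit of that story, here is a hedged sketch of the kind of boundary test that would have caught the bug: write far more records than anyone believes necessary, then verify that every single one was actually persisted.  The endpoint and payload are hypothetical.

    # Hypothetical sketch: save more records than anyone thinks is possible
    # to need, then verify each one actually made it to the database.
    import requests

    BASE_URL = "http://localhost:8080/api"

    def test_every_record_is_saved_even_past_199():
        created_ids = []
        for i in range(250):  # well past the "impossible" 199-record limit
            response = requests.post(f"{BASE_URL}/records", json={"name": f"record-{i}"})
            assert response.status_code == 201, f"record {i} was not accepted"
            created_ids.append(response.json()["id"])
        # "And yet it did": verify each record can actually be read back
        for i, record_id in enumerate(created_ids):
            get_response = requests.get(f"{BASE_URL}/records/{record_id}")
            assert get_response.status_code == 200, f"record {i} was not saved"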

Lesson Two: “But What If It Did?”
One of the scariest parts of Passengers for me was that there was no way for Jim to reboot his hibernation pod and return to hibernation. Also, there were no spare pods. Even worse, there was no way for him to wake up the captain or any human who could help him. I found myself yelling at the screen, “How is this even possible? Why didn’t they put in contingency plans?” The answer, of course, is that the designers of the system were SO SURE that nothing could ever go wrong with the pods. But something did go wrong, and due to their false confidence there was no way to make it right. QA engineers are always thinking about all the possible ways that software can fail. I have often heard the argument “But no sane user would do that.” And I always respond with “But what if they did?” While we may not have time to account for every possible way that our software might fail, we should create appropriate responses for as many ways as we can, and log the rest for future fixes.

I like to think that somewhere on Earth in the Passengers universe, a QA engineer is saying to her product owners at the spaceship design company, “See, I TOLD you the pods could fail!”
