Saturday, May 30, 2020

Book Review: Perfect Software and Other Illusions About Testing

"Perfect Software and Other Illusions About Testing", by Gerald Weinberg, is the best book on testing I have ever read.  It is a must-read for anyone who works with software: CEOs, CTOs, scrum masters, team leads, developers, product owners, business analysts, and software testers.

Before I get into why this book is so great, I'll first acquaint you with the author.  Gerald "Jerry" Weinberg (1933-2018) was involved in the creation of software for over fifty years.  Early in his career he worked on NASA's Project Mercury, the project that created the spacecraft that allowed a human to orbit the earth.  For decades he consulted with companies about building quality software, and over those years he gained a great deal of wisdom about software testing.  "Perfect Software", which was published in 2008, seems to me to be the culmination of his years of experience.


The book is divided into several chapters, each of which looks at a particular aspect of software testing. Many examples are given from Jerry's consulting experience, and each chapter closes with a summary and a list of common mistakes that companies make.  Rather than summarizing the lessons he imparts, I think it would be best to include Jerry's own words here.  Here are some of my favorite quotes from the book:

"Before you even begin to test, ask yourself: What questions do I have about this product's risks?  Will testing help answer these questions?"

"There are an infinite number of possible tests...Since we can't test everything, any set of real tests is some kind of sample- a portion, piece, or segment that is in some way representative of a whole set of possible tests."

"Knowing about the structure of the software you're testing can help you to identify special cases, subtle features, and important ranges to try- all of which can help narrow the inference gap between what the software can do and what it will do during actual use."

"Testing gathers information about a product; it does not fix things it finds that are wrong."

"If you're going to ignore information or go ahead with predetermined plans despite what the tests turn up, don't bother testing."

"If you blame messengers for bringing news you don't want to hear, you'll be rewarded by not hearing the news you should hear."

"Quality is a product of the entire development process.  Poor testing can lead to poor quality, but good testing won't lead to good quality unless all other parts of the process are in place and performed properly."

"Testing starts at project conception, or before. If you don't know this, you don't understand testing at all."

"Without a process that includes regular technical reviews, no project will rise above mediocrity, no matter how good its machine-testing process."

"No developer is good enough to consistently do it alone, and do it right."

"Data are meaningless until someone determines their meaning.  Different people give different meanings to the same data.  Gather data, then sit down and ponder at least three possible meanings."

"When someone says, 'The response should be very fast', what does that mean, exactly?  What meanings do 'should', 'very', and 'fast' give to the stated information?"

"Numbers can be useful, but only if they're validated by personal observation and set in context by a story about them."

"Garbage arranged in a spreadsheet is still garbage."

Jerry uses many great hypothetical scenarios to illustrate his points, and he also uses real-world examples from his years of consulting.  Here are some of my favorites:

  • The tester who didn't log a bug he found because it wasn't in "his" component
  • The manager who thought that the project was ready to ship because they ran 600,000 test cases and "nothing crashed the system"
  • The team who thought their biggest problem was their bug-tracking system, because the system couldn't handle their 140,000 open bugs
  • The team who took so long to triage bugs that they couldn't make a decision on any of them, resulting in 129 undiscussed and unfixed bugs
  • The tester who assumed that her new automated test tool was working correctly because all the tests displayed in green at the end
  • The developer-tester team who were gaming the bug bounty system by having the developer add bugs to the code, the tester find the bugs quickly, and the developer fix them just as quickly, resulting in rewards for both
  • The VP of Development who wanted a really big written test plan so he could have something big to slam down on a desk to "prove" that they had tested well

If you would like to think about what role testing plays in your software development project, what constitutes a good test, how to plan testing for a project, or how to interpret test data in order to make management decisions, then "Perfect Software" is the book for you.  I plan to re-read this book every year to make sure that I have fully retained all the lessons it offers.

Saturday, May 23, 2020

Rarely Used HTTP Methods

A couple of months ago, one of the developers I work with asked me to test a bug fix he'd done.  In order to test it, I'd need to make an HTTP request with the OPTIONS method.  I'd never heard of the OPTIONS method, and it got me thinking: what other HTTP methods did I not know about?  In this post, I'll talk about four rarely used methods and how you might use them in your testing.

OPTIONS:
This method returns whatever methods are allowed for a particular endpoint.  For example, if you had a URL called http://cats.com/cat, and you could use it to get a list of cats or add a cat, the methods that the OPTIONS request would return would be GET and POST.

OPTIONS demo:
Let's use the Restful-Booker API to try out the OPTIONS method.  Assuming you have Postman installed, we'll create a new GET request that calls this URL: https://restful-booker.herokuapp.com/booking. When you run this request, you'll see a list of all the available hotel-bookings for the app in the response body. Now let's change the method from GET to OPTIONS. When you run this request, you'll see GET, HEAD, and POST in the response body. These are the three methods that are available for this endpoint.
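
If you'd rather script this check than use Postman, here's a minimal Node.js sketch (assuming Node 18 or later, which includes a global fetch):

// Send an OPTIONS request and print the methods the endpoint allows
fetch('https://restful-booker.herokuapp.com/booking', { method: 'OPTIONS' })
    .then(response => response.text())
    .then(body => console.log(body))   // expect something like "GET,HEAD,POST"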

Why would you use the OPTIONS method?
If you are testing an API, this is a great way to find out if an endpoint supports any methods that you don't know about. This can reveal more features for you to test, or it could potentially reveal a security hole. For example, maybe your API shouldn't really have a DELETE method, but someone implemented it by mistake.

HEAD:
This method returns only the headers of the response to a GET request. It's used if you want to check the response headers without putting pressure on the server to return other data.

HEAD demo:
We'll use the very same URL that we used for our OPTIONS demo. First, let's return our method to GET. Run the request, and see that you get a response body with the list of available bookings. Take a look at the headers that were returned with the response: Server, Connection, X-Powered-By, Content-Type, Content-Length, Etag, Date, and Via. Now let's change our verb to HEAD and re-run the request. We won't get anything in the body of the response, but we will get those same eight headers.
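
Here's the same check as a minimal Node.js sketch (again assuming Node 18+); the response body will be empty, but the headers come through:

// Send a HEAD request and print only the response headers
fetch('https://restful-booker.herokuapp.com/booking', { method: 'HEAD' })
    .then(response => {
        for (const [name, value] of response.headers) {
            console.log(name + ': ' + value)
        }
    })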

Why would you use the HEAD method?
This method would be a great way to check the headers of a GET response without having to actually return data. Headers are important because they often help to enforce security rules. If you know what headers your API should be returning, you can run this request on all of your endpoints to make sure that the right headers are being used.

CONNECT:
This method establishes a tunnel to the server that is identified by a URL. It's often used for proxy connections.

CONNECT demo:
For this demo, you'll need to have curl installed. You can check whether curl is installed on your machine by typing curl --version in your command line window.  If you get a version back, you have curl installed.

To try out CONNECT, type this command into your command line window: curl -v -X CONNECT http://kristinjackvony.com. Take a look at the response you get; about nine lines from the bottom, you'll see the message "301 Moved Permanently". This is because I recently changed this domain name to point to my Thinking Tester webpage instead of my personal webpage. I didn't do that because of this tutorial, but it wound up being useful!

Why would you use CONNECT?
You'd use CONNECT any time you want to see exactly what happens when you try connecting to an HTTP resource. This could be helpful with security testing, and any time you are using a proxy.

TRACE:
This method asks the server to echo back the request it received, so you can see exactly what arrived at the other end and whether anything along the way changed it.

TRACE demo:
We'll use curl again to try out TRACE. Type this command in the command window: curl -v -X TRACE http://isithalloween.com. You'll get back some response headers, plus the source code for the page.

Why would you use TRACE?
This would be good for security testing. Because you get the source code for the page in the response, you can inspect it to see if there are any cookies or authentication headers that a malicious user could exploit.

I hope you've gotten some good testing ideas from these rarely used HTTP methods! In my research I found all kinds of other methods that appear to no longer be in use, such as COPY, LINK, UNLINK, LOCK, and UNLOCK. Have you ever used these, or other rare methods? Tell me about it in the comments!

Saturday, May 16, 2020

Seven Steps to Solve Any Coding Problem

I am not the world's greatest coder, although I am getting better every year.  One thing that I'm really improving on is my ability to solve coding problems.  I'm not talking about those coding challenges that you can get online or in a job interview; I'm talking about those real-world problems, like "How are we going to create an automated test for this?"  Here are the seven steps I use to solve any coding problem. 


Step One:  Remember what problem you are trying to solve

When you're trying to figure out how to do something, it can be easy to forget what your original intent was.  For example, let's say you are trying to access a specific element on a web page, and you're having a really tough time doing so; perhaps the element is in a popup that you can't reach, or it's blocked by something else.  It's easy to get so bogged down in trying to solve this problem that you lose sight of what your original intent was- to add a new user to the system.  When you remember this, you realize that you could actually add a new user to the system by calling the database directly, avoiding the whole issue that you were stuck on!

Step Two: Set Small Steps

I often have what I want to do in my code all figured out long before I know how I'm going to do it.  And I used to just write a whole bunch of code even when I wasn't sure it was all going to work correctly.  Then when I tried to run the code and it didn't work, I had written so much code that I didn't know whether I had one problem or many.  This is why I now set small steps when I code.  For example, when I was trying to write the email test that I mentioned in last week's post, I first set myself the goal of just reaching the Gmail API.  I didn't care what kind of token I used, or what information I got back; I just wanted a response.  Once I had solved that, then I worked on trying to get the specific response that I wanted.  This strategy also keeps me from getting frustrated or overwhelmed.

Step Three: Change One Thing at a Time

This step is similar to Step Two, but it's good for those times when your code isn't working.  It's very tempting to thrash around and try a number of different solutions, sometimes all at once, but that's not very helpful.  Even if you get your code to work by this method, you won't know which change it was that caused the code to work, therefore you don't know which changes were superfluous.  It's much better to make one small change, see if it works, remove that change and try a different change, and so on.  Not only will you solve your problem faster this way, but you'll be learning as you go, and what you learn will be very valuable for the next time you have a problem.

Step Four: Save All Your Work

I learned this one the hard way when I was first writing UI automation.  I had absolutely no idea what I was doing, and sometimes I'd try something that didn't completely work and then delete it and try something else.  Then I'd realize that I needed some of the lines of code from the first thing I tried, but I had deleted them, so I had to start from scratch to find them again.  Now when I'm solving a new coding challenge I create a document that I call my scratch pad, and when I remove anything from my code I copy it and paste it in the scratch pad, just in case I'll need it again.  This has helped me solve challenges much more efficiently.

Step Five: See What Others Have Done

People who are good at solving coding problems are usually also masters of Google Fu: the art of knowing the right Google search to use to get them the answers they need.  When I first started writing test automation, I was not very good at Google Fu, because I often wasn't sure of what to call the thing I wanted to do.  As I've grown in experience, I've become better at knowing the terminology of whatever language I'm using, so if I've forgotten something like whether I should be using a static method I can structure my search so I can quickly find the right answer.  The answers you find on the Internet are not always the right ones, and sometimes they aren't even good ones, but they often provide clues that can help you solve your problem.

Step Six: Level Up Your Skills

As I mentioned in this post, I've been taking a really great Node.js course over the last three months.  I'm not even halfway done with it yet, and I've already learned so much about Node that I didn't understand before.  Now that I understand more, writing code in Node.js is so much easier.  Rather than just copying and pasting examples from someone on Stack Overflow, I can make good decisions about how to set things up, and when I understand what's going on, I can write code so much faster.  Take some time to really learn a coding language; it's an investment that will be worth it!

Step Seven: Ask For Help

If you've finished all your other steps and still haven't solved your problem, it's time to ask for help.  This should definitely not be Step One in your process.  Running for help every time something gets hard will not make you a better coder.  Imagine for a while that there's no one who can help you, and see how far you can get on your own.  See what kind of lessons you can learn from the process.  Then if you do need to ask for help, you'll be able to accurately describe the problem in such a way that your helper will probably be able to give you some answers very quickly.  You'll save them time, which they will appreciate.

Coding is not magic: while there are all sorts of complex and weird things out there in the world of software, an answer exists for every question.  By using these seven steps, you'll take some of the mystery out of coding and become a better thinker in the process!

Saturday, May 9, 2020

Testing Email Without Tears

Several years ago, when I was first learning test automation, I needed to create a test for my company's email service.  I had configured the service to deliver an email every day, and I wanted an automated test that would check my test Gmail account and determine if the email had been delivered.  At the time, the only automated testing I knew about was Selenium Webdriver with Java.  So I wrote an automated test that would open a browser, navigate to the Gmail client, log in, and search the page for the email.

This test didn't work out very well.  First of all, there could be a delay of up to ten minutes before the email was delivered, so it wound up being a long-running test.  Secondly, any time Google made changes to the email page, I had to update my element locators.  And finally, I didn't have a good way to identify the email, so sometimes the test would mistake yesterday's email for today's and pass when it should have failed.

So when I recently found myself with the need to test an email delivery again, I knew there had to be a better way!  This time I created an automated test using the Gmail API, and I'll share here how I did it.


The first step is obviously to obtain a Gmail account to test with.  You will not want this to be your personal Gmail account!  I already had a test account that is shared with a number of other testers at my company.

The trickiest part of using the Gmail API is coming up with an access token to use for the API requests.  Using this post by Martin Fowler, this blog post, this Quickstart documentation from Google, and some trial and error, I was able to obtain a refresh token that could be used to request the access token.  The Gmail API Quickstart application is easy to create, and can be done in a number of different languages, such as .NET, Java, NodeJS, Python, and Ruby.  You just choose which language you want to use and follow the simple steps.

Once the Quickstart application has been created, you run it.  When the application runs, it will prompt you to authenticate your Gmail account and give permission for the Gmail API to access the account.  After this is completed, you'll have a token.json file that contains a refresh token and a credentials.json file that contains a client id, a client secret, and a redirect URI.

I ran the Quickstart application in .NET, but I didn't actually want my test to be in .NET.  I wanted to write my test in Powershell.  For those unfamiliar with Powershell, it's a Windows command line language that offers more advanced commands than the traditional command line.  I took the refresh token, client id, client secret, and redirect URI from the Quickstart application files and created this request body:

$RefreshTokenParams = @{
client_id=$clientId;
client_secret=$secret;
refresh_token=$refreshToken;
grant_type='refresh_token';
}

Then I used this request to create a refreshed token:

$RefreshedToken = Invoke-WebRequest -Uri "https://accounts.google.com/o/oauth2/token" `
-Method POST -Body $RefreshTokenParams | ConvertFrom-Json

The refreshed token contained the access token I needed, so I grabbed it like this:

$AccessToken = $RefreshedToken.access_token

Now I had the token I needed to make requests from the Gmail API. Note that the refresh token I got from the Gmail Quickstart application won't last forever; in the event that it gets revoked at some point in the future, I can simply run the Quickstart application again and I'll have a new token to use in my script.

Next, I added a command in my script to send an email. I can do this with a simple POST request using my team's email function; how you create an email for testing will of course vary.

Then I created the request to the Gmail API:

$header = @{
    Authorization = "Bearer $AccessToken"
}

$emailList = Invoke-RestMethod `
-Uri 'https://www.googleapis.com/gmail/v1/users/<emailaddresshere>/messages' `
-Method 'GET' -Header $header

The <emailaddresshere> was of course replaced by my test email address.

This request got me a list of the twenty-five most recent emails to my test account. I grabbed just the first ten of them, then I looped through those ten to find the email that matched the one I sent.

You may be wondering at this point how I was able to tell my latest email apart from all the other emails. I did this by creating a random GUID and including that GUID at the very beginning of the email message. The Gmail client saves the first several characters of an email message as a "snippet", and as I looped through the ten emails I saved, I looked for the GUID in each snippet. When I found a match, I was able to programmatically examine that email to see if it had the attachment I was expecting.

Of course, emails are not delivered instantaneously, even when we're checking the API rather than logging into the client on the browser. So I built in some waits and retries to make sure that my test didn't fail simply because the email hadn't been delivered yet. So far, waiting thirty seconds has been enough to ensure that the email has been delivered, meaning my test takes well under a minute; much faster than that UI test I created years ago!

The moral of this story is not just that testing email is easier and more reliable with an API test than a UI test; it's also that APIs are great to test all kinds of things! The next time you find yourself needing to access a third-party application for an automated test, see if that app has an API. Your test will be less flaky, so you won't have to waste lots of time rerunning and debugging it!

Saturday, May 2, 2020

Six Testing Personas to Avoid

If you are working for a company that makes software for end users, you have probably heard of user personas.  A user persona is a representation of one segment of your application's end users.  For example, if you worked for a company that made a website for home improvement supplies, one of your user personas might be New Homeowner Nick, who has just purchased his first home and might not have much experience fixing small things in his house.  Another persona might be Do-It-Yourself Dora, who has lots of experience fixing everything in her home herself.

It occurred to me recently that there are also testing personas.  But unlike our user personas, these personas are ones we want to avoid!  Read on to see if one of these personas applies to you.


1. Test Script Ted
Ted loves running manual test scripts and checking them off when they're completed.  It gives him a feeling of satisfaction to see tests pass.  He doesn't particularly care if he doesn't understand how his application works; he's just satisfied to do what he's told.  But because he doesn't understand how the application works, he sometimes misses important bugs.  If he sees something strange, but it's not addressed in the test plan, he just lets it slide.  His job is to test, not figure things out!

2. Automation Annie
Annie considers herself an automation engineer.  She considers manual testing a colossal waste of her time.  She'd rather get into the hard stuff: creating and maintaining automated tests!  When a new feature is created, she doesn't bother to do any exploratory testing; she'll just start coding and she figures her great automation will uncover any issues.

What Ted and Annie have in common:
Ted and Annie are making the same mistake for different reasons; they are not taking the time to really learn how their application works.  They're both missing bugs because of a lack of understanding; Ted doesn't understand the code that makes the features work, and Annie doesn't understand the use cases of the application.

How not to be Ted or Annie:
To be a thorough tester, it's important to take the time to understand how your features work.  Try them out manually; explore their limits.  Look in the code to see if there are other ways you might test them.  Ask questions when you see things that don't make sense.

3. Process Patty
Patty is passionate about quality.  She likes things to work correctly.  But she likes having processes and standards even more!  She's got test plans and matrices she's expecting her team to follow to the letter.  Regression testing must be completed before any exploratory testing is done, and there are hundreds of regression tests to be run.  The trouble is, with releases happening every two weeks there's no time to do any exploratory testing.  There's no time to stop and think about new ways to test the product, or what might be missing.  The team needs to get all those regression tests completed!

4. Rabbit Hole Ray
Ray is passionate about quality too; he doesn't want any bug to go unnoticed.  So when he sees something strange in the application when it runs on IE10, he's determined to find out what's wrong!  He will take days to investigate, looking at logs and trying different configuration scenarios to reproduce it. He doesn't want to be bothered with the standard regression tests that he's leaving undone as the feature is being released. And he doesn't care that only 1% of their customers are using IE10.  He's going to solve the mystery!

What Patty and Ray have in common:
Patty and Ray are both wasting time.  They are focused on something other than the primary objective: releasing good software on time with a minimum of defects.  Patty is so caught up in the process that she doesn't see the importance of exploratory testing, which could find new bugs.  And Ray is so obsessed with that elusive bug he's exploring that he's ignoring important testing that would impact many more users.

How not to be Patty or Ray:
When testing a new feature or regression testing existing ones, it's important to think about which tests will have the biggest impact and plan your testing accordingly.  Be careful not to get too caught up in processes, and if that elusive bug you're searching for won't be that impactful to end users, let it go.

5. Job Security Jim:
Jim's been working at his current position for years.  He knows the application like the back of his hand.  He's the go-to guy for all those questions about how the most ancient features behave.  He knows there's no way the company will let him go; he knows too much!  So he doesn't feel like there's any reason to learn new skills.  What he knows has served him just fine so far.  Who needs to waste time after work learning the latest programming language or the newest testing tool?

6. Conference Connie:
Connie is so excited about tech!  She loves to hear about the latest testing techniques and the latest development trends.  She signs up for webinars, goes to conferences, reads blog posts, and takes courses online.  She knows a little about just about everything!  But she's never actually implemented any of the new things she learns.  She's so busy going to conferences and webinars that she barely has time to do her regular testing tasks.  And besides, trying things out is a lot of work.  It's easier to just see how other people have done it.

What Jim and Connie have in common:
Jim and Connie seem like total opposites at first: Jim doesn't want to learn anything new, and Connie wants to learn everything new.  But they actually have the same problem: they are not growing as testers.  Jim is content to do everything he's already learned, and doesn't see any reason to learn anything more.  But he could be in for a shock one day if his company decides to rewrite the software and he suddenly needs a new skill.  And Connie has lots of great ideas, but great ideas don't mean anything unless you actually try them out.  Her company isn't benefiting from her knowledge because she's not putting it to use.

How not to be Jim or Connie:
It's important to keep your testing skills fresh by learning new languages, tools, and techniques.  You don't have to learn everything under the sun; just pick the things that you think would be most beneficial to your current company, learn them, and then try to implement them in one or two areas.  Your teammates will be thankful for the new solutions you introduce, and you'll be developing marketable skills for your next position.

Be a great tester, not a persona!
We all become some of these personas now and then.  But if we can be aware of them, we can catch ourselves if we start to slip into Automation Annie or Rabbit Hole Ray, or any of the others.  Great testers learn their application better than anyone else, they make good choices about what to test and when, and they keep their skills updated so their testing keeps getting better.

Saturday, April 25, 2020

Book Review: Continuous Testing for DevOps Professionals

For this month's book review, I read Continuous Testing for DevOps Professionals: A Practical Guide from Industry Experts, by various authors and edited by Eran Kinsbruner.  The book is divided into four sections: Fundamentals of Continuous Testing, Continuous Testing for Web Apps, Continuous Testing for Mobile Apps, and The Future of Continuous Testing.


The Fundamentals of Continuous Testing section was my favorite, because it focused the most on developing a good Continuous Testing strategy and the elements required.  In Continuous Testing for Web Apps, strategies for testing Responsive Web Applications (RWAs) and Progressive Web Applications (PWAs) were discussed, along with cross-browser testing strategies.  In Continuous Testing for Mobile Apps, chapters included strategies for testing React Native apps and chatbots, as well as tips for using tools like Appium, Espresso, and XCUITest.  Finally, The Future of Continuous Testing took a look at the uses of AI for continuous testing, as well as strategies for testing IoT-enabled devices and Over-the-Top devices. 


Since this book obviously covered a lot of ground, I'll focus on my favorite section, Fundamentals of Continuous Testing.  Contributor Yoram Mizrachi says there are three types of automated testing failures: test code issues; test lab problems, such as an unstable test environment; and execution problems, such as not enough platforms available to run the tests.  There has been much written about solving test code issues, but not enough about solving environment and execution problems, so I was happy to see the suggestions in this book.  To solve environment problems, Brad Johnson suggests using container tools such as Docker and Kubernetes to spin up environments for testing.  Because these environments are temporary, they can be completely controlled in terms of data and application state, so there's less chance of test failures due to environment problems.  And Genady Rashkovan offers a solution for execution problems through setting up an automatic detection system for system failures.  After gathering initial data, this detection system can be programmed to predict when failures are about to happen, and execute an automatic reboot or spin up a new VM to mitigate a failure before it happens.

I also found Tzvika Shahaf's chapter on using smart reporting very insightful.  He notes that test data reporting is often siloed: reports on UI tests use a different format from the reports on API tests, which are in turn different from the reports on performance tests, and so on.  This makes it very difficult for managers to get a sense of the health of the application.  Shahaf recommends creating a unified report for all tests using this process: tag events so they can be easily identified, normalize the test data so it can be used by a single report, correlate events so similar tests are grouped together, and finally display the events with relevant artifacts.  He advises reducing the noise of defects by determining what the most common causes are for test failures and removing the failures that are false negatives.  For example, a test failure that was caused by the test environment going down does not actually indicate that something has gone wrong with the software, so a test report designed to show whether new code is working correctly doesn't need to display those failures.  

I recommend Continuous Testing for DevOps Professionals for anyone who is working on creating a continuous testing system for their application.  There are suggestions for test automation strategies, solving common mobile automation problems, testing connected devices, creating reliable test data, and much more.  My one complaint about the book was that the Kindle version was formatted poorly: the chapter divisions were unclear, there were often footnotes in the middle of the page, and diagrams were broken into pieces over two or more pages.  For that reason, you may want to purchase a paper copy of the book.  But in spite of these problems, I found the book to be very valuable.  

Saturday, April 18, 2020

Debugging for Testers

Wikipedia defines debugging as "the process of finding and resolving defects or problems within a computer program that prevent correct operation of computer software or a system".  Often we think of debugging as something that only developers need to do, but this isn't the case.  Here are two reasons why: first, investigating the cause of a bug when we find it can help the developer fix it faster; second, since we write automation code ourselves, and since we want to write code that is of high quality just as developers do, we ought to know how to debug our code.




Let's take a look at three different strategies we can employ when debugging code.

Console output:
Code that is executing in a browser or on a device generally outputs some information to the console.  You can easily see this by opening up Developer Tools in Chrome or the Web Console in Firefox.  When something goes wrong in your application, you can look for error messages in the console.  Helpful error messages like "The file 'address.js' was not found" can tell you exactly what's going wrong.  

Often an error in an application will produce a stack trace.  A stack trace is simply a series of error statements that go in order from the most recent file that was called all the way back to the first file that was called.  Here's a very simple example: let's say that you have a Node application that displays cat photos.  Your main app.js page calls a function called getCats which will load the user's cat photos.  But something goes wrong with getCats, and the application crashes.  Your stack trace might look something like this:

        Error: cannot find photos
        at getCats.js 10:57
        at app.js 15:16
        at internal/main/run_main_module.js:17:47


  • The first line of the stack trace is the error- the main cause of what went wrong.  
  • The next line shows the last thing that happened before the app crashed: the code was executing in getCats.js, and when it got to line 10, column 57, it couldn't find the photos.  
  • The third line shows which file called getCats.js: it was app.js, and it called getCats at line 15, column 16.  
  • The final line shows what file was called to run app.js in the first place: an internal Node file that called app.js at line 17, column 47. 

Stack traces are often longer, harder to read, and more complicated than this example, but the more you practice looking at them, the better you will get at finding the most important information.

Logging:
Much of what you see in the console output can be called logging, but there are often specific log entries set up in an application's code that record everything that happens in the application.  I'm fortunate to work with great developers who are adept at creating clear log statements that make it easy to figure out what happened when things go wrong.

Log statements often come with different levels of importance, such as Error, Warning, Info, and Debug.  An application can sometimes be set to only log certain levels of statement.  For example, a Production version of an application might be set to only log Errors and Warnings.  When you're investigating a bug, it may be possible to increase the verbosity of the logs so you can see the Info and Debug statements as well.
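
Here's a minimal sketch of how leveled logging works, using a hypothetical logger written in Node.js:

// Each level has a rank; only messages at or below the current verbosity are shown
const LEVELS = { error: 0, warning: 1, info: 2, debug: 3 }
let verbosity = LEVELS.warning   // a typical Production setting

function log(level, message) {
    if (LEVELS[level] <= verbosity) {
        console.log('[' + level + '] ' + message)
    }
}

log('error', 'Could not load cat photos')   // shown
log('debug', 'Entering getCats')            // hidden until verbosity is raised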

You can also make your own log statements, simply by writing code that will output information to the console.  I do this when I'm checking to make sure that my automation code is working like I'm expecting it to.  For example, if I had a do-while statement like this:

let counter = 0
do {
     counter++
}
while (counter < 10)

I might add a logging statement that tells me the value of counter as my program progresses:

let counter = 0
do {
     console.log("The value of counter right now is: " + counter)
     counter++
}
while (counter < 10)

The great thing about creating your own log statements is that you can set them up in a way that makes the most sense to you.

Breakpoints:
A breakpoint is a place that you set in the code that will cause the program to pause.  Software often executes very quickly and it can be hard to figure out what's happening as you're flying through the lines of code.  When you set a breakpoint, you can take a look at exactly what all your variable values are at that point in the program.  You can also step through the code slowly to see what happens at each line.

Debuggers are generally available in any language you can write code in.
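
For example, Node.js has a built-in debugger.  Here's a minimal sketch using the counter loop from the logging section; the debugger statement sets the breakpoint:

// app.js
let counter = 0
do {
    debugger   // execution pauses here when run under the inspector
    counter++
}
while (counter < 10)
console.log('Final value of counter: ' + counter)

Run the file with node inspect app.js, type c to continue to the breakpoint, and the program will pause at the debugger statement.  From there you can step through the loop one line at a time with n, or type repl to examine the current value of counter.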


I hope this post helps you get started with both debugging your code, and investigating someone else's bugs!


Saturday, April 11, 2020

The Joy of JWTs

Have you ever used a JWT before?  If you have tested anything with authentication or authorization, chances are that you have!  The term JWT is pronounced "jot" and it stands for JSON Web Token.  JWTs are an open standard, and their purpose is to provide a method for an application to determine whether a user has the credentials necessary to request an asset.  Why are JWTs so great?  Because they allow an application to check for authorization without passing in a username and password or a cookie.  Requests of all kinds can be intercepted, but a JWT contains non-sensitive data and is digitally signed, so intercepting one doesn't reveal much useful information, and any tampering with it can be detected.  (For more information about the difference between tokens and cookies, see this post.)  Let's learn about how JWTs are made!



A JWT has three parts, which are made up of a series of letters and numbers and are separated by periods.  One of the best ways to learn about JWTs is to practice using the official JWT Debugger, so go to jwt.io and scroll down until you see the Debugger section.

Part One: Header
The header lists the algorithm that is used for encrypting the JWT, and also lists the token type (which is JWT, of course):
{
  "alg": "HS256",
  "typ": "JWT"
}

Part Two: Payload
The payload lists the claims that the user has.  There are three types of claims:
Registered claims: These are standard claims that are predefined by the JWT code, and they include:
     iss (issuer)- who is issuing the claim
     iat (issued at)- what time, in Epoch time, the claim was issued
     exp (expiration time)- what time, in Epoch time, the claim will expire
     aud (audience)- the recipient of the token
     sub (subject)- whom or what the token is about, such as a user id
Public claims: These are other frequently-used claims, and they are added to the JWT registry.  Some examples are name, email, and timezone.
Private claims: These are claims that are defined by the creators of an application, and they are specific to that company.  For example, a company might assign a specific userId to each of their users, and that could be included as a claim.

Here's an example used in the jwt.io Debugger:
{
  "sub": "1234567890",
  "name": "John Doe",
  "iat": 1516239022
}

Here the subject of the token is 1234567890 (most likely a user id), the name of the user is John Doe, and the token was issued at 1516239022 Epoch time.  Wondering what that time means?  You can use this Epoch time converter to find out!

Part Three: Signature
The signature takes the first two sections, encodes them in Base64, and joins them with a period.  Then it signs that string with the HMAC SHA256 algorithm, using a secret key, which is a long string of letters and numbers.  See my post from last week to understand more about encoding and encryption.

Putting It All Together
The JWT is composed of the encoded Header, then a period, the encoded Payload, then another period, and finally the signature.  The JWT Debugger helpfully color-codes these three sections so you can distinguish them.

If you use JWTs regularly in the software you test, try taking one and putting it in the JWT Debugger.  The decoded payload will give you insight into how your application works.

If you don't have a JWT to decode, try making your own!  You can paste values like this into the Payload section of the Debugger and see how the encrypted JWT changes:
{
     "sub": "userData",
     "userName": "kjackvony",
     "iss": 1516239022,
     "exp": 1586606340
}

When you decode a real JWT, the signature can't be decoded.  That's because the secret used to create it is a secret!  But because the first and second parts of the JWT are encoded rather than encrypted, they can be decoded.
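
If you'd like to decode a JWT programmatically rather than pasting it into the Debugger, here's a minimal Node.js sketch (assuming Node 15.7+ for the base64url encoding), using the sample token from the jwt.io Debugger:

// Decode (not verify!) the header and payload of a JWT
const token = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c'
const [header, payload] = token
    .split('.')
    .slice(0, 2)
    .map(section => JSON.parse(Buffer.from(section, 'base64url').toString('utf8')))
console.log(header)    // { alg: 'HS256', typ: 'JWT' }
console.log(payload)   // { sub: '1234567890', name: 'John Doe', iat: 1516239022 }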

Using JWTs
How JWTs are used will vary, but a common usage is to pass them with an API request as a Bearer token.  In Postman, you'd set the request's Authorization type to Bearer Token and paste in the JWT.



Testing JWTs
Now that you know all about JWTs, how can you test them?

  • Try whatever request you are making without a JWT, to validate that data is not returned.  
  • Change or remove one letter in the JWT and make sure that data is not returned when the JWT is used in a request.
  • Decode a valid JWT in the Debugger, change it to have different values, and then see if the JWT will work in your request.  
  • Use a JWT without a valid signature and make sure that you don't get data in the response.  
  • Make note of when the JWT expires, and try a request after it expires to make sure that you don't get data back.  
  • Create a JWT that has an issue time somewhere in the future and make sure that you don't get data back when you use it in your request.
  • Decode a JWT and make sure that there is no sensitive information, such as a bank account number, in the Payload.  

Have fun, and happy testing!

Saturday, April 4, 2020

New Course! Postman Essential Training

BIG NEWS!  My LinkedIn Learning course on Postman is now live!  This course is an introduction to creating API requests and assertions with Postman.  You'll learn how to create a test collection, run it from the command line, and set it to run as an automated job in Jenkins. 

You can access the course here:  https://www.linkedin.com/learning/postman-essential-training


Encryption and Encoding

We've all encountered mysterious hashed passwords and encrypted texts.  We've heard arcane terms like "salted" and "SHA256" and wondered what they meant.  This week I decided it was finally time for me to learn about encryption!

The first distinction we need to learn is the difference between encryption and encoding.  Encoding simply means transforming data into a form that's easier to transfer.  URL encoding is a simple type of encoding.  Here's an example: the Coderbyte website has a challenge called "Binary Reversal".  The URL for the page is https://coderbyte.com/information/Binary%20Reversal; the space between "Binary" and "Reversal" is replaced with "%20".  There are other symbols, such as !, that are replaced in URL encoding as well.  If you'd like to learn more about URL encoding, you can play around with an encoding/decoding tool such as this one.

Another common type of encoding is Base64 encoding.  Base64 encoding is often used to send data; the encoding keeps the bytes from getting corrupted.  This type of encoding is also used in Basic authentication.  You may have seen a username and password encoded in this way when you've logged into a website.  It's important to know that Basic authentication is not secure!  Let's say a malicious actor has intercepted my login with Basic auth, and they've grabbed the authentication string: a2phY2t2b255OnBhc3N3b3JkMTIz.  That looks pretty secure, right?  Nope!  All the hacker needs to do is go to a site like this and decode my username and password.  Try it for yourself!
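
You don't even need a website to do the decoding; here's a minimal Node.js sketch:

// Decoding a Basic auth string takes one line- no secret required!
const encoded = 'a2phY2t2b255OnBhc3N3b3JkMTIz'
console.log(Buffer.from(encoded, 'base64').toString('utf8'))   // kjackvony:password123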


Now that we know the difference between encoding and encryption, and we know that encoding is not secure, let's learn about encryption.  Encryption transforms data in order to keep it secret.  

A common method of password protection is hashing, a one-way mathematical transformation that is practically impossible to reverse.  This seems puzzling- if a hashed string can't be reversed, how will an application ever know that a user's password is correct?  What happens is that the hashed password is saved in the application's authentication database.  When a user logs in, their submitted password is hashed with the same algorithm that was used to store the password.  If the hashed values match, then the password is correct.

What if two users have the same password?  If a user somehow was able to access the authentication database to view the hashed passwords and they saw that another user had the same hashed password as they did, that user would now know someone else's password.  We solve this problem through salting.  A salt is a short string that is added to the end of a user's password before it is hashed.  Each password has a different salt added to it, and that salt is saved in the database along with the hashed password.  This way if a hacker gets the list of stored passwords, they won't be able to find any two that are the same.  

A common hashing algorithm is SHA256.  SHA stands for "Secure Hash Algorithm".  The 256 refers to the number of bits in the hash that the algorithm produces.  
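
Here's a minimal Node.js sketch of salting and hashing with SHA256 (for illustration only; production systems typically use purpose-built password-hashing algorithms such as bcrypt):

const crypto = require('crypto')

// Each user gets a unique random salt, stored alongside the hash
const salt = crypto.randomBytes(16).toString('hex')
const storedHash = crypto.createHash('sha256').update('password123' + salt).digest('hex')

// At login time, hash the submitted password with the stored salt and compare;
// the plain-text password is never stored anywhere
const attempt = crypto.createHash('sha256').update('password123' + salt).digest('hex')
console.log(attempt === storedHash)   // true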

There are other types of encryption that can be decrypted.  Two examples are AES encryption and RSA encryption.  AES stands for Advanced Encryption Standard.  This type of encryption is called symmetric key encryption: the data is encrypted with a key, and the receiver of the data needs to have the same key to decrypt it.  AES encryption is commonly used to transfer data over a VPN.  

RSA stands for Rivest-Shamir-Adleman, the three inventors of this encryption method.  RSA uses asymmetric encryption, also called public key encryption, where there is a public key to encrypt the data and a private key to decrypt it.  This can work in a couple of ways: if the sender of the message knows the receiver's public key, they can encrypt the message and send it; then the receiver decrypts the message with their private key.  Or the sender of the message can sign the message with their private key, and then the receiver of the message can verify it with the sender's public key.  In the second example, the private key is used to show that the message is authentic.  How does the receiver know that the message is authentic if they don't know what the private key is?  They know because if the message is tampered with, the signature will no longer verify, showing that it has been manipulated.  A very common use of RSA encryption is TLS, which is what is used to send data to and from websites.  I wrote about TLS in this post if you'd like to learn more.  
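
Here's a minimal Node.js sketch of that second scenario- signing a message with a private key and verifying it with the public key- using the built-in crypto module:

const crypto = require('crypto')

// The sender signs the message with their private key...
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 })
const message = Buffer.from('This message is authentic')
const signature = crypto.sign('sha256', message, privateKey)

// ...and the receiver verifies it with the sender's public key
console.log(crypto.verify('sha256', message, publicKey, signature))                    // true
console.log(crypto.verify('sha256', Buffer.from('Tampered!'), publicKey, signature))   // false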

Encryption involves very complicated mathematical algorithms.  Fortunately, we don't have to learn them to understand how encryption works!  In next week's post, I'll talk about how encoding and encryption are used in JWTs.  


Saturday, March 28, 2020

Book Review: Enterprise Continuous Testing

As I've mentioned in previous posts, this year I'm reading one testing-related book a month and reviewing it in my blog.  This month I read Enterprise Continuous Testing, by Wolfgang Platz with Cynthia Dunlop.

This book aims to solve the problems often found in continuous testing.  Software continuous testing is defined by the author as "the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release as rapidly as possible".  Platz writes that there are two main problems that companies encounter when they try to implement continuous testing:

1. The speed problem
  • Testing is a bottleneck because most of it is still done manually
  • Automated tests are redundant and don't provide value
  • Automated tests are flaky and require significant maintenance

2. The business problem
  • The business hasn't performed a risk analysis on their software
  • The business can't distinguish between a test failure that is due to a trivial issue and a failure that reveals a critical issue

I have often encountered the first set of problems, but I never really thought about the second set.  While I have knowledge of the applications I test and I know which failures indicate serious problems, it never occurred to me that it would be a good idea to make sure that product managers and other stakeholders can look at our automated tests and be able to tell whether our software is ready to be released.


Fortunately, Platz suggests a four-step solution to help ensure that the right things are tested, and that those tests are stable and provide value to the business.

Step One: Use risk prioritization

Risk prioritization involves calculating the risk of each business requirement of the software.  First, the software team, including the product managers, should make a list of each component of their software.  Then, they should rank the components twice: first by how frequently the component is used, and second by how bad the damage would be if the component didn't work.  The two rankings should be multiplied together to determine the risk prioritization.  The higher the number is, the higher the risk; higher risk items should be automated first, and those tests should have priority.

An example of a lower-risk component in an e-commerce platform might be the product rating system: not all of the customers who use the online store will rate the products, and if the rating system is broken, it won't keep customers from purchasing what's in their cart.  But a higher-risk component would be the ability to pay for items with a credit card: most customers pay by credit card, and if customers can't purchase their items, they'll be frustrated and the store will lose revenue.
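
Here's a minimal sketch of that calculation in Node.js, with hypothetical rankings on a one-to-five scale:

// Multiply frequency of use by severity of failure to get a risk score
const components = [
    { name: 'credit card payment', frequency: 5, damage: 5 },
    { name: 'product ratings', frequency: 2, damage: 2 }
]
components
    .map(c => ({ ...c, risk: c.frequency * c.damage }))
    .sort((a, b) => b.risk - a.risk)
    .forEach(c => console.log(c.name + ': ' + c.risk))
// credit card payment: 25  <- automate this first
// product ratings: 4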

Step Two: Design tests for efficient test coverage

Once you've determined which components should be tested with automation, it's time to figure out the most efficient way to test those components.  You'll want to use the fewest tests possible to ensure good risk coverage.  This is because the fewer tests you have, the faster your team will get feedback on the quality of a new build.  It's also important to make sure that each test makes it very clear why it failed when it fails.  For example, if you have a test that checks that a password has been updated, and also checks that the user can log in, when the test fails you won't know immediately whether it has failed on the password reset or on the login.  It would be better to have two separate tests in this case.

Platz advocates the use of equivalence classes: this is a term that refers to a range of inputs that will produce the same result in the application.  He uses the example of a car insurance application: if an insurance company won't give a quote to a driver who is under eighteen, it's not necessary to write a test with a driver who is sixteen and a driver who is seventeen, because both tests will test the same code path.
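
Here's a quick sketch of that idea, using a hypothetical quote-eligibility function:

// Every age under eighteen follows the same code path, so one test from that
// equivalence class (plus the boundaries) is enough
function canGetQuote(age) {
    return age >= 18
}
console.log(canGetQuote(16))   // false- same equivalence class as seventeen
console.log(canGetQuote(17))   // false- the top of the ineligible class
console.log(canGetQuote(18))   // true- the bottom of the eligible class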

Step Three: Create automated tests that provide fast feedback

Platz believes that the best type of automated tests are API tests, for two reasons: one, while unit tests are very important, developers often neglect to update them as a feature changes, and two, UI tests are slow and flaky.  API tests are more likely to be kept current because they are usually written by the software testers, and they are fast and reliable.  I definitely agree with this assessment!

The author advises that UI tests should be used only in cases where you want to check the presence of or location of elements on a webpage, or when you want to check functionality that will vary by browser or device.

Step Four: Make sure that your tests are robust

This step involves making sure that your tests won't be flaky due to changing test data or unreliable environments.  Platz suggests that synthetic test data is best for most automated tests, because you have control over the creation of the data.  In the few cases where it's not possible to craft synthetic data that matches an important test scenario, masked production data can be used.

In situations where environments might be unreliable, such as a component that your team has no control over that is often unavailable, he suggests using service virtualization, where responses from the other environment are simulated.  This way you have more control over the stability of your tests.

Enterprise Continuous Testing is a short book, but it is packed with valuable information!  There are many features of the book that I didn't touch on here, such as metrics and calculations that can help your team determine the business value of your automation.  I highly recommend this book for anyone who wants to create an effective test automation strategy for their team.

Saturday, March 21, 2020

Adventures in Node: Arrow Functions

This year I've been feeling an urge to really learn a programming language.  There are lots of languages I know well enough to write automation code in- C#, Java, Javascript, and so on- but I decided I wanted to dive deep into one language and truly understand it.

I decided to go deep with Node.js.  Node is essentially Javascript with a server-side runtime environment.  It's possible to write complete applications in Node, because you can code both the front-end and the back-end of the application.  And I was fortunate enough to find this awesome course by Andrew Mead.  Andrew does a great job of making complicated concepts really simple, so as I am taking the course, I'm finding that things that used to confuse me about Node finally make sense!  And because I like sharing things I've learned, I'll be periodically sharing my new-found understanding in my blog posts.


I'll start with arrow functions.  Arrow functions have been around for a few years now, but I've always been confused by them, because they weren't around when I was first learning to write code.  You may have seen these functions, which use the symbol =>.  They seem so mysterious, but they are actually quite simple!  Arrow functions are simply a way to notate a function to save space and make code easier to read.  I'll walk you through an example.  We'll start with a simple traditional function:

const double = function(x) {
     return x + x
}

double is the name of the function.  When x is passed into the function, x + x is returned.  So if I called the double function with the number 3, I'd get 6 in response.

Now we're going to replace the function with an arrow:

const double = (x) =>  {
    return x + x
}

Note that the arrow comes after the (x), rather than before.  Even though the order is different, function(x) and (x) => mean the same thing.

Now we're going to replace the body of the function { return x + x } with something simpler:

const double = (x) => x + x

When arrow functions are used, it's assumed that what comes after the arrow is what will be returned.  So in this case, x + x means the same thing as { return x + x }.  This shorthand is only used when the body of the function is relatively simple.

See?  It's simple!  You can try running these three functions for yourself if you have node installed.  Simply create an app.js file with the first version of the function, and add a logging command:

console.log(double(3))

Run the file with node app.js, and the number 6 will be returned in the console.

Then replace version 1 of the function with version 2, run the file, and you should get a 6 again.  Finally, replace version 2 with version 3, and run the file; you should get a 6 once again.

It's even possible to nest arrow functions!  Here's an example:

const doublePlusTen = (x) => {
    const double = (x) => x + x
    return double(x) + 10
}

The const double = (x) => x + x is our original function.  It's nested inside a doublePlusTen function.  The doublePlusTen is using curly braces and a return command, because there's more than one line inside the function (including the double function).  If we were going to translate this nested function into plain English, it would look something like this:

"We have a function called doublePlusTen.  When we pass a number into that function, first we pass it into a nested function called double, which takes the number and doubles it.  Then we take the result of that function, add 10 to it, and return that number."  

You can try out this function by calling it with console.log(doublePlusTen(3)), and you should get 16 as the response.

Hopefully this information will help you understand what an arrow function is doing the next time you encounter it in code.  You may want to start including arrow functions in your own automation code as well.  Stay tuned in the coming weeks for more Adventures in Node posts!

Saturday, March 14, 2020

How I Would Have Tested the Iowa Caucus App

About six weeks ago, the Iowa Democratic Party held its caucus.  For those who don't live in the United States, this event is one of the first steps in the presidential primaries, which determine who will run in the next presidential election. 

In 2016, the Iowa Caucus used a mobile app created by a company called InterKnowlogy, in partnership with Microsoft, to allow each precinct to report its results, and the app worked successfully.  But this year the Iowa Democratic Party chose a different company to create a new app, which proved disastrous.  Incorrect tallies were reported, and precincts that tried to report by phone often couldn't get through or found that their calls were disconnected.

From reading this assessment, it appears that the biggest problem with the 2020 app was that the software company didn't have adequate time to create the application, and certainly didn't have enough time to test it.  But as a software tester, I found myself thinking about what I would have done if it had been my responsibility to test the app, assuming that there had been enough time for testing.  Below is what I came up with:


Step One: Consider the Use Case

The interesting thing about this application is that unlike an app like Twitter or Uber, the number of users is small and known in advance.  There are only about 1700 precincts in Iowa, including a few out-of-state precincts for Iowans who are in the military or working overseas.  So the app wouldn't need to handle tens of thousands of users.  

The users of the application will be the precinct leaders, who will own a wide variety of mobile phones, such as iPhone, Galaxy, or Motorola, and each of those devices could be on any of several carriers, such as AT&T, Verizon, or Sprint.  Mobile service might be spotty in some rural areas, and wifi might be unavailable in some locations as well.  So it will be important to test the app on a wide variety of operating systems and devices, with a variety of carriers and connection scenarios.  

Moreover, the precinct leaders will probably vary widely in their technical ability.  Some might be very comfortable with technology, while others might have never installed an app on their phone.  So it will be imperative to make sure that the app is on both the Apple App Store and Google Play, and that the installation is simple.

Some leaders may choose to call in their election results instead of entering them in the app.  So the application should make this easy, with something like a single button tap to dial the reporting line.  This will also be useful as a backup plan in case other parts of the app fail.

Finally, because this is an event of high political importance, security must be considered.  The app should use multi-factor authentication, and all transmissions should be secured over HTTPS with appropriate security headers.  

Step Two: Create an In-House Test Plan

Now that the users and the use case have been considered, it's time to create an in-house test plan.  Initial testing should begin at least six months before the actual event.  Here is the order that I would direct the testing:
  • Usability testing: the application should be extremely easy to install and use.
  • Functional testing: does the application actually do what it's supposed to do?  Testers should test both the happy path- where the user does exactly what is expected of them- and every possible sad path- where the user does something odd, like cancel the transaction or back out of the page.
  • Device and carrier testing: testers should test on a wide variety of devices, with a wide variety of carriers, and with a wide variety of connection scenarios, including a wifi connection dropping in the middle of a transmission.  Testers should also ensure that the application will work correctly overseas for the remote precincts.  They can do this by crowd-sourcing a test application that has the same setup as the real application.  
  • Load and performance testing: testers should make sure that the application can handle 2500 simultaneous requests, which is far more than the actual use case requires.  They should also make sure that page response times are fast enough that users won't be confused and think something is wrong with the application.  (A minimal sketch of this kind of concurrency check appears just after this list.)  
  • Security testing: testers should run through penetration tests of the application, ensuring that they can't bypass the login or hijack an http request.  
  • Backup phone system testing: testers should validate that 2500 simultaneous calls to the backup phone system can all connect.  Since there probably won't be 2500 phone lines available, testers should make sure that wait times are reasonable and that callers are told how many people are in the queue ahead of them.  
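
Here's a minimal sketch of the kind of concurrency check mentioned above, assuming Node 18+ (where fetch is built in); the URL and request count are placeholders, not the caucus app's real endpoint, and a real effort would use a dedicated load-testing tool:

// A hedged sketch, not real caucus-app tooling: fire N concurrent requests
// and report how many succeeded and the slowest successful response time.
const CONCURRENT_REQUESTS = 2500
const TARGET_URL = 'https://example.com/reportResults'  // hypothetical endpoint

async function timedRequest() {
    const start = Date.now()
    const response = await fetch(TARGET_URL)
    return { ok: response.ok, ms: Date.now() - start }
}

async function runLoadCheck() {
    const results = await Promise.allSettled(
        Array.from({ length: CONCURRENT_REQUESTS }, () => timedRequest())
    )
    const successes = results
        .filter((r) => r.status === 'fulfilled' && r.value.ok)
        .map((r) => r.value.ms)
    console.log(`${successes.length}/${CONCURRENT_REQUESTS} requests succeeded`)
    console.log(`slowest successful response: ${Math.max(...successes)}ms`)
}

runLoadCheck()

Even a rough check like this will surface gross concurrency failures long before caucus night.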

Step Three: External Security Audit

Because of the sensitive nature of the application, the app should be given to an external security testing firm at least four months before the event.  Any vulnerabilities found by the audit should be addressed and retested immediately.

Step Four: Submit to the Apple App Store and Google Play

As soon as the application passes the security audit, it should be submitted to the app stores for review.  Once the app is in app stores, precinct leaders should be given instructions for how to download the app, log in with a temporary password, and create a new password, which they should save for future use.  

Step Five: End User Testing

Two months before the caucus, precinct leaders will be asked to do a trial run of the application.  Instead of using actual candidates, the names will be temporarily replaced by something non-political, like pizza toppings.  The leaders will all report a fictitious tally for the pizza toppings using the app, and will then use the backup phone number to report the tally as well.  This test will accomplish the following:
  • it will teach the leaders how to use the app
  • it will validate that accurate counts are reported through the app
  • it will help surface any issues with specific devices, operating systems, or carriers
  • it will validate that the backup phone system works correctly
By two weeks before the caucus, any issues found in the first pizza test should have been fixed.  Then a final trial run (again with pizza toppings rather than candidates) will be conducted to find any last-minute issues.  The precinct leaders will be strongly encouraged to make no changes to their device or login information between this test and the actual caucus.

Monday Morning Quarterbacking

There's a term in the US called "Monday Morning Quarterbacking", where football fans discuss a game after it's over and say what they would have done differently if they had been the quarterback.  Of course, most people don't have the skill to be a professional quarterback, and they probably don't have access to all the information that the team had.  

I realize that what I'm doing is the software tester equivalent of Monday Morning Quarterbacking.  Still, it's an interesting thought exercise.  I had a lot of fun thinking about how I would test this application.  The next time you see a software failure, try this thought exercise for yourself- it will help you become a better tester!



Saturday, March 7, 2020

API Contract Testing Made Easy

As software becomes increasingly complex, more and more companies are turning to APIs as a way to organize and manage their applications' functionality.  Instead of one monolithic application where all changes are released at once, software can now be made up of multiple APIs that depend on each other but can be released separately at any time.  This makes it possible for one API to release new functionality that breaks a second API's functionality, because the second API was relying on the first and now something has changed.

The way to mitigate this risk is with API contract tests.  These can seem confusing: which API sets up the tests, and which API runs them?  Fortunately, after watching this presentation, I understand the concept a bit better.  In this post I'll walk through a very simple example to show how contract testing works.


Let's imagine that we have an online store that sells superballs.  The store sells superballs of different colors and sizes, and it uses three different APIs to accomplish its sales tasks:

Inventory API:  This API keeps track of the superball inventory, to make sure that orders can be fulfilled.  It has the following endpoints:
  • /checkInventory, which passes in a color and size and verifies that that ball is available
  • /remove, which passes in a color and size and removes that ball from the inventory
  • /add, which passes in a color and size and adds that ball to the inventory

Orders API:  This API is responsible for taking and processing orders from customers.  It has the following endpoints:
  • /addToCart, which puts a ball in the customer's shopping cart
  • /placeOrder, which completes the sale

Returns API:  This API is responsible for processing customer returns.  It has the following endpoint:
  • /processReturn, which confirms the customer's return and starts the refund process

Both the Orders API and the Returns API are dependent on the Inventory API in the following ways:
  • When the Orders API processes the /addToCart command, it calls the /checkInventory endpoint to verify that the type of ball that's been added to the cart is available
  • When the Orders API processes the /placeOrder command, it calls the /remove command to remove that ball from the inventory so it can't be ordered by someone else
  • When the Returns API runs the /processReturn command, it calls the /add command to return that ball to the inventory

In this example, the Inventory API is the producer, and the Orders API and Returns API are the consumers.  
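
To make those dependencies concrete, here's a hedged sketch of how the Orders API's /addToCart operation might call the Inventory API.  The URL and response shape are assumptions for illustration, and Node 18+ is assumed for the built-in fetch:

// Hypothetical sketch: the Orders API checking inventory before adding a ball to a cart.
async function addToCart(cart, ball) {
    const response = await fetch('http://inventory.example.com/checkInventory', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ color: ball.color, size: ball.size })
    })
    if (!response.ok) {
        throw new Error(`No ${ball.size} ${ball.color} balls are available`)
    }
    cart.push(ball)
    return cart
}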

It is the consumer's responsibility to provide the producer with some contract tests to run whenever the producer makes a code change to their API.  So in our example:

The team who works on the Orders API would provide contract tests like this to the team who works on the Inventory API:
  • /checkInventory, where the body contained { "color": "purple", "size": "small" }
  • /remove, where the body contained { "color": "red", "size": "large" }

The team who works on the Returns API would provide an example like this to the team who works on the Inventory API:
  • /add, where the body contained { "color": "yellow", "size": "small" }

Now the team that works on the Inventory API can take those examples and add them to their suite of tests.  
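
What might one of those contributed tests look like in practice?  Here's a minimal sketch using Node's built-in test runner (available in Node 18+); the base URL and expected status code are assumptions, since the contract above only specifies the request body:

// A hedged sketch of a consumer-contributed contract test, run by the Inventory API team.
const test = require('node:test')
const assert = require('node:assert')

test('Orders API contract: /checkInventory accepts color and size', async () => {
    const response = await fetch('http://localhost:3000/checkInventory', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ color: 'purple', size: 'small' })
    })
    // The contract only asserts that the producer still accepts this request shape.
    assert.strictEqual(response.status, 200)
})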

Let's imagine that the superball store has just had an update to their inventory.  The balls now come in two different bounce levels: medium and high.  So the Inventory API team needs to make some changes to their API to reflect this.  Now a ball can have three properties: color, size, and bounce.  

The Inventory API modifies their /checkInventory, /add, and /remove commands to accept the new bounce property.  But the developer accidentally makes "bounce" a required field for the /checkInventory endpoint.  

After the changes are made, the contract tests are run.  The /checkInventory test contributed by the Orders API fails with a 400 error, because there's no value for "bounce".  When the developer sees this, she finds her error and makes the bounce property optional.  Now the /checkInventory call will pass.  
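
We don't know how the Inventory API is built, but if it used an Express-style handler, the bug and its one-line fix might look something like this:

// Hypothetical Express-style /checkInventory handler, for illustration only.
const express = require('express')
const app = express()
app.use(express.json())

app.post('/checkInventory', (req, res) => {
    const { color, size, bounce } = req.body
    if (!color || !size) {
        return res.status(400).json({ error: 'color and size are required' })
    }
    // The bug: this line also rejected requests that had no "bounce" value,
    // which is exactly what the Orders API's contract test caught.
    // if (!bounce) return res.status(400).json({ error: 'bounce is required' })
    res.json({ available: true })  // placeholder response
})

app.listen(3000)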

Without these contract tests in place, the team working on the Inventory API might not have noticed that their change was going to break the Orders API.  If the change went to production, no customer would be able to add a ball to their cart!

I hope this simple example illustrates the importance of contract testing, and the responsibilities of each API team when setting up contracts.  I'd love to hear about how you are using contract testing in your own work!  You can add your experiences in the Comments section.
