
Saturday, January 26, 2019

Easy Free Automation Part II: Component Tests

Last week, I started an eight-part series demonstrating free and easy ways to write automation for each test type in the Automation Test Wheel.  This week, we're taking a look at component tests.

As with every term in software testing, component tests mean different things to different people.  I like to define a component test as a test for a service that an application is dependent on.  For example, an application might need to make calls to a database, so I would have a component test that made a simple call to that database and verified that it received data in response.  Another example of a component would be an API that the application doesn't own.  In this scenario, I would make a simple call to the API and verify that I got a 200-level response.



Coming up with a free and easy example of automated component tests was, unfortunately, not that easy!  But I have created a very simple Node.js application, which you can download from GitHub here.

In order to run this application and have my two tests pass, you'll need to have Node, npm, and MongoDB installed, and you'll need to create a very simple Mongo database with just one item in it.  I used most of the instructions in this really awesome tutorial by Chris Buecheler at CloseBrace.com to create this application.  You can use my application along with the instructions in Part 3 of the tutorial to make your Mongo database.  Or you can just clone my application and run it, with the understanding that one of the tests will fail. Or if this seems like too much work, you can just read on and look at my test code!

My extremely simple app is dependent on two things: my Mongo database, and an external API (the really great Restful-Booker API, which I'll be using in next week's blog).  For my component tests, I want to test those two dependencies: that I can make a request to the database and get a positive response, and that I can make a call to the Restful-Booker API and get a positive response. 

I am using Jest and Supertest for these tests.  I have limited experience with Jest, but from what I have seen so far, it is very easy to set up.  Supertest is a library that enhances JavaScript testing by making it easier to call APIs. 

I've put my tests in a file called index.test.js, and this is what it looks like:

const request = require('supertest');

describe('Database Connection Test', () => {
  it('Returns a 200 with a call to the DB', async () => {
    const res = await request('http://localhost:3000')
      .get('/userlist')
      .expect(200)
  });
});

describe('Restful-Booker Connection Test', () => {
  it('Returns a 201 with a health check', async () => {
    const res = await request('https://restful-booker.herokuapp.com')
      .get('/ping')
      .expect(201)
  });
});

The first line of my file imports Supertest, so I will be able to make HTTP requests.  Let's take a look at each part of the first test so we can see what it's doing:

describe('Database Connection Test', () => {
  it('Returns a 200 with a call to the DB', async () => {
    const res = await request('http://localhost:3000')
      .get('/userlist')
      .expect(200)
  });
});

The "describe" section comprises the entire test, and 'Database Connection Test' is the title of the test.  The "it" section is where the assertion is called, and 'Returns a 200 with a call to the DB' is the title of the assertion.  The .get('/userlist') line is where we make a GET request to 'http://localhost:3000/userlist', and .expect(200) is where we assert that we will get a 200 response.

If you'd like to try to run the tests on your own, assuming you have cloned the application, you can do so with these commands:

cd easyFreeComponentTests (this will move you to the directory where you cloned the application)
npm install (this will install all the components you need to run the application)
npm start (this will start the application)

Then go to a new command-line window, cd to where the application is located, and run:

npm test (this will run the tests)

That was a lot of information for just two component tests!  But remember it will be easier to get started when there is already a project to test.  How many tests you have in this area will depend on how many external systems your application is dependent on.  It may be a good idea to create a "health check" that will run your component tests whenever your code is deployed.  That way you will be alerted if there is any error when calling your external systems. 
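
A deploy-time health check like this can be sketched in just a few lines.  Here is a language-agnostic illustration in Python (the helper name and the stubbed checks are hypothetical; real checks would make HTTP calls to your database service and external APIs):

```python
# Sketch of a deploy-time health check runner.  Each check is a
# callable that returns an HTTP status code; anything outside the
# 200 range is collected so the team can be alerted.

def run_health_checks(checks):
    """Run each named check and return a dict describing any failures."""
    failures = {}
    for name, check in checks.items():
        try:
            status = check()
        except Exception as exc:
            failures[name] = "error: %s" % exc
            continue
        if not 200 <= status < 300:
            failures[name] = "unexpected status %s" % status
    return failures

# Stubbed checks for illustration; real ones would call the database
# and the Restful-Booker API over HTTP.
checks = {
    "database": lambda: 200,
    "restful-booker": lambda: 503,
}
print(run_health_checks(checks))  # {'restful-booker': 'unexpected status 503'}
```

Wiring a runner like this into your deploy pipeline means a failing dependency is reported the moment new code reaches an environment, rather than when a user finds it.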

Next week, we'll move on to my favorite test type: services tests!

Saturday, January 19, 2019

Easy Free Automation Part I: Unit Tests

This post is the beginning of an eight-part series on easy, free ways to automate each area of the Automation Test Wheel.  It's been my experience that there are a number of barriers to learning test automation.  First, the team you are on might not need certain types of automation.  For example, my team has been solely API-focused, so for the last two years I haven't had much reason to do UI automation.  Second, your company may already have invested in specific automation tools, so when you want to learn to use a different tool, you need to do it on your own.  Third, there are many tools that have barriers to using them, such as a high cost or a complicated setup.  And finally, there is not always good documentation out there to help people get started.

In this series, I'm hoping to provide simple, free examples that will demonstrate each area of the Automation Test Wheel in practice, which you can use as a jumping-off point in your own automation journey.  We'll begin with unit tests.



Unit tests are usually written by developers, but it's a good idea for all software testers to understand what they are and how they work.  Unit tests test just one method or function, and they aim to exercise as many paths of that method or function as possible.  The major benefit of unit tests is that they provide extremely fast, accurate feedback.

I'm going to use Python and Pytest to demonstrate how unit tests work.  I am admittedly not an expert in either, but I managed to put together a little project with just one function.  If you would like to try the code out yourself, I have added it to GitHub here.  (If you have never cloned a Git repository before, you can find some instructions here.)

To run the tests, you will need to have Python installed.  There are some great directions for installing Python here.  Installing Python 3 will most likely install pip, the Python package installer, but if it doesn't, you can get installation instructions here.  Finally, you'll need to use pip to install Pytest; there are instructions for that here.  (If all this seems like too much work, you can just read on and look at the examples.)

In the file called __init__.py, I've written a simple function called isItADozen that determines whether a number of objects equals a dozen.  Here's the code:

def isItADozen(input):
    if type(input) != int:
        result = "This is not a number"
    elif input == 12:
        result = "Yup, it's a dozen!"
    elif input < 12 and input > 0:
        result = "Nope, you have less than a dozen"
    elif input > 12:
        result = "You have more than a dozen here"
    elif input <= 0:
        result = "You don't have any at all!"
    return result

The function takes the input and first checks to see if it's an integer.  If not, it returns the message "This is not a number".  Then it checks to see whether the number is 12, between 0 and 12, more than 12, or less than or equal to 0.  Depending on which statement is true, it will return an appropriate message.  

Before we look at the unit tests, let's think about how we would test this function manually, assuming it had a UI interface.  We'd check to make sure it recognized 12 as a dozen, we'd check to make sure it recognized when a number was more than or less than a dozen, and we'd also try some of the usual QA tricks, like passing in a negative number or "FOO".  

That's exactly what we'll do with our unit tests!  Here is the code in my test.py file:

import unittest

from my_test import isItADozen

class TestDozen(unittest.TestCase):
    def test_a_dozen(self):
        result = isItADozen(12)
        self.assertEqual(result, "Yup, it's a dozen!")

    def test_more_than_a_dozen(self):
        result = isItADozen(15)
        self.assertEqual(result, "You have more than a dozen here")

    def test_less_than_a_dozen(self):
        result = isItADozen(10)
        self.assertEqual(result, "Nope, you have less than a dozen")

    def test_less_than_zero(self):
        result = isItADozen(-1)
        self.assertEqual(result, "You don't have any at all!")

    def test_not_a_number(self):
        result = isItADozen("FOO")
        self.assertEqual(result, "This is not a number")

if __name__ == '__main__':
    unittest.main()

I have five test cases here, aptly named:
test_a_dozen
test_more_than_a_dozen
test_less_than_a_dozen
test_less_than_zero
test_not_a_number

In each test, I call the isItADozen function with the number I want to test with.  Then I assert that the result I got matches the result I was expecting.  

To run my tests, I simply go to the command line, navigate to the unitTestProject folder, and type:

python3 test.py --verbose

(If you don't know how to use your command line, see my blog post from several years ago.)

You may not need the "3" in "python3"; I need it because I have Python 2.7.1 installed as well, and I want to run on the newer version.  You also don't need the "--verbose" flag; I like to use it because it shows me the names of my tests as they pass instead of ".....", which isn't much fun.  

Once you have the tests running, try making one fail by changing one of the assert statements like this:

self.assertEqual(result, "This is not the real result message")

Then see if you can change a test so that you are passing in a different value.  For example, you could pass the number 13 into the "test_more_than_a_dozen" test instead of 15.  
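
For example, the changed test might look like the sketch below (I've inlined a condensed copy of the isItADozen function here so the snippet stands on its own):

```python
import unittest

def isItADozen(input):
    # Condensed copy of the function from __init__.py, inlined
    # so this snippet runs on its own.
    if type(input) != int:
        return "This is not a number"
    if input == 12:
        return "Yup, it's a dozen!"
    if 0 < input < 12:
        return "Nope, you have less than a dozen"
    if input > 12:
        return "You have more than a dozen here"
    return "You don't have any at all!"

class TestDozen(unittest.TestCase):
    def test_more_than_a_dozen(self):
        # 13 is still more than a dozen, so the same message is expected
        self.assertEqual(isItADozen(13), "You have more than a dozen here")

if __name__ == '__main__':
    unittest.main(exit=False)  # exit=False lets the script continue after the run
```

Changing the input value and predicting which message should come back is a good way to convince yourself you understand every path through the function.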

Once you have mastered that, you may want to copy my entire test folder and see if you can write your own simple function and unit tests.  If you are more familiar with another programming language, you can try writing unit tests in that language as well.  

Hopefully this has been an easy way to get you started with writing unit tests!  We'll continue with the Easy Free Automation series next week.  

Saturday, January 12, 2019

Automation Wheel Strategy: Moving from What to How to When to Where

Last week, we talked about how I would decide what to test in a simple application in terms of testing every segment of the Automation Test Wheel.  I find it's very helpful to answer the question "What do I want to test?" before I think about how I'm going to test it.  This week we'll look at how to take the "What" of automated testing and continue on with how I want to test, when I want to test it, and where (what environment) I'm going to test it in.  As a reminder, my hypothetical application is a simple web app called Contact List, which allows a user to add, edit, and delete their contacts.


How I'm Going to Test:

I'm going to run my unit and component tests directly in the code.  Unit tests are designed to run in the code, because they are the smallest possible unit and test the code directly.  My component tests are very simple (just one call to the database and one call for authentication), so I will run those directly from my code as well.

For my services tests, I'm going to use Postman, which is my favorite API testing tool.  I'll run the Postman tests using Newman, which is the command-line tool for Postman.  I'll also include some security tests in Postman, validating that any requests without appropriate authentication return an appropriate error, and I will also do some performance checks here, verifying that the response times to my API requests are within acceptable levels.

For my UI tests, I'm going to use Selenium and Jasmine, because I like the assertion style that Jasmine uses.  I'll be adding a few security tests here, making sure that pages do not load when the user doesn't have access to them.  I'll also be integrating my visual tests into my Selenium tests, using Applitools, and I'll be using both Selenium and Applitools to run my accessibility tests. 

Finally, I would set up a performance testing tool such as Pingdom that would consistently monitor my web page loading times and alert me when load times have slowed.  

When I'm Going to Test:

Now that I've figured out how I'm going to test, it's time to think about when I'm going to run my tests.  I'm going to organize my tests into four times.

With every build: every time new code is pushed, I'm going to run my unit tests, component tests, and Newman tests.  These tests will give me very fast feedback.  I'm not going to run any UI tests at this time, because I don't want to slow my feedback down.

With every deploy: every time code is deployed to an environment, I'm going to run all my Newman tests, and a small subset of my Jasmine tests.  My Jasmine tests will include at least one visual check and one security check.  This will ensure that the API is running exactly as it should and that there are no glaring errors in the UI.

Daily: I'll want to run all of my Newman tests and all of my Jasmine tests early in the morning, before I start my workday.  When I begin my workday I'll have a clear indication of the health of my application.

Ongoing: As mentioned above, I'll have Pingdom monitoring my page load times throughout the day to alert me of any performance problems.  I'll also set up a job to run a small set of Newman tests periodically throughout the day to alert me of any server downtime.

Where I'm Going to Test:

Now that I've decided how and when to test, I need to think about where to test.  Let's imagine that my application has four different environments: Dev, QA, Stage, and Prod.  My Dev environment is solely for developers.  My QA environment is where code will be deployed for manual and exploratory testing.  My Stage environment is where a release candidate will be prepared for Production.  Let's look at what I will test in each environment.

Dev: My unit and component tests will run here whenever a build is run, as well as my Newman and Jasmine tests whenever a deploy is run.

QA: I'll run my full daily Newman and Jasmine suites here, and I'll run my full Newman suite and a smaller Jasmine suite with a deploy.

Stage: I'll run the full sets of Newman and Jasmine tests when I deploy.  This is because the Stage environment is the last stop before Prod, and I'll want to make sure we haven't missed any bugs.  I'll also run my Pingdom monitoring here, to catch any possible performance issues before we go to Prod.

Prod: I'll run a small set of daily Newman and Jasmine tests here.  I'll also point my Pingdom tests to this environment, and I'll have those tests and a set of Newman tests running periodically throughout the day.

Putting it All Together:

When viewed in prose form, this all looks very complicated.  But we have actually managed to simplify things down to four major test modalities tested at four different times.  Are we covering all the areas of the Automation Test Wheel?  Let's take a look:

Code         Newman         Jasmine         Pingdom
Unit         Services       UI              Performance
Component    Security       Security
             Performance    Visual
                            Accessibility

We are covering each different area with one or more testing modalities.  Now let's visualize our complete test plan:

         Hourly     Daily      Build        Deploy
Dev                            Unit         Newman
                               Component    Jasmine
                               Newman
QA                  Newman                  Newman
                    Jasmine                 Jasmine
Stage    Pingdom                            Newman
                                            Jasmine
Prod     Pingdom    Newman                  Newman
         Newman     Jasmine                 Jasmine

Viewed in a grid like this, our plan looks quite simple!  By considering each question in turn:

  • What do we want to test?
  • How are we going to test it?
  • When will we run our tests?
  • Where will we run them?

We've been able to come up with a comprehensive plan that covers all areas of the testing wheel and tests our application thoroughly and efficiently.  

Saturday, January 5, 2019

The Automation Test Wheel in Practice

Last week's blog post, "Rethinking the Pyramid: The Automation Test Wheel", sparked many interesting discussions on LinkedIn, Twitter, and in the comments section of this blog!  The general consensus was that the Test Pyramid is still useful because it reminds us that tests closest to the code are the fastest and most reliable to run, and that the Automation Test Wheel reminds us to make sure to include categories such as security, accessibility, and performance testing.  Also, a reader pointed us to Abstracta's Software Testing Wheel, which looks at the definition of quality from a number of different perspectives.

This week I'm talking about how to put the Automation Test Wheel into practice.  Let's imagine that I have a simple web app called Contact List.  It allows a user to log in, view a list of their contacts, and add new contacts.  I want to design a complete automation strategy for this application that will enable my team to deploy all the way up to production confidently.  In order to feel confident about the quality of my application, I'll want to be sure to include tests from every segment of the Automation Test Wheel.


Unit Tests: I will make sure that every function of my code has at least one unit test.  I'll run these tests using mock objects.  For example, I will create a list of mock contacts and a mock new contact, add the new contact, and verify that the new contact has been added to the list of mock contacts.  I'll update a contact with new data and verify that the contact has been updated in the list.  I'll create a mock contact with invalid data and verify that attempting to add the contact results in an appropriate error.  These are just some examples; for each function in my app, I'll want to have several tests which exercise all possible code paths.
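
A mock-based unit test of the kind described above might look like this sketch (the add_contact function and the contact data are illustrative stand-ins, not the real application code):

```python
# Sketch of a mock-object unit test: add a contact to a mock list
# and verify it was added; invalid data results in an error.

def add_contact(contact_list, contact):
    # Hypothetical stand-in for the application's "add contact" function.
    if not contact.get("name"):
        raise ValueError("Contact must have a name")
    contact_list.append(contact)
    return contact_list

# Happy path: the new contact appears in the mock list.
mock_contacts = [{"name": "Existing Contact"}]
add_contact(mock_contacts, {"name": "New Contact"})
assert {"name": "New Contact"} in mock_contacts

# Error path: a contact with invalid data raises an appropriate error.
try:
    add_contact(mock_contacts, {"name": ""})
    assert False, "expected a ValueError"
except ValueError:
    pass
```

Because everything here is in memory, tests like this run in milliseconds, which is what gives unit tests their fast feedback.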

Component Tests:  My application is very simple and relies on just one database.  The database is used for both authentication and for retrieving the contact data.  I will include one test for each function; I'll send an authentication request for a valid user and verify that the user is authenticated, and I'll make one request to the database to retrieve a known contact, and verify that the contact is retrieved.

Services Tests: My application has an API which allows me to do CRUD operations (Create, Read, Update, Delete) on my contacts.  I have a GET endpoint which allows me to retrieve the list of contacts, and a GET endpoint which allows me to retrieve one specific contact.  I have a POST endpoint which allows me to add a contact to the contact list.  I have a PUT endpoint which allows me to update the data for an existing contact, and I have a DELETE endpoint which allows me to delete an existing contact.  For each one of these endpoints, I will have a series of tests.  The tests will include both happy paths and error paths.  I'll verify that in each request, the response code is correct and the response body is correct.  For example, with the GET endpoint where I retrieve one contact, I'll verify that a GET on an existing contact returns a 200 response and the correct data for the contact.  I'll also verify that a GET on a contact that doesn't exist returns a 404 Not Found response.
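
The happy-path/error-path pattern can be sketched like this, using a tiny in-memory stand-in for the "get one contact" endpoint rather than real HTTP calls (the data and function names are illustrative):

```python
# In-memory stand-in for GET /contacts/<id>, to illustrate asserting
# on both the response code and the response body.
contacts = {1: {"firstname": "Amy", "lastname": "Smith"}}

def get_contact(contact_id):
    # Mimics the endpoint: 200 with the data, or 404 if missing.
    if contact_id in contacts:
        return 200, contacts[contact_id]
    return 404, {"error": "Not Found"}

# Happy path: an existing contact returns a 200 and the correct data.
status, body = get_contact(1)
assert status == 200 and body["firstname"] == "Amy"

# Error path: a contact that doesn't exist returns a 404.
status, body = get_contact(999)
assert status == 404
```

A real Services test would make the same two assertions against the live API, one test per endpoint per path.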

User Interface (UI) Tests: This is where I will be testing in the browser, doing activities that a real user would do. A real user will want to fetch their list of contacts, add a new contact, update an existing contact, and delete a contact.  I will have one test for each of these activities, and each test will have a series of assertions.  To take one example, when I add a new contact, I will navigate to the new contact page, fill in all the form fields, and click the Save button.  Then I will navigate to the list page and verify that my new contact appears on the page.

Visual Tests: This is where I will verify that elements are actually appearing on the page the way I want them to.  I will navigate to the list page and verify that all of the columns are appearing on the page.  I will navigate to the add contact page and verify that all of the form fields and their labels are appearing appropriately on the page.  I will trigger all possible error messages (such as the one I would receive if I entered an invalid zip code), and verify that the error appears correctly on the screen.  And I will verify that all of the buttons needed to use the application are rendering correctly.

Security Tests: I will run security tests at both the Services layer and the UI layer.  I will test the API operations relating to authenticating a user, verifying that only a user with the correct credentials will be authenticated.  I will test every request endpoint to make sure that only those requests with a valid token are executing; requests without a valid token should return a 401.  For the UI layer, I will conduct a series of login tests that validate that only a user with correct credentials is logged in, and I will verify that I cannot navigate to the list page or the add contact page without being logged in.
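
The token check described here can be sketched as follows (a toy stand-in with a hypothetical token store, not the real application's authentication code):

```python
VALID_TOKENS = {"abc123"}  # hypothetical token store

def handle_request(token):
    # Every endpoint should execute only with a valid token;
    # requests without one should return a 401.
    if token not in VALID_TOKENS:
        return 401
    return 200

assert handle_request("abc123") == 200  # valid token is accepted
assert handle_request("wrong") == 401   # invalid token is rejected
assert handle_request(None) == 401      # missing token is rejected
```

The security tests would exercise exactly these three cases against every request endpoint.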

Performance Tests: I will set benchmarks for both the server response time and the web page load time.  To measure the server response, I will add assertions to my existing Services tests that will verify that the response was returned within that benchmark.  To measure the web page load time, I will run a UI test that will load each page and assert that the page was loaded within the benchmark time.
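
Adding a response-time assertion to an existing test can be as simple as timing the call and comparing the elapsed time against the benchmark.  A sketch (the benchmark value and the timed function are placeholders):

```python
import time

RESPONSE_BENCHMARK_SECONDS = 0.5  # hypothetical benchmark

def call_and_time(fn):
    # Time an arbitrary call; in a real Services test, fn would make
    # the HTTP request under test.
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    return result, elapsed

result, elapsed = call_and_time(lambda: sum(range(1000)))
assert elapsed < RESPONSE_BENCHMARK_SECONDS, "response exceeded benchmark"
```

Keeping the benchmark in one named constant makes it easy to tighten or relax as the application's performance requirements change.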

Accessibility Tests:  I want to make sure that my application can be used by those with visual difficulties.  So I will run a set of UI and Visual tests on each page where I validate that I can zoom in and out on the text and that scroll bars appear and disappear depending on whether they are needed.  For example, if I zoom in on the contact list I will now need a vertical scrollbar, because some of the contacts will now be off the page.

With this series of automated tests, I will feel confident that I'll be able to deploy changes to my application and discover any problems quickly.

I've received a few questions over the last week about what percentage of total tests each of the spokes in the Automation Test Wheel should have.  The answer will always be "It depends".  It will depend on these and many other considerations:

  • How many other services does your application depend on?  If it depends on many external services, you'll need more Component tests.
  • How complicated is your UI?  If it has just a page or two, you'll need fewer UI and Visual tests.  If it has several pages with many images, you'll need more UI and Visual tests.
  • How complicated is your data structure?  If you are dealing with large data objects, you'll need more Services tests to validate that CRUD operations are being handled correctly.
  • How secure does your application need to be?  An application that handles personal banking will need many more Security tests than an application that saves pictures of kittens.
  • How performant does your application need to be?  A solitaire game doesn't need to be as responsive as a heart monitor.

The beauty of the Automation Test Wheel is that it can be tailored to all types of software applications!  By considering each spoke in the wheel, we'll be sure that we are creating great automated test coverage.
