
Saturday, December 28, 2019

New Year's Resolutions for Software Testers

I love New Year's Day!  There's something exciting about getting a fresh start and imagining all that can be accomplished in the coming year.  The new year is an opportunity to think about how we can be better testers, how we can share our knowledge with others, and how we can continue to improve the public perception of the craft of software testing.

Image by M Harris from Pixabay


Here are some suggestions for resolutions you could make to improve your testing and the testing skills of those around you:

Speak Up
Because testers are sometimes made to feel like second-class citizens compared to software developers, they might feel timid about voicing their opinions.  But testers often know more about the product they test than the developers, who are usually working in one small area of the application.  This year, resolve to speak up if you see an issue with the product that you think would negatively impact the end user, even if it isn't a "bug".  Similarly, speak up if you find a bug that the team has dismissed as unimportant, and state why you think it should be fixed.  Advocate for your user!  Make sure that the product your customers are getting makes sense and is easy to use.

Pay Attention in Product Meetings
I'm sure my Product Owner would be sad to read this (sorry, Brian!), but I find product meetings boring.  I know that the small details of the user's experience are important, and I'm so glad that there are people who care about where a notification badge is displayed.  But listening to the discussion where that decision is being made is not very exciting to me.  Still, I'm grateful to be included in these meetings, and every year I resolve to pay more attention to product decision-making than I did the year before, and to contribute when I have information that I think will be helpful.  Attending product meetings allows me to hear why certain choices are made, and also helps me think about what I need to test when a new feature becomes available.

Do Some Exploratory Testing
I suspect that most of us have some area of the application we test where we have a sneaking suspicion that things aren't working quite right.  Or there's a really old area of the application that no one knows how to use, because the people who initially built and tested it have since left the company.  But we are often too busy testing new features and writing test automation to take the time to really get to know the old and confusing areas of an application.  This year, resolve to set aside a few hours to do exploratory testing in those areas and share your findings with the team.  You may find some long-buried bugs or features that no one knows about!

Streamline Your Operation
Are there things your team does that could be done more efficiently?  Perhaps you have test automation that uses three different standards to name variables, making the variable names difficult to remember.  Perhaps your method of processing work items isn't clear, so some team members are assigning testing tickets while others are leaving them for testers to pick up.  Even if they seem like small problems, these types of inefficiencies can keep a team from moving as quickly as it could.  Resolve to notice these issues and make suggestions for how they can be improved.

Learn Something New
This year, learn a new tool or a new language.  You don't have to become a master user; just learn enough to be able to say why you are using your current language or tool over the new one you've learned.  Or you could discover that the new language or tool suits your needs better, in which case you can improve your test automation.  Either way, learning something new makes you more employable the next time you are looking for a new position.

Share Your Knowledge With Your Team
Don't be a knowledge hoarder!  Your company and your software will be better when you share your knowledge about the product you are testing and the tools you are using to test it.  Sometimes misguided people hold on to knowledge thinking it will make them indispensable.  This will not serve to keep you employed.  In today's world, sharing information so that the whole team can be successful is the best way to be noticed and appreciated.  Resolve to hold a workshop for the other testers on your team about the test automation you are writing, or create documentation that shows everyone how to set up a tricky test configuration.  Your teammates will thank you!

Share Your Knowledge With the Wider World
If I had one wish for software testers for the year 2020, it would be that we would be seen by the wider tech community as the valuable craftspeople we are.  If you are an awesome software tester- and I'm guessing you are because you are taking the time to read a blog about testing- share your skills with the world!  Write a blog post, help someone on Stack Overflow, or present at a local testing meetup.  You don't have to be the World's Most Authoritative Expert on whatever it is you are talking about, nor do you have to be the Best Speaker in the World.  Just share the information you have freely!  We will all benefit from your experience.

What New Year's resolutions do you have for your software testing?  Please share in the comments below!




Saturday, December 21, 2019

A Question of Time

Time is the one thing of which everyone gets the same amount.  Whether we are the CEO of a company or we are the intern, we all have 1440 minutes in a day.  I've often heard testers talk about how they don't have enough time to test, and that can certainly happen when deadlines are imposed without input from everyone on the team.  I've written a blog post about time management techniques for testers, but today I'm going to tackle the question:

Is it worth my time to automate this task?



Sometimes we are tempted to create a little tool for everything, just because we can.  I usually see this happen with developers more than testers, but I do see it with some testers who love to code.  However, writing code does not always save us time.  When considering whether to do a task manually or to write automation for it, ask yourself these four questions:

1. Will I need to do this task again?

Recently my team was migrating files from one system to another system.  I ran the migration tool manually and did manual checking that the files had migrated properly.  I didn't write any automation for this, because I knew that I was never going to need to test it again.

Contrast this with a tester from another team who is continually asked to check the UI on a page when his team makes updates.  He got really tired of doing this again and again, so he created a script that will take screenshots and compare the old and new versions of the page.  Now he can run the check with the push of a button.

2. How much time does this task take me, and how much time will it take me to write the code?

Periodically my team's test data gets refreshed, and that means that the information we have for our test users sometimes gets changed.  When this happens, it takes about eight hours to manually update all the users.  It took me a few hours to create a SQL script that would update the users automatically, but it was totally worth my time, because now I save eight hours of work whenever the data is refreshed.
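
As a rough illustration of what that kind of script can look like, here is a minimal sketch; the table and column names (TestUsers, IsActive, Region, UserName) and the "qa_" naming convention are made up for the example, and the real script depends entirely on what the data refresh changes.

-- Hypothetical sketch: re-apply known test values to every QA user after a data refresh
UPDATE TestUsers
SET IsActive = 1,
    Region = 'MA'
WHERE UserName LIKE 'qa_%';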

But there have been other times where I've needed to set up some data for testing, and a developer has offered to write a little script to do it for me.  Since I can usually set up the data faster than they can create the script, I decline the offer.

3. How much time will it take to maintain the automation I'm writing?

At a previous job, I was testing email delivery and I wanted to write an automated test that would show that the email had actually arrived in the Gmail test account.  The trouble was that there could be up to a ten minute delay for the email to appear.  I spent a lot of time adjusting the automated test to wait longer, to have retries, and so on, until finally I realized it was just faster for me to take that assertion out of the test, and manually check the email account from time to time.

However, my team's automated API smoke tests take very little time to maintain, because the API endpoints change so infrequently that the tests rarely need to change.  The first API smoke test I set up took a few days; but once we had a working model it became very easy to set up tests for our other APIs.

4. Does the tool I'm creating already exist?

At a previous company, the web team was porting over many customers' websites from one provider to another.  I was asked to create a tool that would crawl through the sites and locate all the pages, and then crawl through the migrated site to make sure all the pages had been ported over.  It was really fun to create this tool, and I learned a lot about coding in the process.  However, I discovered after I made the tool that web-crawling software already exists!

But in that particular month I did have the time to create the tool, and the things I learned helped me with my other test automation.  So sometimes it may be worth "reinventing the wheel" if it will help you or your team.

The Bottom Line: Are you saving or wasting time?

All of these questions come down to one major consideration, and that is whether your task is saving or wasting time.  If you are a person who enjoys coding, you may be tempted to write a fun new script for every task you need to do; but this might not always save you time.  Similarly, if you don't enjoy coding, you might insist on doing repetitive tasks manually; but using a simple tool could save you a ton of time.  Always consider the time-saving result of your activities!

Saturday, December 7, 2019

Measuring Quality

The concept of measuring quality can be a hot-button topic for many software testers.  This is because metrics can be used poorly; we've all heard stories about testers who were evaluated based on how many bugs they found or how many automated tests they wrote.  These measures have absolutely no bearing on software quality. A person who finds a bug in three different browsers can either write up the bug once or write up a bug for each browser; having three JIRA tickets instead of one makes no difference in what the bug is!  Similarly, writing one hundred automated tests where only thirty are needed for adequate test coverage doesn't ensure quality and may actually slow down development time.

But measuring quality is important, and here's why: software testers are to software what the immune system is to the human body.  When a person's immune system is working well, they don't think about it at all.  They get exposed to all kinds of viruses and bacteria on a daily basis, and their immune system quietly neutralizes the threats.  It's only when a threat gets past the immune system that a person's health breaks down, and then they pay attention to the system.  Software testers have the same problem: when they are doing their job really well, there is no visible impact in the software.  Key decision-makers in the company may see the software and praise the developers that created it without thinking about all the testing that helped ensure that the software was of high quality.



Measuring quality is a key way that we can demonstrate the value of our contributions.  But it's important to measure well; a metric such as "There were 100 customer support calls this month" means nothing, because we don't have a baseline to compare it to.  If we have monthly measurements of customer support calls, and they went from 300 calls in the first month, to 200 calls in the second month, to 100 calls in the third month, and daily usage statistics stayed the same, then it's logical to conclude that customers are having fewer problems with the software.

With last week's post about the various facets of quality in mind, let's take a look at some ways we could measure quality.

Functionality:
How many bugs are found in production by customers?
A declining number could indicate that bugs are being caught by testers before going to production.
How many daily active users do we have? 
A rising number probably indicates that customers are happy with the software, and that new customers have joined the ranks of users.

Reliability:
What is our percentage of uptime?  
A rising number could show that the application has become more stable.
How many errors do we see in our logs?  
A declining number might show that the software operations are generally completing successfully.

Security:
How many issues were found by penetration tests and security scans?  
A declining number could show that the application is becoming more secure.

Performance:
What is our average response time?
A stable or declining number will show that the application is operating within accepted parameters.

Usability:
What are our customers saying about our product?
Metrics like survey responses or app store ratings can indicate how happy customers are with an application.
How many customer support calls are we getting?
Increased support calls from customers could indicate that it's not clear how to operate the software.

Compatibility:
How many support calls are we getting related to browser, device, or operating system?
An increased number of support calls could indicate that the application is not working well in certain circumstances.
What browsers/devices/operating systems are using our software?
When looking at analytics related to app usage, a low participation rate by a certain device might indicate that users have had problems and stopped using the application.

Portability:
What percentage of customers upgraded to the new version of our software?
Comparing upgrade percentages with statistics of previous upgrades could indicate that the users found the upgrade process easy.
How many support calls did we get related to the upgrade?
An increased number of support calls compared to the last upgrade could indicate that the upgrade process was problematic.

Maintainability:
How long does it take to deploy our software to production?
If it is taking longer to deploy software than it did during the last few releases, then the process needs to be evaluated.
How frequently can we deploy?
If it is possible to deploy more frequently than was possible six months ago, then the process is becoming more streamlined.

There's no one way to measure quality, and not every facet of quality can be measured with a metric.  But it's important for software testers to be able to use metrics to demonstrate how their work contributes to the health of their company's software, and the above examples are some ways to get started.  Just remember to think critically about what you are measuring, and establish good baselines before drawing any conclusions.

Saturday, November 30, 2019

The Hierarchy of Quality


About a year ago, I wrote a post suggesting that we could think about automation in terms of a test wheel, where each section of the wheel represented a different type of automation.  A reader who works at Abstracta told me that my wheel reminded her of the wheel they use to think about all of the different facets of quality.  I thought their wheel was so great that I knew I would eventually want to write a post about it.

I've been thinking about the different types of quality mentioned in Abstracta's Software Testing Wheel, and wondering what I would do if I was brought on to a project that had never had any testing and I needed to start from scratch.  Where would I begin my testing?  I thought about what the most important needs are for quality, and I was reminded of Maslow's Hierarchy of Needs.


For those who are unfamiliar with this psychological concept, this is a theory that all human beings need to have certain basic needs met before they can grow as people.  The needs are as follows:

1. Physiological needs- food, water, shelter
2. Safety needs- security, property, employment
3. Love and belonging- friendship, family
4. Esteem- respect, self-esteem
5. Self-actualization- becoming the best person one can be

Looking at this list, it's clear that physiological needs are the most important.  After all, it doesn't matter if you have high self-esteem if you have no water to drink.  Each successive need builds on the more important one before it.

With this in mind, I realized that there is a Hierarchy of Quality- certain conditions of quality that need to be met before a team can move on to the next area of quality.  Here is my perception of where the different areas of the Abstracta test wheel fall in the hierarchy:

1.  Functionality and Reliability

These two areas share the most important spot.  Functionality means that the software does what it's supposed to do.  This is critical, because without this, the application might as well not exist.  Imagine a clock app that didn't tell time, or a calculator that didn't add numbers.

Reliability means the software is available when it's needed.  It doesn't really matter if the app works if a user can't get to it when they need it.

Once these quality needs have been met, we can move on to the next level:

2. Security and Performance

Security is important because users need to feel that their data is being protected.  Even applications that don't have login information or don't save sensitive data still need to be protected from things like cross-site scripting, which might allow a malicious user to gain control of someone else's device.

Performance is also important, because no one wants to wait for sixty seconds for a web page to load.  If an application isn't responsive enough, the users will go elsewhere.

Now that the application is secure and performant, we can go to the third level:

3. Usability and Compatibility

This is the level where we make sure that as many users as possible have a good experience with the application.  Usability means that the workflows of an application are intuitive so users don't get confused.  It also means that the application is internationalized, so users all around the world can use it, and that it is accessible, so users with visual, auditory, or physical differences can use it as well.

Compatibility means that users with different operating systems, browsers, or devices can use the application.  Have you ever filled out a form in a browser and had it not save correctly when you clicked the button?  This has happened to me more than once, and I've needed to fill out the form again in a different browser to have it save correctly.  It's important that our users have a positive experience no matter where they are using the software.

Now that we've made our application accessible to as many users as possible, it's time to go on to the next level:

4. Portability

Portability covers how easy it is to move an application from one place to another.  One example of portability would be the way I can access my Google Drive files on my laptop, my tablet, and my phone.  Portability also refers to how easily an application can be installed or updated.  We also want our application to keep working when a device has an operating system upgrade.

Finally, we have thought about all of our users' needs.  Now it's time for one more level:

5. Maintainability

This is a level of quality that benefits the software team.  Maintainability refers to how easily an application can be updated.  Is it possible to add new APIs or update existing ones?  How easy is it to test the system?  Is it easy to deploy new code?  Is it easy for other teams to use the code?  Is the code clear and easy to understand?

When software is accessible and easy to use for all end users, AND is easy to work with and maintain for the development team, then truly high quality has been achieved.

I hope that this Hierarchy of Quality will help you make decisions about what areas of an application should be focused on first when there are a number of different quality areas competing for your team's attention.

What do you think of this order?  Do you agree or disagree with where I placed items in the hierarchy?  Are there any missing quality areas?  Let me know in the comments below!

Saturday, November 23, 2019

...but TEST like a QA Engineer!

In last week's post, I wrote about how it is important for software testers to code like a developer.  But there is a second half of the sentence "Code like a developer...", and that is that software testers should be TESTING. 

I'm not a stickler for using the right word for testing-related concepts, which is why I use the term "test automation".  But automated testing is really automated checking.  Automated tests serve a very valuable purpose in that they can run regression checks at any hour of any day, without human intervention.  But they do not actually test the software.



A sad casualty of the very important move towards test automation is the QA Engineer.  Many large software companies don't employ QA Engineers any more, feeling sure that Software Developers in Test are all that's needed to validate the quality of their software.  And many Software Developers in Test focus solely on the automation, working from acceptance criteria in development stories and looking at the code rather than manually interacting with the software.  How is that trend working out for end users?  

Just this week, I experienced the following:  I received a (legitimate) email that I had some money to accept from PayPal.  The email contained a button to click that said "Accept the Money".  When I clicked it, I got a message that said "The previous page is sending you to an invalid URL."  

Last week when I was using a mobile app, a screen that I needed stayed permanently blank.  And in a post I wrote two weeks ago, I mentioned that while I was writing, Blogger had a page load error when I tried to add an image.  

Three weeks, three major companies, three bugs.  This is what comes from not employing people who think and act like testers.  

It's true that the whole software development team owns quality, and that quality is everyone's responsibility.  And there are also non-QA people who care deeply about certain areas of an application:
  • Developers write unit tests to check the quality of their code
  • Product Owners care about whether the feature does what it's supposed to
  • UX Designers care about whether the user journey is intuitive
  • Security testers check the software for vulnerabilities
  • Performance engineers care about the response time of the application 

But only QA Engineers care so much about the quality of the application that they'll do things like:
  • Type ~!@#$%^&*()-=_+{}|[]\:";'<>?,./ into every text field to test for invalid character handling
  • Try to purchase -1, 99999999999, 1.3415, and foo of something
  • Enter a birth year of 3019 to see what happens
  • Click every button twice to check for multiple submissions
  • Click the forward and back button on every single page of a website
  • Test 48 different permutations of feature sets to be as thorough as possible 
  • Create dozens of test users with many varieties of security settings, to have scenarios ready for testing at a moment's notice
  • Become an expert on a particular feature and provide documentation and assistance to other testers
  • Test the same thing in the QA environment, the Staging environment, the Demo environment, and the Production environment to make absolutely sure that the feature is working everywhere 
  • Test every feature on every supported browser and every supported mobile device

This is why we need software testers who TEST.  We need people who will continually ask themselves "How could we break this?", "What haven't we tested yet?", and "What features will be used with this?".  We need software testers who don't rush into writing automation without first interacting with a feature.  We need software testers who remember that the goal of all their efforts is to have a user who has a positive, bug-free experience.  


Saturday, November 16, 2019

Code Like a Developer...

I'll be honest: I don't love coding.  Don't get me wrong, I love test automation!  I love the feeling of solving a technical challenge and coming up with a great way to automatically assert that software is doing what it's supposed to be doing.  I love maintaining and updating my automated test suites.  But the actual writing of the code is not my favorite thing.  Whenever I find myself having to write another nested "for" loop, I sigh inwardly.

However, with all the coding I've done over the years, I've come to really appreciate the work that software developers do!  Software is complex stuff, and developers have come up with great ways to set standards, share repositories, and review each other's work.

The test automation code we write is important; just as important as the code the software developers are writing.  Therefore, we should write our code with the same standards the developers use.  Here are a few suggestions for coding practices you should adopt:


Your code should live in the same repository as the developers' code.
This is for a few reasons: first, the developers' unit tests reside with the code, so it makes sense to have your integration and UI tests in the same place.  Secondly, it's easier to maintain one repository instead of two; and finally, having your code in the same place serves to remind the whole team that test automation is everyone's responsibility.  

Write clean code.
When I first got started with test automation, I had absolutely no idea what I was doing.  All I had was my manual testing experience and a couple of courses in Java and C++.  I did a lot of Googling and a lot of guessing as I put together my first Selenium tests.  After much work, they ran and (mostly) passed, but boy, were they lousy!  I didn't know anything about how to write clean code.  Fortunately I had great developers around to teach me how to make my code better.

Here are some of the principles of writing clean code:
  • Keep it simple.  Always look over your code and ask yourself if there's a simpler way of doing what it is that you are trying to do.  Sometimes the obvious solution to a testing problem only becomes clear after you have solved it in a complicated way; now it's time to go back and solve it more elegantly. 
  • Don't repeat yourself.  If there's something you're doing in more than one test- for example, logging in to the application- write a method that you can call instead of putting those steps into every test.  Similarly, create a file where you save all of your variables and element locators, and have all of your tests refer to that file.  That way if a variable or a locator changes, you can make the change in one place rather than several.
  • Be consistent.  Consistent code is easier to read.  Be consistent with your casing: if you have a variable for the user's first name called "firstName", don't make the variable for the user's last name "LastName".  Follow the conventions that your developers are using: if they indent with two spaces, you should too.  If they put their opening curly braces on a separate line, you should as well.
  • Comment your code.  It's not always obvious what test automation code is doing at first glance, and while you might be quite used to the syntax you are using for your tests, your developers might not be familiar with it.  Simple comments like "Polling the queue for the delete request" can be really helpful in explaining your intent.  Moreover, what might seem really obvious to you now might not be obvious in three months when you need to update the test!  Your future self will thank you for the comments you write today.  
Solicit feedback.  
Like me, you may not have had a thorough grounding in good coding principles.  Some of the best software testers I've had the pleasure of working with did not major in Software Engineering.  If you did not go through rigorous training in software development, it's important to get feedback from the developers you work with.  On my team, the software testers often review and approve each other's code, but I also like to have my code checked by developers to make sure I'm not doing anything unusual or creating steps that could possibly result in a race condition.  

Test automation helps the whole team by speeding up the feedback process and freeing testers up to do more exploratory testing.  We owe it to our whole team to write quality code that is readable, runs quickly and consistently, and provides valuable feedback!

You may be wondering why the title of this blog post ends with "...".  Be sure to check out next week's blog to read the other half of the story!  



Saturday, November 9, 2019

SQL Query Secrets

Have you ever been querying a SQL table, and one of your queries seems to take forever?  And then the next query you run takes milliseconds?  This would frequently happen to me, and I thought it meant that the server that hosted the database was unreliable in some way.  But this week I learned about indexes, and that the way we structure our queries has a huge impact on how long they will take to execute!  In this post, I'll describe what indexes are and talk about the ways we can use them to optimize our queries.



An index is a database structure that is designed to speed up queries in a table.  An easy way to understand this is to think about the index at the back of a book.  Let's say you have a book on car repair, and you want to find information about your car's brakes.  You could look up "brakes" in the index, or you could search through every single page of the book for the word "brakes".  It's pretty obvious which would take less time!

Unlike books, databases can have more than one index.  There are two different kinds of indexes: clustered and unclustered.  A clustered index is used to store a table in sorted order.  There can only be one clustered index, because the table can be stored in only one order.  An unclustered index doesn't change how the table itself is stored; instead, it keeps a separate, sorted structure of certain fields, along with pointers back to the rows in the table.

Let's take a look at an example.  If we had a table like this, called the Users table:

UserId | State | LastName  | FirstName | Email               | Mobile Phone
1      | MA    | Prunewhip | Prunella  | pprunewhip@fake.com | 800-867-5309
2      | RI    | Schmoe    | Joe       | jschmoe@notreal.com | 401-555-8765
3      | NH    | Smith     | Amy       | amysmith@foo.com    | 603-555-3635
4      | RI    | Jones     | Bob       | bob@bar.com         | 401-555-2344
5      | MA    | Jones     | Amy       | aj@me.com           | 617-555-2310

and we had a clustered index defined to have UserId as the key, a search on UserId would be very fast, and the data returned would be in order by UserId.

The table could also use unclustered indexes, such as the following:

State- the records in the table are indexed by state
LastNameFirstName- the records in the table are indexed by LastName and FirstName
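
For reference, in SQL Server syntax (where the keyword for an unclustered index is NONCLUSTERED), indexes like the ones described above might be created with statements along these lines; the index names here are just made up for the example:

-- One clustered index: the table itself is stored in UserId order
CREATE CLUSTERED INDEX IX_Users_UserId ON Users (UserId);

-- Unclustered (nonclustered) indexes: separate structures that point back to the rows
CREATE NONCLUSTERED INDEX IX_Users_State ON Users (State);
CREATE NONCLUSTERED INDEX IX_Users_LastNameFirstName ON Users (LastName, FirstName);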

When you query a database, the query will first look to see if an index can be used to speed up the search.  For example, if I made the request 
select LastName, FirstName from Users where UserId = 5 
the query would use the UserId index and the LastNameFirstName index to find the record.

Similarly, if I made the request
select LastName, FirstName from Users where State = 'MA'
the query would use the LastNameFirstName index and the State index to find the record.

Of course, with a table of only five records, optimizing in this way won't make much of a difference.  But imagine that this table had five million records, and you can see how using an index would be very helpful.

Querying a table on a non-indexed field is called a table scan.  The query needs to search through the entire table for the values, just as a person who wasn't using a book index would have to search through every single page of the book.  
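
For instance, with the example indexes above, filtering on Email would cause a table scan, because Email isn't a key in any of the indexes:

-- No index has Email as a key, so every row in the table must be examined
select UserId from Users where Email = 'aj@me.com'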

How can you know what indexes a table has?  You can find out with one simple query:
EXEC sp_helpindex "Users" 
where you would replace "Users" with whatever the name of the table is.  This will return a result of all of the clustered and unclustered indexes applied to the table, and the result will include the name of the index, a description of the index, and all the keys used in the index.

If you want to optimize your SQL queries, only ask for the data that you really need, rather than asking for select *.  Because not every field in the table is indexed, looking for every field will take longer.  

Let's say that you want to query the Users table to find the email addresses of all of the users who live in Massachusetts (MA).  But you also would like to have some more information about those users.  You could ask for 
select FirstName, LastName, Email from Users where State = 'MA'.
To find the records, the query will use the LastNameFirstName index and the State index.  Only the Email will be a non-indexed field.

But if you asked for
select * from Users where State = 'MA'
now the query needs to look for two different non-indexed fields: Email and Mobile Phone.

Another helpful tip is to specify all the keys in an index when you want to use that index to make a query.  For example, if you wanted to find the Email for Prunella Prunewhip, you should ask for 
select Email from Users where LastName = 'Prunewhip' and FirstName = 'Prunella'
rather than asking for
select Email from Users where LastName = 'Prunewhip'.
In the second example, the query can't make full use of the LastNameFirstName index, because only one of its keys is specified.

And when you want to use an index, the query may run faster if you specify the keys in the order they appear in the index, so it's better to say
where LastName = 'Prunewhip' and FirstName = 'Prunella'
than it is to say
where FirstName = 'Prunella' and LastName = 'Prunewhip'

Here's one more tip: when you want to use an index, be sure not to manipulate one of the index keys in your query, because this will mean that the index won't be used.  For example, if you had a table like this, called Grades:

StudentId | LastName | FirstName | Grade
1         | Miller   | Kara      | 89
2         | Smith    | Carol     | 56
3         | Jones    | Bob       | 99
4         | Davis    | Frank     | 78
5         | Green    | Doug      | 65

and you had an unclustered index called LastNameGrade, and you executed a query like:
select LastName from Grades where (Grade + 100) = 178
the LastNameGrade index wouldn't be used, because the Grade value was being manipulated.  It's necessary for the query to go through the entire table and add 100 to each Grade field in order to search for the correct value.
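
If the arithmetic is moved off of the indexed column so that Grade stands alone in the comparison, the index can be used again.  A minimal rewrite of the same search:

-- Grade is no longer being manipulated, so the LastNameGrade index can be used
select LastName from Grades where Grade = 78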

Armed with this knowledge, you should be able to create queries that will run as fast as possible, getting you the data you need.  I'd like to extend my thanks to my colleague Cindy Gall, whose informative workshop inspired this post!

Saturday, November 2, 2019

Six Ways Chrome DevTools Can Help With Testing

Did you know that there is a wealth of testing tools right in your browser?  Web browsers like Chrome and Firefox have developer tools that are available for free, for everyone.  And these tools are not just for developers!  In this post, I'll be sharing six ways that Chrome DevTools can help you with your testing.



To access Chrome DevTools, simply click on the three-dot menu in the upper right corner of your browser, choose "More Tools", and then choose "Developer Tools".  DevTools will open up alongside your browser window.  You can customize where you would like the tools to display by clicking on the three-dot menu in the DevTools nav bar and selecting an option for "Dock Side".  You can choose to have the DevTools display on the left, on the right, on the bottom, or in a separate window.

Here are some of the things that DevTools can do:

1. Inspect an HTML Element
Have you ever been writing UI automation and you just can't figure out how to access an element?  With DevTools, you can right-click on the element and choose "Inspect", and the Elements pane of DevTools will show you the element in the HTML.  You can then use this information to figure out the best way to access the element.

2. Edit HTML Elements
Not only can you find an element in the HTML, you can also edit it!  This is great for security testing.  Imagine that there is a page with a button that is hidden for users who are not admins.  A malicious user could find that element using DevTools, remove the "hide" tag, and use the button.  So it's helpful to try this while testing to verify that there's an additional check for user permissions when the button is used.

To edit an element, right-click on it in the HTML displayed in the Elements pane, and choose "Edit as HTML".  Make whatever edits to the element you want, then click out of the edit box.  You should see the element on the page change as a result of your edits.

3. View HTTP requests
If you click on the Network tab of DevTools, you can see all of the requests made to the server while using a web page.  This includes API calls, which you can then copy and use in a tool like Postman.  This feature is helpful for determining if your page is making the API calls that you are expecting, and it's also great for security testing.  For example, just because the front-end of a web page doesn't allow a user to submit a field with more than 50 characters doesn't mean that it can't be done.  If a malicious user copies the API call and submits it through Postman, through a curl command, or through some other tool, they may be able to send more than 50 characters directly to the server.  This is why it's important to have both front-end and back-end validation on a website.

4. Simulate device frames
When you are testing a webpage, it's important to make sure that the page appears correctly on both laptops and mobile devices.  But even the most well-equipped tester doesn't have access to every single device in use today.  So DevTools comes with a simulator that shows roughly what your webpage will look like on various devices.  To access this feature, click on the device icon in the toolbar.  This will open the simulator in the webpage side of the browser.  Then you can use the dropdown to select specific devices (which seem to be a bit obsolete), or you can choose the "Responsive" setting and then manually expand or contract the window to get the size you want.  The exact size is displayed in the navbar at the top.

5. Simulate performance on slower networks
Testing a webpage while in your office usually means you are using a great high-speed network.  But what about your users who have slower connections?  You can use DevTools to simulate slower connections and throttled CPU, which could help uncover race conditions in your application.  To use this feature, go to the Performance tab in the navbar.  In the Network dropdown, you can choose "Fast 3G", "Slow 3G", or "Offline", and in the CPU dropdown, you can choose "No throttling", "4x slowdown" or "6x slowdown".  Don't forget to reverse your changes when you are done testing!

6. Investigate page load errors
As I was creating this post, I was reminded of one more way that DevTools are helpful.  I was trying to upload the Chrome logo to my post, and the popup that I usually use to add an image was completely blank.  I went to the Console tab of DevTools and saw that there was a 404 "File not found" error when I clicked on the Add Images button in Blogger.  When you are testing your team's application and you've found a bug on a page, checking for errors in the console can help you give more information to your developers so they can get to the root of the problem more quickly.

Sometimes the most useful testing tools are right there in front of you!  I hope this post has inspired you to take a look at DevTools to see how it can help you in your testing.




Saturday, October 26, 2019

The Power of Not Knowing

Recently I saw a tweet from Ben Simo (@QualityFrog) that mentioned that he sometimes likes to practice what he calls "intentional ignorance"- where he doesn't read some of the documentation or code for a new feature to see what he can find while doing exploratory testing.  His tweet reminded me that I used to do this too!

I haven't done this in a while, because the team I work on is a great Agile team.  The testers are invited to the feature grooming sessions, each story has acceptance criteria written, and the developers do a feature handoff with the testers when each story is ready for testing.

But at previous companies, I was often given a story to test with no feature handoff and no acceptance criteria.  Sometimes the story wouldn't even have a description, and would have some cryptic title, like "Endpoint for search".  I would usually be annoyed by this, and I would ask for clarification, but I would first use it as an opportunity to do some exploratory testing while I had no preconceived notions of what the feature could or couldn't do.  And while testing in this fashion, I would often find a bug, show it to the developer, and have him or her say, "Oh, it never even occurred to me to test the feature in that way."


Of course I don't want to go back to the days of cryptic story titles with no description!  But testing without knowing what the feature does can have some benefits:

  • You approach the application the same way a user would.  When your users see your new feature for the first time, they don't have the benefit of instructions.  By trying out the feature without knowing how it works, you could discover that an action button is hard to find, or that it's difficult to know what to do first on a multi-part form.  
  • You might try entering data that no one was expecting.  For example, there could be a form field where the date was supposed to be entered with month and day only, but you enter in the month, day, and year, which breaks the form.  
  • Without any instructions from the developer, you might think of other features to test the new feature with, besides those the developer thought of.  Those feature combinations might yield new bugs.

So how can we add these advantages back into our testing without skipping reading the acceptance criteria and having feature handoffs?  Here are a few ways:

  • Pair test with someone on another team.  At my company we have many teams, each of which often has no idea what the other teams are building.  Four times a year, the software testers get together in pairs where the two testers are from very different teams, and they swap applications and start testing.  This is a great way to find bugs and user experience issues!
  • When you start testing, spend some time just playing around with the new feature before writing a test plan.  By exploring in this way, you might come up with some unusual testing ideas.
  • After you've tested the acceptance criteria, take some time to think about what features might be used with the new feature.  What happens when you test them together?  For example, if you were testing a new page of data, you could test it with the global sort feature that already exists in your application.

Of course, there are also times where not knowing all the details about a feature is detrimental.  There have been times in my testing career where I tested a feature and completely missed something that the feature could do, because no one told me about it.  That's why I'm glad that we have acceptance criteria and feature handoffs.  But there are also times when not knowing can yield some of the most interesting bugs.

Saturday, October 19, 2019

Your Flaky Tests Are Destroying Trust


Anyone who has ever written an automated test has experienced test flakiness.  There are many reasons for flaky tests, including:
  • Environmental issues, such as the application being unavailable
  • Test data issues, where an expected value has been changed
  • UI issues, such as a popup window taking too long to appear


All of these reasons are valid explanations for flaky tests.  However, they are not excuses!  It should be your mission to have all of your automated tests pass every single day, except of course when an actual bug is present.

This is important not just because you want your tests to be reliable; it's important because when you have flaky tests, trust in you and in your team is eroded.  Here's why:

Flaky tests send the message that you don't care
Let's say you are the sole automation engineer on a team, and you have a bunch of flaky tests.  It's your job to write test automation that actually checks that your product is running correctly, and because your tests are flaky, your automation doesn't do that.  Your team may assume that this is because you don't care about whether your job is done properly.

Flaky tests make your team suspect your competence
An even worse situation than the previous example is one where your team simply assumes that you haven't fixed the flaky tests because you don't know how.  This further erodes their trust in you, which may spill over into other testing.  If you find a bug when you are doing exploratory testing, your colleagues might not believe that it's a real bug, because they think you are technically incompetent.

Flaky tests waste everyone's time
If you are part of a large company where each team contributes one part of an application, other teams will rely on your automation to determine whether the code they committed works with your team's code.  If your tests are failing for no reason, people on other teams will need to stop what they are doing and troubleshoot your tests.  They won't be pleased if they discover that there's nothing wrong with the app and your tests are just being flaky.

Flaky tests breed distrust between teams
If your team has a bunch of flaky tests that fail for no good reason, and you aren't actively taking steps to fix them, other teams will ignore your tests, and may also doubt whether your team can be relied upon.  In a situation like this, if Team B commits code and sees that Team A has failing tests, they may do nothing about it, and may not even ask Team A about the failures.  If there are tests that fail because there are real issues, your teams might not discover them until days later.

Flaky tests send a bad message to your company's leadership 
There's nothing worse for a test team than to have test automation where only 80% (or less) of the tests pass on a daily basis.  This sends a message to management that either test automation is unreliable, or you are unreliable!

So, what can we do about flaky tests?  I'd like to recommend these steps:

1. Make a commitment to having 100% of your tests pass every day.  The only time a test should fail is if a legitimate bug is present.  Some might argue that this is an impossible dream, but it is one to strive for.  There is no such thing as perfect software, or perfect tests, but we can work as hard as we can to get as close as we can to that perfection.

2. Set up alerts that notify you of test failures.  Having tests that detect problems in your software doesn't help if no one is alerted when test failures happen.  Set up an alert system that will notify you via email or chat when a test is failing.  Also, make sure that you test your alert.  Don't assume that because the alert is in place it is automatically working.  Make a change that will cause a test to fail and check to see if you got the notification.

3. Investigate every test failure and find out why it failed.  If the failure wasn't due to a legitimate bug, what caused the failure?  Will the test pass if you run it again, or does it fail every time?  Will the test pass if you run it manually?  Is your test data correct?  Are there problems with the test environment?

4. Remove the flaky tests.  Some might argue that this is a bad idea because you are losing test coverage, and the test passes sometimes.  But this doesn't matter, because when people see that the test is flaky they won't trust it anyway.  It's better to remove the flaky tests altogether so that you demonstrate that you have a 100% passing rate, and others will begin to trust your tests.

An alternative would be to set the flaky tests to be skipped, but this might also erode trust.  People might see all the skipped tests and see them as a sign that you don't write good test automation.  Furthermore, you might forget to fix the skipped tests.

5. Fix all the flaky tests you can.  How you fix the flaky tests will depend on why they are flaky.  If you have tests that are flaky because someone keeps changing your test data, change your tests so that the test data is set up in the test itself.  If you have tests that are flaky because sometimes your test assets aren't deleted at the end of the test, do a data cleanup both before and after the test.

6. Ask for help.  If your tests are flaky because the environment where they are running is unreliable, talk to the team that's responsible for maintaining the environment.  See if there's something they can do to solve the problem.  If they are unresponsive, find out if other teams are experiencing the issue, and lobby together to make a change.

7. Test your functionality in a different way.  If your flaky test is failing because of some element on the page that isn't loading on time, don't try to solve the issue by making your waits longer.  See if you can come up with a different way to test the feature.  For example, you might be able to switch that test to an API test.  Or you might be able to verify that a record was added in the database instead of going through the UI.  Or you might be able to verify the data on a different page, instead of the one with the slow element.

Some might say that not testing the UI on that problematic page is dangerous.  But having a flaky test on this page is even more dangerous, because people will just ignore the test.  It would be better to stick with an automated test that works, and do an occasional manual test of that page.

Quality Automation is Our Responsibility

We've all been in situations where we have been dismissed as irrelevant or incompetent because of the reputation of a few bad testers.  Let's create a culture of excellence for testers everywhere by making sure that EVERY test we run is reliable and provides value!

Saturday, October 12, 2019

Why You Should Be Testing in Production

This is a true story; I'm keeping the details vague to protect those involved.  Once there was a software team that was implementing new functionality.  They tested the new functionality in their QA environment, and it worked just fine.  So they scheduled a deployment: first to the Staging environment, then to Production.  They didn't have any automated tests for the new feature, because it was tricky to automate.  And they didn't bother to do any manual tests in Staging or Production, reasoning that if it worked in the QA environment, it must work everywhere.

You can probably guess what happened next- they started getting calls from customers that the new feature didn't work.  They investigated and found that this was true.  Then they tried out the feature in the Staging environment and found that it didn't work there either.  As it turned out, the team had used hard-coded configuration strings that were only valid in the QA environment.  If they had simply done ONE test in the Staging or Production environment, they would have noticed that something was wrong.  Instead, it was left to the customers to notice the problem.


There are two main reasons why things that work in a QA environment don't work in a Production environment:

1) Configuration problems- This is what happened with the team described above.  Software is complicated, and there are often multiple servers and databases that need to talk to each other in order for the software to work properly.  Keeping software secure means that each part of the application needs to be protected by passwords or other configuration strings.  If any one of those strings is incorrect, the software won't work completely.

2) Deployment problems- In this age of microservices, deploying software usually means deploying several different APIs.  In a large organization, there may be different teams responsible for different APIs.  For example, when a new feature in API A needs the new code in API B to work properly, API B will need to be deployed first.  It's possible that Team B will forget to deploy API B or not even realize that it needs to be deployed.  In cases like this, Team A might assume that API B had been deployed, and they will go ahead and deploy API A.  Without testing, Team A will have no way of knowing that the new feature isn't working.

By running tests in every environment, you can quickly discover if you have configuration or deployment problems.  It's often not necessary to go through extensive testing of a new feature in Production if you've already tested it in QA, but it is vital that you do at least SOME testing to verify that it's working!  We never want to have our customers find problems before we do.

Saturday, October 5, 2019

Confused? Simplify!

As testers, we are often asked to test complex systems.  Gone are the days when testers were simply asked to fill out form fields and hit the Submit button; now we are testing data stores, cloud servers, messaging services, and much more.  When so many building blocks are used in our software, it can become easy to get overwhelmed and confused.  When this happens, it's best to simplify what we are testing until our situation becomes clear.


Here's an example that happened recently on my team: we were testing that push notifications of a specific type were working on an iPhone.  One of my teammates was triggering a push notification, but it wasn't appearing on the phone.  What could be wrong?  Maybe notifications were completely broken.  Maybe they were broken on the iPhone.  Maybe only this specific notification was broken.  Maybe only notifications of this type were broken.  In a situation where there are a lot of notifications to test and we are working on a deadline, this can become very confusing. 

So, we simplified by asking a series of questions and running a test for each one.  We started with:
Is this push notification working on an Android phone?
We triggered the same notification to go to an Android phone, and the push was delivered.  So we ruled out that the notification itself was broken.

Next, we asked:
Is this push notification working on any other iPhone?
We triggered the same notification to go to a different iPhone, and the push was delivered.  So we ruled out that the notification was broken on iOS devices.

Then we asked:
Is ANY notification working on this specific iPhone? 
We triggered some different notifications to go to the iPhone, and no pushes were delivered.  So we concluded that the problem was not with the notification, or with the push service; the problem was with the phone.

In taking a step back and asking three simple questions, we were able to quickly diagnose the problem.  Let's take a look at another example, using my hypothetical feature called the Superball Sorter, which sorts small and large colored balls among four children, as described in this post.

Let's imagine that we are testing a scenario where we are sorting the balls by both size and color.  We have the children set up with the following rules:
Amy gets only large balls
Bob gets only small purple balls and large red balls
Carol gets only small balls
Doug gets only green balls

When we run the sorter, a small purple ball is next in the sorting process, and it's Bob's turn to get a ball.  We are expecting that Bob is going to get the small purple ball because his sorting rules allow it, but he doesn't get the ball- it goes to Carol instead.  What could be wrong here?  Maybe Bob isn't getting any balls.  Maybe the purple ball isn't being sorted at all.  Maybe only the small balls aren't being sorted.  How can we figure out what is going on?

Our first question will be:
Can Bob get ANY sorted balls?  
We'll set up the sorter so Amy, Carol, and Doug only get large balls, and Bob only gets small balls.  We run the sorter, and Bob gets all the small balls.  So we know this isn't the problem.

Can anyone get the small purple ball?
Next, we'll set up the sorter so that Amy will only get small purple balls, and Bob, Carol, and Doug can get any ball at all.  We'll set up our list of balls so that the small purple ball is first on the list.  When we start our sorting process with Amy, she gets the small purple ball.  So now we know that the small purple ball isn't the problem.

Can Bob get the small purple ball in some other scenario?
We saw in our initial test that Bob wasn't getting the small purple ball, but can he EVER get that ball?  We'll set up our rules so that Amy will only get large balls, and Bob will get only small purple balls.  We won't give Carol and Doug any rules. Then we'll set up our list of balls so that the small purple ball is first on the list.  Amy won't get the small purple ball, because she only gets large balls, so the small purple ball is offered to Bob.  He gets the ball, so now we know that Bob can get the small purple ball in some scenarios.

At this point, we know that the problem is not the small purple ball.  What is different between the original scenario and the one we just ran?  One difference is that in the original scenario, all four children had a rule.  So let's ask this question:

Can Bob get the small purple ball when it's his only rule, and the other children all have rules?
We'll set up the rules like this:
Amy gets only large balls
Bob gets only small purple balls
Carol gets only small balls
Doug gets only green balls
We again set up our list of balls so that the small purple ball is first on the list.  The ball skips Amy, because it doesn't meet her rule, and Bob gets the ball.  So now we know that the problem is not that all the children have rules.  The next logical question is:

What happens when Bob has TWO rules?
We'll set up the rules like this:
Amy gets only large balls
Bob gets only small purple balls and small yellow balls
Carol gets only small balls
Doug gets only green balls

Our list of balls is the same, with the small purple ball first.  This time, the ball skips Amy AND Bob, and Carol gets the small purple ball.

AHA!  Now we have a good working theory: when Bob has two rules, the sorting is not working correctly.  We can test out this theory by giving another child two rules, while giving everyone else one rule.  Are the balls sorted correctly?  What about when a child has two rules that specify color only and not size?  Will the two rules work then?  By continuing to ask questions, we can pinpoint precisely what the bug is.
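If the Superball Sorter had real code behind it, this would also be a good time to capture the failing scenario in an automated check so the bug can't quietly return.  Here's a minimal sketch of the behavior we expect, written in JavaScript with invented object shapes; since the Superball Sorter is only a hypothetical feature, none of these names are real:

// A sketch of the expected behavior: the first child in turn order whose rules
// allow the ball should receive it.
const children = [
  { name: "Amy",   rules: [{ size: "large" }] },
  { name: "Bob",   rules: [{ size: "small", color: "purple" },
                           { size: "small", color: "yellow" }] },
  { name: "Carol", rules: [{ size: "small" }] },
  { name: "Doug",  rules: [{ color: "green" }] },
];

const ball = { size: "small", color: "purple" };

// A rule matches when every attribute it specifies matches the ball.
const ruleMatches = (rule, b) =>
  Object.entries(rule).every(([attribute, value]) => b[attribute] === value);

const receiver = children.find(child =>
  child.rules.some(rule => ruleMatches(rule, ball)));

console.log(receiver.name);  // expected: "Bob" -- the bug we found gives the ball to Carol instead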

By making your tests as simple as possible, you are able to narrow down the possibilities of where the bug is.  And by proceeding methodically and logically, you will be able to find that bug as quickly as possible, even in a very complex system.  



Thursday, September 26, 2019

Toggles, Revisited

A few years ago, I wrote a blog post detailing why I thought toggles were a bad idea.  It drew an analogy between toggles and the tribbles on Star Trek's U.S.S. Enterprise.  I think it's a fun read, so you may want to check it out; but my opinion has changed a bit since I wrote it.  In this post I'll explain why I think toggles can be helpful, and I'll propose some rules for their use.


About a year ago, my team was working on a new notification service that would send out emails and messages more efficiently than the current service.  When the new service was ready, we migrated one notification type to the new service to see how it would work.  We tested the notification extensively and we were sure that we had accounted for all scenarios, so we took the new service to Production.

A couple of weeks later, we discovered that there was an odd case that we hadn't tested.  If two users in the same company had the same id, the wrong user was getting the notification.  We had no idea that it was possible for two users in the same company to have the same id, so we hadn't thought to test this.

Fortunately, our new service was behind a toggle.  Since we certainly didn't want the wrong people to get notifications, we quickly toggled off the new service.  There was no impact to any other customers, because they were still getting their notifications; they were just being notified through the old service.  We were able to quickly fix the bug, get the fix into Production, and toggle the service back on.

If we hadn't had the toggle, the users with the same id would have continued to get the wrong notifications until we were able to fix the bug.  We would have had to rush to get a code patch into Production, and it's possible that we would have made mistakes along the way.  Because we had the toggle, we could take the time to make sure that the fix was good, and we could do all the regression testing we wanted.
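What the toggle looks like in code will vary from team to team, but conceptually it's just a switch between the old code path and the new one.  Here's a minimal sketch; the flag name and both services are made-up stand-ins rather than our actual implementation:

// A sketch only: toggling the flag off routes every notification back through the old service.
const legacyNotificationService = { send: (user, message) => console.log(`[legacy] ${user}: ${message}`) };
const newNotificationService    = { send: (user, message) => console.log(`[new] ${user}: ${message}`) };

const config = { useNewNotificationService: false };   // flipped off while we fixed the bug

function sendNotification(user, message) {
  return config.useNewNotificationService
    ? newNotificationService.send(user, message)
    : legacyNotificationService.send(user, message);
}

sendNotification("user@example.com", "Your weekly summary is ready");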

So, I've changed my mind about toggles.  I think they can be useful in situations where there's a significant risk that accompanies a change.  But if you are going to use toggles, please observe the following rules:

1. Toggles are NOT a substitute for high-quality testing.  Being able to toggle something off at the first sign of trouble does not mean that you can skip testing your new feature thoroughly.  Ideally you should have tested so well that you never need to turn your toggle off.

2. Make sure to test your feature with the toggle on AND with the toggle off.  You don't want to discover in the middle of dealing with a problem in Production that the toggle doesn't actually work!

3. When the feature has gone to Production and a certain amount of time has passed, remove the toggle so that the feature is on permanently.  Otherwise you could get into a situation where months from now someone inadvertently toggles the feature off.  And the fewer toggles you have in your application, the fewer combinations of toggles you need to test.
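To put some numbers on that last point: if your toggles are independent, five of them already allow 2^5 = 32 possible on/off combinations, and ten allow 1,024, so cleaning up old toggles pays off quickly.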

As with many things in software development, the best strategies are those that ensure the best possible outcome for our end users.  When they are used wisely, toggles can help mitigate any unexpected issues found in Production.

Saturday, September 21, 2019

What I Learned at POST/CON Part II: Assertions and Scripts Everywhere!

Last week, I wrote about how I had just returned from the annual Postman users' conference, and how I was so excited about everything I had learned there!  I'm still talking to anyone who will listen about all the great things Postman can do.  In this week's post, I'm going to show you how you can create variables, assertions, and headers for collections and folders.


Those of you who are familiar with Postman or who have read my previous blog posts on the subject know that a Postman collection is simply a group of requests.  Requests in a collection can also be grouped into folders.  Here's an example of a collection with more than one folder:


The name of the collection is "Contact List", and it has three folders in it: "Happy Path", "Required and Null Fields", and "Sad Path".  Each of the folders has requests in it, but currently only the "Happy Path" folder is open so you can view the requests.

If I hover over the Contact List collection name, I'll see a three-dot menu.  I can click on this menu icon and choose Edit.  When the Contact List editor window appears, it looks like this:


Notice that there are tabs for Authentication, Pre-request Scripts, Tests, and Variables.  If I want to add a collection-level variable, I can simply click on the Variables tab and enter my variable name and value.  We can do something similar to add an authorization token, a pre-request script, or a test.

We can do the same thing at the folder level.  There is also a three-dot menu to the right of the "Happy Path" folder, and if I hover over either of the two other folders I'll see the three-dot menu there as well.  If I click on the three-dot menu next to the "Happy Path" folder, and choose "Edit", I'll be presented with this window:


Looks familiar, doesn't it?  The only difference between this folder window and the collection window is that there is no place to add variables.  Here I can add authentication, pre-request scripts, and tests, just as I could at the collection level or request level.

Why is this so helpful?  

Putting your authentication, pre-request scripts, and tests at the collection or folder level is helpful because it keeps you from having to type the same things again and again!

Here are four examples of how you can use this feature:

1. Assert on response time at the collection level

You may have a service-level agreement (SLA) on your API that states that the consumers of your API should get a response within a certain number of milliseconds.  Even if you don't have an SLA, you probably want to be alerted if requests that used to take two milliseconds are now taking ten seconds to run.  But copying and pasting this assertion into every request is time-consuming!  Instead you can put the assertion at the collection level, like this:


Now this response-time assertion will run with every single request in your collection.
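In case a starting point is helpful, here's one way such a collection-level test might be written; the 500-millisecond threshold is just an example number, so use whatever your SLA calls for:

pm.test("Response time is within our SLA", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});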

2. Move your variables out of your environments and into your collections

You probably test your APIs in more than one environment, such as Dev, QA, Staging, and Production.  Each environment probably has a few variables that differ from one environment to the next, such as a URL value.  But there are probably many variables that stay the same everywhere, and those variables can be put at the collection level to avoid repetition.  Let's look at an example.  Let's say I have a set of variables for my QA environment:


And I have another set of variables for my Prod environment:


When you examine the two environments, you can see that the only variable that is different between the two is the URL.  So why not take the firstName, lastName, email, and phone variables and put them in the Collection variables instead?


Now you can remove all the repetitive variables from your environments, making them much easier to maintain.
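Conveniently, the requests themselves don't care where a variable is defined.  A request body like this hypothetical one for the Contact List API keeps using the familiar double-curly-brace syntax whether the variables live in the environment or in the collection:

{
    "firstName": "{{firstName}}",
    "lastName": "{{lastName}}",
    "email": "{{email}}",
    "phone": "{{phone}}"
}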

IMPORTANT NOTE!  When you move your variables from an environment to your collection, you will need to reference them differently in your assertions.  Instead of:

pm.expect(jsonData.firstName).to.eql(environment.firstName);

You will need to use:

pm.expect(jsonData.firstName).to.eql(pm.variables.get("firstName"));

3. Set authentication at the collection level

Much of what I test with APIs requires an authentication token.  It's a pain to add an authentication header to every request.  If the token you are using will be the same throughout your collection, you can set the authentication at the collection level instead.

Here's an example, using Mark Winteringham's awesome Restful-Booker API.  Some of the requests in this API require a token, using this format:
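For Restful-Booker, that means sending a Cookie header along these lines:

Cookie: token={{cookie}}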


Where {{cookie}} is the token that I've saved as a variable.  I can set the authentication at the collection level like this:


And that header will be sent with every request I make.  Note that there are many different types of authentication, so you'll need to modify your collection settings to use the right type for your API. 

4. Use a pre-request script to create a variable at the folder level

Suppose you have a folder with requests that will all require a randomly-generated GUID, and you want the GUID to be different for each request.  Rather than put instructions for generating a GUID in the pre-request script section of every single request, you can put the instructions at the folder level, like this:


This script will run before every request in the folder and will assign a randomly-generated GUID to the variable "id", ensuring that the id will be different for each request.
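If you're looking for a starting point, one possible version of such a script is a single line that resolves Postman's built-in {{$guid}} dynamic variable and stores the result in a local variable named "id":

// Runs before every request in the folder, giving "id" a fresh GUID each time.
pm.variables.set("id", pm.variables.replaceIn("{{$guid}}"));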

These examples are just some of the things you can do at the collection and folder levels.  I hope you will use these as a starting point to making your Postman tests more efficient and maintainable!

New Blog Location!

I've moved!  I've really enjoyed using Blogger for my blog, but it didn't integrate with my website in the way I wanted.  So I...