
Saturday, December 28, 2019

New Year's Resolutions for Software Testers

I love New Year's Day!  There's something exciting about getting a fresh start and imagining all that can be accomplished in the coming year.  The new year is an opportunity to think about how we can be better testers, how we can share our knowledge with others, and how we can continue to improve the public perception of the craft of software testing.

Image by M Harris from Pixabay


Here are some suggestions for resolutions you could make to improve your testing and the testing skills of those around you:

Speak Up
Because testers are sometimes made to feel like second-class citizens compared to software developers, they might feel timid about voicing their opinions.  But testers often know more about the product they test than the developers, who are usually working in one small area of the application.  This year, resolve to speak up if you see an issue with the product that you think would negatively impact the end user, even if it isn't a "bug".  Similarly, speak up if you find a bug that the team has dismissed as unimportant, and state why you think it should be fixed.  Advocate for your user!  Make sure that the product your customers are getting makes sense and is easy to use.

Pay Attention in Product Meetings
I'm sure my Product Owner would be sad to read this (sorry, Brian!), but I find product meetings boring.  I know that the small details of the user's experience are important, and I'm so glad that there are people who care about where a notification badge is displayed.  But listening to the discussion where that decision is being made is not very exciting to me.  However, I'm grateful to be included in these meetings, and every year I resolve to pay more attention to product decision-making than I did the year before, and to contribute when I have information that I think will be helpful.  Attending product meetings lets me hear why certain choices are made, and it also helps me think about what I'll need to test when a new feature becomes available.

Do Some Exploratory Testing
I suspect that most of us have some area of the application we test where we have a sneaking suspicion that things aren't working quite right.  Or there's a really old area of the application that no one knows how to use, because the people who initially built and tested it have since left the company.  But we are often too busy testing new features and writing test automation to take the time to really get to know the old and confusing areas of an application.  This year, resolve to set aside a few hours to do exploratory testing in those areas and share your findings with the team.  You may find some long-buried bugs or features that no one knows about!

Streamline Your Operation
Are there things your team does that could be done more efficiently?  Perhaps you have test automation that uses three different standards for naming variables, making the variable names difficult to remember.  Perhaps your method of processing work items isn't clear, so some team members are assigning testing tickets while others are leaving them for testers to pick up.  Even if each of these seems like a small problem, such inefficiencies can keep a team from moving as quickly as it could.  Resolve to notice these issues and suggest how they can be improved.

Learn Something New
This year, learn a new tool or a new language.  You don't have to become a master user; just learn enough to be able to say why you are using your current language or tool over the new one you've learned.  Or you could discover that the new language or tool suits your needs better, in which case you can improve your test automation.  Either way, learning something new makes you more employable the next time you are looking for a new position.

Share Your Knowledge With Your Team
Don't be a knowledge hoarder!  Your company and your software will be better when you share your knowledge about the product you are testing and the tools you are using to test it.  Sometimes misguided people hold on to knowledge thinking it will make them indispensable.  This will not serve to keep you employed.  In today's world, sharing information so that the whole team can be successful is the best way to be noticed and appreciated.  Resolve to hold a workshop for the other testers on your team about the test automation you are writing, or create documentation that shows everyone how to set up a tricky test configuration.  Your teammates will thank you!

Share Your Knowledge With the Wider World
If I had one wish for software testers for the year 2020, it would be that we would be seen by the wider tech community as the valuable craftspeople we are.  If you are an awesome software tester (and I'm guessing you are, because you are taking the time to read a blog about testing), share your skills with the world!  Write a blog post, help someone on Stack Overflow, or present at a local testing meetup.  You don't have to be the World's Most Authoritative Expert on whatever it is you are talking about, nor do you have to be the Best Speaker in the World.  Just share the information you have freely!  We will all benefit from your experience.

What New Year's resolutions do you have for your software testing?  Please share in the comments below!




Saturday, December 21, 2019

A Question of Time

Time is the one thing of which everyone gets the same amount.  Whether we are the CEO of a company or we are the intern, we all have 1440 minutes in a day.  I've often heard testers talk about how they don't have enough time to test, and that can certainly happen when deadlines are imposed without input from everyone on the team.  I've written a blog post about time management techniques for testers, but today I'm going to tackle the question:

Is it worth my time to automate this task?



Sometimes we are tempted to create a little tool for everything, just because we can.  I usually see this happen with developers more than testers, but I do see it with some testers who love to code.  However, writing code does not always save us time.  When considering whether to do a task manually or to write automation for it, ask yourself these four questions:

1. Will I need to do this task again?

Recently my team was migrating files from one system to another.  I ran the migration tool manually and checked by hand that the files had migrated properly.  I didn't write any automation for this, because I knew I would never need to test it again.

Contrast this with a tester from another team who is continually asked to check the UI on a page when his team makes updates.  He got really tired of doing this again and again, so he created a script that will take screenshots and compare the old and new versions of the page.  Now he can run the check with the push of a button.
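
A minimal sketch of that kind of visual check, assuming Selenium and Pillow are available; the two URLs are hypothetical stand-ins for the old and new versions of the page, and this is an illustration of the pattern rather than his actual script:

```python
# Capture both versions of a page and compare the screenshots pixel by pixel.
from selenium import webdriver
from PIL import Image, ImageChops

OLD_URL = "https://staging.example.com/page"  # hypothetical URLs
NEW_URL = "https://www.example.com/page"

def capture(url, path):
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        driver.save_screenshot(path)
    finally:
        driver.quit()

capture(OLD_URL, "old.png")
capture(NEW_URL, "new.png")

# ImageChops.difference yields an all-black image when the screenshots match;
# getbbox() returns None if no pixels differ.  (Both images must be the same
# size, which holds when the browser window size is unchanged between runs.)
diff = ImageChops.difference(Image.open("old.png"), Image.open("new.png"))
print("Pages match" if diff.getbbox() is None else "Pages differ")
```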

2. How much time does this task take me, and how much time will it take me to write the code?

Periodically my team's test data gets refreshed, and that means that the information we have for our test users sometimes gets changed.  When this happens, it takes about eight hours to manually update all the users.  It took me a few hours to create a SQL script that would update the users automatically, but it was totally worth my time, because now I save eight hours of work whenever the data is refreshed.
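
As a rough illustration (not the actual script), a data fix like that might look something like the sketch below.  The table and column names are invented, and sqlite3 stands in for whatever database the team really uses:

```python
# Restore known values for test users after a data refresh.
import sqlite3

TEST_USER_EMAILS = ["tester1@example.com", "tester2@example.com"]

conn = sqlite3.connect("testdata.db")
with conn:  # commits the transaction on success
    conn.executemany(
        """UPDATE users
           SET password_hash = 'known-test-hash',
               is_active = 1
           WHERE email = ?""",
        [(email,) for email in TEST_USER_EMAILS],
    )
conn.close()
```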

But there have been other times where I've needed to set up some data for testing, and a developer has offered to write a little script to do it for me.  Since I can usually set up the data faster than they can create the script, I decline the offer.
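
One way to make this judgment concrete is a quick break-even calculation: automation pays for itself once the time spent writing and maintaining it is less than the manual time it replaces.  A sketch, using the data-refresh numbers above ("a few hours" is assumed to be about three):

```python
def automation_pays_off(manual_hours, build_hours, runs, maintenance_hours_per_run=0):
    """True if automating beats doing the task by hand over `runs` repetitions."""
    manual_total = manual_hours * runs
    automated_total = build_hours + maintenance_hours_per_run * runs
    return automated_total < manual_total

# The data-refresh example: ~8 hours by hand, ~3 hours to script,
# and the refresh happens several times.
print(automation_pays_off(manual_hours=8, build_hours=3, runs=4))  # True

# A one-hour task that would take 8 hours to script and runs twice:
print(automation_pays_off(manual_hours=1, build_hours=8, runs=2))  # False
```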

3. How much time will it take to maintain the automation I'm writing?

At a previous job, I was testing email delivery, and I wanted to write an automated test that would show that the email had actually arrived in the Gmail test account.  The trouble was that there could be up to a ten-minute delay before the email appeared.  I spent a lot of time adjusting the automated test to wait longer, to retry, and so on, until I finally realized it was faster to take that assertion out of the test and manually check the email account from time to time.
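
For anyone facing a similar delay, the usual pattern is to poll with a timeout rather than use a fixed wait; even so, a ten-minute ceiling makes the test painfully slow, which is exactly the maintenance cost in question.  A sketch of the polling pattern, where `email_has_arrived` is a hypothetical check (e.g., an IMAP search of the test account):

```python
import time

def wait_for(condition, timeout_seconds=600, poll_interval=15):
    """Poll `condition` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

# Hypothetical usage; `email_has_arrived` must return True once the
# message shows up in the Gmail test account:
# assert wait_for(email_has_arrived), "Email did not arrive within 10 minutes"
```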

However, my team's automated API smoke tests take very little time to maintain, because the API endpoints change so infrequently that the tests rarely need to change.  The first API smoke test I set up took a few days; but once we had a working model it became very easy to set up tests for our other APIs.
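
A smoke test at that level can be very small, which is part of why it is cheap to maintain.  A sketch using pytest and the `requests` library; the base URL, endpoints, and expected fields are all hypothetical:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical

def test_health_endpoint_responds():
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200

def test_users_endpoint_returns_json():
    response = requests.get(f"{BASE_URL}/users", timeout=10)
    assert response.status_code == 200
    assert "users" in response.json()
```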

4. Does the tool I'm creating already exist?

At a previous company, the web team was porting over many customers' websites from one provider to another.  I was asked to create a tool that would crawl through the sites and locate all the pages, and then crawl through the migrated site to make sure all the pages had been ported over.  It was really fun to create this tool, and I learned a lot about coding in the process.  However, I discovered after I made the tool that web-crawling software already exists!
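
Something like the following is what off-the-shelf crawlers already do; this is a pared-down sketch using `requests` and `BeautifulSoup`, with hypothetical URLs, not a reconstruction of the tool I built:

```python
# Collect every same-site page reachable from a root URL, so the page
# inventories of two site versions can be compared.
from urllib.parse import urljoin, urlparse, urldefrag
import requests
from bs4 import BeautifulSoup

def crawl(root_url):
    seen, to_visit = set(), [root_url]
    host = urlparse(root_url).netloc
    while to_visit:
        url = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip pages that fail to load
        for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            target, _ = urldefrag(urljoin(url, link["href"]))
            if urlparse(target).netloc == host:
                to_visit.append(target)
    return seen

# Compare by path, since the two versions live on different hosts:
# old_paths = {urlparse(u).path for u in crawl("https://old.example.com")}
# new_paths = {urlparse(u).path for u in crawl("https://new.example.com")}
# missing = old_paths - new_paths
```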

But in that particular month I did have the time to create the tool, and the things I learned helped me with my other test automation.  So sometimes it may be worth "reinventing the wheel" if it will help you or your team.

The Bottom Line: Are you saving or wasting time?

All of these questions come down to one major consideration, and that is whether your task is saving or wasting time.  If you are a person who enjoys coding, you may be tempted to write a fun new script for every task you need to do; but this might not always save you time.  Similarly, if you don't enjoy coding, you might insist on doing repetitive tasks manually; but using a simple tool could save you a ton of time.  Always consider the time-saving result of your activities!

Saturday, December 7, 2019

Measuring Quality

The concept of measuring quality can be a hot-button topic for many software testers.  This is because metrics can be used poorly; we've all heard stories about testers who were evaluated based on how many bugs they found or how many automated tests they wrote.  These measures have absolutely no bearing on software quality. A person who finds a bug in three different browsers can either write up the bug once or write up a bug for each browser; having three JIRA tickets instead of one makes no difference in what the bug is!  Similarly, writing one hundred automated tests where only thirty are needed for adequate test coverage doesn't ensure quality and may actually slow down development time.

But measuring quality is important, and here's why: software testers are to software what the immune system is to the human body.  When a person's immune system is working well, they don't think about it at all.  They are exposed to all kinds of viruses and bacteria on a daily basis, and their immune system quietly neutralizes the threats.  It's only when a threat gets past the immune system that a person's health breaks down, and then they pay attention to it.  Software testers have the same problem: when they are doing their job really well, their work is invisible, because the software simply works.  Key decision-makers in the company may see the software and praise the developers who created it without thinking about all the testing that helped ensure the software was of high quality.



Measuring quality is a key way that we can demonstrate the value of our contributions.  But it's important to measure well; a metric such as "There were 100 customer support calls this month" means nothing, because we don't have a baseline to compare it to.  If we have monthly measurements of customer support calls, and they went from 300 calls in the first month, to 200 calls in the second month, to 100 calls in the third month, and daily usage statistics stayed the same, then it's logical to conclude that customers are having fewer problems with the software.
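
To make the baseline idea concrete, here is a small sketch that normalizes monthly support calls by usage before reading a trend; the call counts mirror the example above, and the flat usage numbers are assumed:

```python
# A falling calls-per-user rate suggests customers are hitting fewer problems.
monthly_calls = [300, 200, 100]            # from the example above
monthly_active_users = [5000, 5000, 5000]  # hypothetical, flat usage

rates = [calls / users for calls, users in zip(monthly_calls, monthly_active_users)]
improving = all(earlier > later for earlier, later in zip(rates, rates[1:]))
print(rates)      # [0.06, 0.04, 0.02]
print(improving)  # True
```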

With last week's post about the various facets of quality in mind, let's take a look at some ways we could measure quality.

Functionality:
How many bugs are found in production by customers?
A declining number could indicate that bugs are being caught by testers before going to production.
How many daily active users do we have? 
A rising number probably indicates that customers are happy with the software, and that new customers have joined the ranks of users.

Reliability:
What is our percentage of uptime?  
A rising number could show that the application has become more stable.
How many errors do we see in our logs?  
A declining number might show that the software operations are generally completing successfully.

Security:
How many issues were found by penetration tests and security scans?  
A declining number could show that the application is becoming more secure.

Performance:
What is our average response time?
A stable or declining number could show that the application is operating within accepted parameters.

Usability:
What are our customers saying about our product?
Metrics like survey responses or app store ratings can indicate how happy customers are with an application.
How many customer support calls are we getting?
Increased support calls from customers could indicate that it's not clear how to operate the software.

Compatibility:
How many support calls are we getting related to a specific browser, device, or operating system?
An increased number of support calls could indicate that the application is not working well in certain circumstances.
What browsers, devices, and operating systems are being used to run our software?
When looking at analytics related to app usage, a low participation rate by a certain device might indicate that users have had problems and stopped using the application.

Portability:
What percentage of customers upgraded to the new version of our software?
A higher upgrade percentage than in previous releases could indicate that users found the upgrade process easy.
How many support calls did we get related to the upgrade?
An increased number of support calls compared to the last upgrade could indicate that the upgrade process was problematic.

Maintainability:
How long does it take to deploy our software to production?
If it is taking longer to deploy software than it did during the last few releases, then the process needs to be evaluated.
How frequently can we deploy?
If it is possible to deploy more frequently than was possible six months ago, then the process is becoming more streamlined.

There's no one way to measure quality, and not every facet of quality can be measured with a metric.  But it's important for software testers to be able to use metrics to demonstrate how their work contributes to the health of their company's software, and the above examples are some ways to get started.  Just remember to think critically about what you are measuring, and establish good baselines before drawing any conclusions.

New Blog Location!

I've moved!  I've really enjoyed using Blogger for my blog, but it didn't integrate with my website in the way I wanted.  So I...