
Saturday, August 25, 2018

Introduction to Performance Testing

Performance Testing, like many other phrases associated with software testing, can mean different things to different people.  Some use the term to include all types of tests that measure an application's behavior, including load and stress testing.  Others use the term to mean the general responsiveness of an application under ordinary conditions.  I will be using the latter definition in today's post.

Performance Testing measures how an application behaves when it is used.
This includes reliability:
  • Does the page load when a user navigates to it?  
  • Does the user get a response when they make a request?
and speed:
  • How fast does the page load?  
  • How fast does the user get a response to their request?
Depending on the size of the company you work for, you may have performance engineers or DevOps professionals who are already monitoring for metrics like these.  But if you work for a small company, or if you simply like to be thorough in your testing, it's worth learning how to capture some of this data to find out how well your application is behaving in the real world.  I know I have stopped using an application simply because the response time was too slow.  You don't want your end users to do that with your application!

Here are five different ways that you can monitor the health of your application:

Latency- this is the time that it takes for a request to reach a server and return a response.  The simplest way to test this is with a ping test.  You can run a ping test from the command line on your laptop, simply by entering the word ping, followed by a website's URL or an IP address; for example, ping google.com.  The command will keep sending requests and printing one result line per response until you stop it with CTRL-C.

Let's take a look at the response times.  Each ping result shows how long it took in milliseconds to reach that server and return a response.  At the bottom of the test results, ping prints the minimum response time, the average response time, the maximum response time, and the standard deviation of the response times.  In the test I ran, the slowest response time was 23.557 milliseconds.
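As a quick sketch of how that summary line is computed (the round-trip times below are made-up sample values, not output from a real test):

```python
import statistics

# Made-up round-trip times in milliseconds, as ping might report them
round_trips_ms = [20.123, 21.456, 23.557, 19.874, 22.010]

minimum = min(round_trips_ms)
average = statistics.mean(round_trips_ms)
maximum = max(round_trips_ms)
stddev = statistics.pstdev(round_trips_ms)  # ping reports a population-style stddev

# Mirrors ping's summary line: round-trip min/avg/max/stddev
print(f"{minimum:.3f}/{average:.3f}/{maximum:.3f}/{stddev:.3f} ms")
```

If the average is low but the maximum is much higher, that's a sign of intermittent latency spikes worth investigating.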

API Response Time- this is a really helpful measurement, because so many web and mobile applications are using APIs to request and post data.  Postman (my favorite tool for API testing) has response time measurements built right into the application.  When you run a request, you will see a Time entry right next to the Status of the response.

In the test I ran, the GET request took 130 milliseconds to return a response.  You can include an assertion in your Postman tests which will verify that the response was returned in less than a selected time, such as 200 milliseconds.  The assertion looks like this:

pm.test("Response time is less than 200ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(200);
});

(If you would like to learn more about API testing with Postman, I have several blog posts on this topic.)

Another great tool for testing API response time is Runscope.  It is easy to use, and while it does not offer a free version, it has a very helpful feature: you can make API requests from locations all over the world, and verify that your response times are good no matter where your users are.  You can easily set up automated checks to run every hour or every day to make sure that your API is up and running.  Runscope also offers real-time monitoring of your APIs, so if your application suddenly starts returning 500 errors for some users, you will be alerted.

Web Response Time- Even if your API is responding beautifully, you'll also want to make sure that your web page is loading well.  There's nothing more frustrating to a user than sitting around waiting for a page to load!  There are a number of free tools that you can use to measure how long it takes your application's pages to render.  With Pingdom, you can enter your website's URL and it will crawl through your application, measuring load times.  I tested my own website's URL and requested that the test be run from Melbourne, Australia, to see how load times looked on the other side of the world.

Pingdom also provided suggestions for improving my site's performance, such as adding browser caching and minimizing redirects.  Paid customers can also set up monitoring and alerting, so you can be notified if your page loading times slow down.  

Mobile Application Monitoring- If you have a native mobile application, you'll want to make sure that it's responding correctly and quickly.  Crashlytics is free software that can be added to your app to provide statistics about why your app crashed.  New Relic offers paid mobile monitoring for your app, allowing you to see data about which mobile devices are working well with your app and which might be having problems.

Application Performance Monitoring (APM) Tools- For more advanced monitoring of your application, you can use an APM tool such as Elasticsearch or SharePath.  These tools track every single transaction your application processes and can provide insights on CPU usage, server response times, and request errors.  

Whether you work for a big company with a web application that has millions of users, or a small startup with one little mobile app, performance testing is important.  It may mean the difference between happy customers who keep using your application, and disaffected users who uninstall it.

Friday, August 17, 2018

Mobile Testing Part IV: An Introduction to Mobile Security Testing

Mobile security testing can be problematic for a software tester, because it combines the challenges of mobile with the challenges of security testing.  Not knowing much about mobile security testing, I did some research this week, and here are some of the difficulties I discovered:

  • Mobile devices are designed to be more secure than traditional web applications, because they are personal to the user.  Because of this, it's harder to look "under the hood" to see how an application works.  
  • Because of the above difficulty, mobile security testing often requires tools that the average tester might not have handy, such as Xcode Tools or Android Studio.  Security testing on a physical device usually means using a rooted or jailbroken phone.  (A rooted or jailbroken phone is one that is altered to have admin access or user restrictions removed.  An Android phone can be rooted; an iPhone can be jailbroken. You will NOT want to do this with your personal device.)
  • It's difficult to find instructions for mobile security testing when you are a beginner; most documentation assumes that you are already comfortable with advanced security testing concepts or developing mobile applications.

I'm hoping that this post will serve as a gentle introduction for testers who are not already security testing experts or mobile app developers.  Let's first take a look at the differences between web application security testing and mobile app security testing: 
  • Native apps are usually built with the mobile OS's development kit, which has built-in features for things like input validation, so SQL injection and cross-site scripting vulnerabilities are less likely.
  • Native apps often make use of the data storage capabilities on the device, whereas a web application will store everything on the application's server.
  • Native apps will be more likely than web applications to use biometric data, such as a fingerprint, for authentication.

However, there are still a number of vulnerabilities that you can look for in a mobile app that are similar to the types of security tests you would run on a web application.  Here are some examples:

  • For apps that require a username and password to log in, you can check to make sure that a login failure doesn't give away information.  For example, you don't want your app to return the message "invalid password", because that lets an intruder know that they have a correct username.
  • You can use a tool such as Postman to test the API calls that the mobile app will be using, and verify that requests are made over https rather than http.
  • You can test for validation errors. For example, if a text field in the UI accepts a string that is longer than what the database will accept, this could be exploited by a malicious user for a buffer overflow attack.
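To make that last check concrete, here is a minimal sketch (the limits and helper names are hypothetical, not from any real app): the UI's length check should never be looser than the limit the back end enforces.

```python
DB_COLUMN_MAX = 255   # assumed database column limit
UI_FIELD_MAX = 255    # assumed UI text-field limit

def ui_accepts(value: str) -> bool:
    """Stand-in for the client-side validation on the text field."""
    return len(value) <= UI_FIELD_MAX

def db_accepts(value: str) -> bool:
    """Stand-in for the database column constraint."""
    return len(value) <= DB_COLUMN_MAX

# Boundary-value test: a string one character past the back-end limit
# must also be rejected by the UI, or oversized input can slip through.
too_long = "A" * (DB_COLUMN_MAX + 1)
assert not ui_accepts(too_long), "UI accepts input the database would reject"
```

The interesting test value is always the boundary: exactly at the limit, and one character past it.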
If you are ready for a bigger testing challenge, here are a couple of mobile security testing activities you could try:
  • You can access your app's local data storage and verify that it is encrypted.  With Android, you can do this with a rooted phone or an Android emulator and Android's ADB (Android Debug Bridge) command-line tool.  With iPhone, you can do this with Xcode Tools and a jailbroken phone or an iPhone simulator.  
  • You can use a security testing tool such as Burp Suite to intercept and examine requests made by the mobile app.  On Android, unless you have an older device running the Lollipop OS or earlier, you'll need to do this with an emulator.  On iPhone, you can do this with a physical device or a simulator.  In both instances, you'll need to install a CA certificate on the device that allows requests to be intercepted.  This CA certificate can be generated from Burp Suite itself.  
These two testing tasks can prepare you to be a mobile security testing champion!  If you are ready to learn even more, I recommend that you check out the online book OWASP Mobile Security Testing Guide.  This is the definitive guide to making sure that your application is free of the most common security vulnerabilities.  Happy hacking!

Saturday, August 11, 2018

Mobile Testing Part III: Seven Automated Mobile Testing Tips (and Five Great Tools)

Walk into any mobile carrier store and you will see a wide range of mobile devices for sale.  Of course you want to make sure that your application works well on all of those devices, in addition to the older devices that some users have.  But running even the simplest of manual tests on a phone or tablet takes time.  Multiply that time by the number of devices you want to support, and you've got a huge testing burden!

This is where automated mobile testing comes in.  We are fortunate to be testing at a time where there are a whole range of products and services to help us automate our mobile tests.  Later in this article, I will discuss five of them.  But first, let's take a look at seven tips to help you be successful with mobile automated testing.

Tip 1: Don't test things on mobile that could be more easily tested elsewhere

Mobile automation is not the place to test your back-end services.  It's also not the place to test the general logic of the application, unless your application is mobile only.  Mobile testing should be used for verifying that elements appear correctly on the device and function correctly when used.  For example, let's say you have a sign-up form in your application.  In your mobile testing, you'll want to make sure that the form renders correctly, that all fields can be filled in, that error messages display appropriately, and that the Save button submits the form when clicked on.  But you don't want to test that the error message has the correct text, or that the fields have all saved correctly.  You can save those tests for standard Web browser or API automation.

Tip 2: Decide whether you want to run your tests on real devices or emulators

The advantage of running your tests on real devices is that the devices will behave like the devices your users own, with the possibility of having a low battery, connectivity issues, or other applications running.  But because of this, it's more likely that your tests will fail because a phone in the device farm froze up or was being used by another tester.  Annoyances like these can be avoided by using emulators, but emulators can't completely mimic the real user experience.  It's up to you to decide which choice is more appropriate for your application.  

Tip 3: Test only one thing at a time

Mobile tests can be flaky, due to the issues found in real devices discussed above and other issues such as the variations found in different phones and tablets.  You may find yourself spending a fair amount of time taking a look at your failed tests and diagnosing why they failed.  Because of this, it's a good strategy to keep your tests small.  For example, if you were testing a login screen, you could have one test for a successful login and a second test for an unsuccessful login, instead of putting both scenarios into the same test.
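In pytest terms, that split might look like the following sketch (the login helper is a hypothetical stand-in for driving the app's login screen):

```python
def login(username: str, password: str) -> bool:
    """Hypothetical helper that drives the login screen and reports success."""
    return username == "demo" and password == "secret"  # stand-in logic

# Two small, separate tests instead of one combined scenario:
def test_successful_login():
    assert login("demo", "secret")

def test_unsuccessful_login():
    assert not login("demo", "wrong-password")
```

When one of these fails on a flaky device, the test name tells you immediately which scenario to re-check.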

Tip 4: Be prepared to re-run tests

As mentioned in Tip 3, you will probably encounter some flakiness in your mobile tests.  A test can fail simply because the service that is hosting the emulator loses connectivity for a moment.  Because of this, you may want to set up a system where your tests run once and then re-run the failed tests automatically.  You can then set up an alert that will notify you only if a test has failed twice.
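A minimal sketch of that run-then-retry pattern (the test command and alert hook are placeholders for whatever your pipeline actually uses):

```python
import subprocess
import sys

def run_with_retry(cmd, alert, retries=1):
    """Run a test command, re-running it on failure; call the alert
    hook only if every attempt fails."""
    for _ in range(retries + 1):
        if subprocess.run(cmd).returncode == 0:
            return True  # passed on this attempt, so no alert
    alert(f"Failed {retries + 1} times: {cmd}")
    return False

# A deliberately failing "test suite" triggers the alert after two runs;
# a flaky pass on either attempt would suppress it.
failing = [sys.executable, "-c", "raise SystemExit(1)"]
run_with_retry(failing, alert=print)
```

In a real pipeline the alert hook would post to email or chat, and the command would invoke your mobile test runner.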

Tip 5: Don't feel like you have to test every device in existence

As testers, we love to be thorough.  We love to come up with every possible permutation in testing and run through them all.  But in the mobile space, this can quickly drive you crazy!  The more devices you are running your automated tests on, the more failures you will have.  The more failures you have, the more time you have to spend diagnosing those issues.  This is time taken away from new feature testing or exploratory testing.  Do some research on which devices your users own and come up with a list of devices to test with that covers most, but not all, of those devices.  

Tip 6: Take screenshots

Nothing is more frustrating than seeing that a test failed and not being able to figure out why.  Screenshots can help you determine if you were on the correct screen during a test step and if all the elements are visible.  Some mobile testing companies take a screenshot of every test step as the test progresses.  Others automatically take a screenshot of the last view before a test fails.  You can also code your test to take screenshots of specific test steps.  

Tip 7: Use visual validation

Visual validation is essential in mobile testing.  Many of the bugs you will encounter will be elements not rendering correctly on the screen.  You can test for the presence of an element, but unless you have a way to compare a screenshot with one you have on file, you won't really be verifying that your elements are visible to the user.  Applitools makes an excellent product for visual comparison.  It integrates with common test software such as Selenium, Appium, and Protractor.  With Applitools, you can build visual verification right into your tests and save a collection of screenshots from every device you test with to use for image comparison. 

Now let's discuss some good tools for test automation.  I've already mentioned Applitools; below are four other tools that are great for mobile test automation.  The mobile landscape is filled with products for automated testing, both open-source and paid.  In this post, I am discussing only the products that I have used; there are many other great products out there. 

Visual Studio App Center:  A Microsoft product that allows you to test Android and iOS applications on real devices.  A screenshot is taken of every test step, which makes it easy to figure out where a test started to go wrong. 

Appium:  An open-source product that integrates with Selenium and provides the capability to test on device emulators (or real devices if you integrate with a device farm). 

Sauce Labs:  Sauce Labs is great for testing on both mobile devices and web browsers on all kinds of operating systems.  You can run tests on real devices or emulators, and you can run tests in parallel.  They integrate well with Selenium and Appium.  A screenshot is taken whenever a test fails, and you can even watch a video of your test execution.

Perfecto: Uses real devices and integrates with Visual Studio, Appium, and Selenium.  They can simulate real-world user conditions such as network availability and location.

Whatever automated test tools you choose to use, remember the tips above, and you will ensure that you are comprehensively testing your application on mobile without spending a lot of time debugging. 

I initially said this series on Mobile Testing was going to be three blog posts long.  On reflection, I've realized that we need a fourth post: Mobile Security Testing.  This is a topic I know very little about.  So I'll be doing some research, and you can expect Mobile Testing Part IV from me next week!

Saturday, August 4, 2018

Mobile Testing Part II: Manual Mobile Testing

I am a firm believer that no matter how great virtual devices and automated tests are, you should always do some mobile testing with a physical device in your hand.  But none of us has the resources to acquire every possible mobile device with every possible carrier.  So today's post will discuss how to assemble a mobile device portfolio that meets your minimum testing criteria, and how to get your mobile testing done on other physical devices.  We'll also talk about the manual tests that should be part of every mobile test plan.

Every company is different, and will have a different budget available for acquiring mobile devices.  Here is an example of how I would decide on which phones to buy if I were allowed to purchase no more than ten.  I am based in the United States, so I would be thinking about US carriers.  I would want to make sure that I had at least one AT&T, Verizon, T-Mobile, and Sprint device in my portfolio.  I would also want to have a wifi-only device.  I would want to make sure that I had at least one iOS device and at least one Android device.  For OS versions, I'd want to have both the latest OS version and the next-to-latest OS version for each operating system.  For Android devices, I'd want to have Samsung, LG, and Motorola represented, because these are the most popular Android devices in the US.  Finally, I would want to make sure that I had at least one tablet for each operating system.

With those stipulations in mind, I would create a portfolio of nine devices.

In this portfolio, we have three iOS devices and six Android devices.  All four carriers I wanted are represented, and we have one wifi only device.  We have three tablets and six smartphones.  We have the latest iOS and Android versions, and the next-to-latest versions.  And we also have a variety of screen sizes.  It's easy to modify a device plan like this if for some reason devices aren't available.  For example, if I went to purchase these devices and found that Sprint wasn't carrying the iPhone X, I could easily switch my plan around so I could get an iPhone X from AT&T and an iPhone 8 Plus from Sprint instead. 

The benefit of having a physical device portfolio is that you can add to it every year as your budget allows. Each year you can purchase a new set of devices with the latest OS version, and you can keep your old devices on the older OS versions, thus expanding the range of OS versions you can test with.

Once you have a device portfolio, you'll need to make sure you are building good mobile tests into your test plans.  You can add the following tests:

  • Test the application in the mobile browser, in addition to testing the native app
  • Test in portrait and landscape modes, switching back and forth between the two
  • Change from using the network to using wifi, to using no service, and back again
  • Test any in-app links and social media features
  • Set the phone or device timer to go off during your testing
  • Set text messages or low battery warnings to come in during your testing

What about testing on the dozens of devices that you don't have?  This is where device farms come in.  A device farm is made of many physical devices housed in one location that you can access through the Web.  From your computer, you can access the device controls such as the Home or Back buttons, swipe left and right on the screen, and click on the controls in your application.  You may even be able to do things like rotate the device and receive a phone call.  With a device farm, you can expand the range of devices you are testing on.  Good ideas for expanding your test plan would be adding devices with older OS versions, and adding devices from manufacturers that you don't have in your portfolio.  In my case above, this might mean adding in HTC and Huawei devices.  

For manual device farm testing, I have had good experiences with Perfecto.  Other popular device farms with manual testing capabilities are AWS, Sauce Labs, and Browserstack.

You may be saying to yourself at this point, "You've got some great devices and carriers for US testing, but my users come from all over the world.  How can I make sure that they are all having a good user experience with my app?"  This is where crowd-testing comes in!  There are testing companies that specialize in using testers from many countries, who are using devices with their local carriers.  They can test your application in their own time zone on a device in their own language.  Popular global test companies include Testlio and Global App Testing.  Another good resource is uTest, which matches up independent testers with companies who are looking for testing on specific devices in specific countries.  

With a mobile device portfolio, a mobile test plan, a device farm, and a crowd-testing service in place, you will be able to execute a comprehensive suite of tests on your application and ensure a great user experience worldwide.  But all of this manual testing takes a lot of time!  Next week, we'll discuss how to save time and maximize your mobile test coverage through automated mobile testing.  
