
Testing. That’s easy, right? Just load up the site that you have just finished building and make sure it looks OK before pushing it live to the client… Sound familiar? If so, you should read this post!

At Pragmatic, we’re constantly evolving and improving our testing process. We have a team dedicated to the task, and Lee, our QA Manager, has shared an overview of the testing process we undertake.

Test Plans

Having a quick play with a newly built website isn’t an effective way of testing it. Sure, you’ll probably notice some issues (especially if you look at it on tablet/mobile), but will you know if the design is correct? Is all the promised functionality delivered? Does the eCommerce journey work? My guess is no.

That’s why you need a test plan. Now, this doesn’t need to be pages and pages long, with step-by-step instructions for every single action. A test plan, for me, is a structured document that captures the high-level requirements to test. For example: “verify the error handling on the login form against page 6 of the UX specification.” Since page 6 of that document details the error handling, it gives me my EXPECTED results to test against, so I don’t need to write them out in my test plan.

By building a test plan against the documentation created during our earlier project phases, such as discovery and definition, I know I’ll have a document with tests (the steps), expected results (the project documentation) and actual results (my test execution results).
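To make the three-part structure concrete, here is a minimal sketch of how one test plan entry could be modelled. The field names and the example case are my own illustrative assumptions, not Pragmatic’s actual template:

```python
from dataclasses import dataclass

# A test plan entry ties together the three parts described above:
# the test (steps), the expected result (a pointer into the project
# documentation) and the actual result (filled in during execution).

@dataclass
class TestCase:
    case_id: str                     # hypothetical identifier for the test
    steps: str                       # the high-level requirement to test
    expected_ref: str                # where the expected results are documented
    actual_result: str = "NOT RUN"   # recorded when the test is executed

plan = [
    TestCase(
        case_id="LOGIN-01",
        steps="Verify the error handling on the login form",
        expected_ref="UX specification, page 6",
    ),
]

# During test execution, record the outcome against each case:
plan[0].actual_result = "PASS"
```

The point of keeping `expected_ref` as a pointer rather than copying the expected results in is exactly the one made above: the project documentation already holds them, so the plan stays short.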

Device Matrix

In the real world, there are many devices, screen sizes, resolutions, operating systems, browsers etc. So you’re never going to be able to test every combination (not if you actually want a site to ever go live anyway!). So how do you deal with that? I have a couple of options I like to use:

    1. If we’re rebuilding an existing site and there are analytics for the current site, I like to spend a bit of time reviewing the current site’s metrics. From this review, I can get a good idea of the top 3-5 most popular device/browser/operating system combinations visiting the site. This forms my “MUST TEST” list. If the client wants more and has the budget, then we’ll go further back, get the top 8-10 list and add those combinations to the device list.
    2. For a brand new site, I won’t have the historical analytics, so I’ll use the Device Matrix that we have built in house. This is a document that defines the most common devices we see using sites (iPhone 7/iOS 10.3.2/latest Chrome version, for example). The matrix is divided into three levels: Level 1 covers the highest-use, most popular devices, down to Level 3, which covers older devices and smaller, less-used screen sizes. BlackBerry, anyone…? As standard, we include Level 1 in all sites we build, but again, if a client wants a wider test effort, we can also add on Levels 2 and 3.

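A level-based device matrix like the one described in option 2 can be sketched as a simple data structure that you filter by the agreed test level. The device entries here are illustrative placeholders, not the real in-house list:

```python
# Each entry is one device/OS/browser combination, tagged with a level:
# 1 = highest-use devices, 3 = older devices and less common screen sizes.
DEVICE_MATRIX = [
    {"device": "iPhone 7",           "os": "iOS 10.3.2",    "browser": "Latest Chrome", "level": 1},
    {"device": "Samsung Galaxy S7",  "os": "Android 7.0",   "browser": "Latest Chrome", "level": 2},
    {"device": "BlackBerry Classic", "os": "BlackBerry 10", "browser": "BB Browser",    "level": 3},
]

def combinations_for(max_level):
    """Return every combination in scope up to and including the given level."""
    return [entry for entry in DEVICE_MATRIX if entry["level"] <= max_level]

# Level 1 is included as standard on every build...
must_test = combinations_for(1)

# ...and a client who wants a wider test effort opts into Levels 2 and 3.
full_scope = combinations_for(3)
```

Keeping the level on each entry, rather than maintaining three separate lists, means widening the test scope for a client is a one-line change.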
UAT Handover

Once we’ve completed our testing, I feel it’s vital that customers perform their own testing. They need to know that we’ve delivered what we said we would. It’s also beneficial for them to actually use the finished site and see whether the design and flow they asked for is really what they want, or whether they need some changes (now, before it goes live, is the best time). To make this process easier, we build a UAT handover pack. This document contains:

  • List of any known open issues (these will have already been discussed with the client by our superb project managers before UAT, but worth having in a list, so they aren’t raised again)
  • A guide on how to perform UAT (you may have never done this before)
  • The timelines agreed for this phase and what happens if any bugs/issues are found during UAT
  • A guide to our bug raising tool Usersnap (this allows quick and easy capture of any issues, with screenshots, directly into our issue tracking system)
  • UAT sign off process and next steps.

So, why do we test?

The process above allows us to deliver a high-quality product to our customers, which is the highest priority we have here at Pragmatic. And that, in my opinion, is why we test at Pragmatic.