Quality assurance is essential to meeting marketing goals when working on digital products. This is particularly true when a website moves from the exciting redesign process to ongoing support and feature development. In this article, we’ll discuss what QA looks like in a redesign process and what it looks like afterward.
Robust testing is critical to maintaining positive brand perception and meeting marketing objectives. Errors raise questions, and it’s our job to minimize the number of questions and concerns a prospect has throughout their buying journey.
At COLAB, we’ve invested significantly in creating QA processes that support marketers in redesigns and ongoing feature development.
Understanding the Testing and QA Process
Several QA phases occur at different times in a website project. Understanding each part of the process is helpful because the phases are additive: each builds on the one before it.
- Unit testing
- Integration testing
- System testing
- Acceptance testing
Unit Testing

Unit tests occur at the component level. They test the smallest amount of reviewable work, ensuring each small part meets its requirements. They are the most important tests in the QA process: a component cannot pass the rest of the testing process if it does not pass its unit tests.
Here are some examples of units:
- Page hero
- Page content
- Newsletter CTA
- Related articles
Example unit tests for the site navigation:
- Clicking a navigation item should take you to the correct page
- Verify hover state
- Navigation items show a caret when sub-navigation is present
- Subnavigation should appear when parent navigation is hovered
- Navigation turns into a hamburger menu (three bars) when the viewport is <1024px
Integration Testing

While a single component may work in isolation, it may not work well when paired with other units. Integration testing verifies the interactions between multiple components, which lets you test specific user tasks or activities. Integration tests occur throughout the website redesign process as parts of pages are completed.
Examples of things we would test at this level:
- Service Page
- Blog Post
- Contact Page
Example integration test:
- Contact Page
- When the form is submitted, a verification message should show at the top of the page just below the page title
- The global call to action should not appear on this page
- Upon exit intent on this page, the chatbot should activate the “Exit Engage” workflow
- When on mobile, the address should show as a clickable link above the contact form
System Testing

Once we have completed developing the required components and assembled them into their integrated configurations, we can test the entire website. This testing occurs toward the end of a website redesign.
Examples of system tests:
- Sitewide visual inspection
- Sitewide accessibility testing
- Core Web Vitals testing
- Load testing
Acceptance Testing
Acceptance testing is the most exciting part of the testing process: this is where we put the finished (or nearly finished) product in front of you. While you will have seen portions of the site and provided feedback throughout the process, this is typically the first time you can experience the website as a whole. More on this later!
Examples of acceptance testing:
- Internal testing
- Closed beta testing
- Open beta testing
Preparing for Testing
To run the testing process effectively, you must prepare in advance. For a more sophisticated QA process to work, the QA team follows a few steps:
- Identify which tests are necessary
- Create testing plans
- Define test objectives
- Create test scenarios
- Track testing data
While the entire product team contributes to each of these, a dedicated QA person would ideally own all testing activities and deliverables, ensuring this vital preparation happens.
Types of Testing
We’ve mentioned different types of testing throughout the process. Below is more detail on each type of testing that occurs throughout the project lifecycle.
Design Review

Designers review the work to verify that the design system holds up with live layouts and content. The design system should withstand numerous permutations of components. However, a system is only as good as its execution: the developed work should be well considered and should respect the designer’s intent.
Functional Testing

Functional testing ensures that the developed work satisfies the documented requirements. These tests mimic realistic use cases we might see when the site is live, answering the question: “Does the site work as expected?”
Cross-Browser and Cross-Device Testing

We aim for a consistent experience across browsers and devices. Technically this is functional testing, but we call it out separately because it is a constant issue on the web. We pay special attention to frontend-specific functionality, as these features are the most likely to behave differently between browsers and devices. We typically test the latest or most widely adopted versions of popular browsers (Chrome, Edge, Firefox, Safari) on desktop and mobile devices.
Accessibility Testing

Our accessibility testing checks the developed site against WCAG 2.1, Level AA standards, which make the site usable for people with disabilities and ensure they have a positive user experience. We use a combination of automated scanning and manual testing to find and remediate issues.
Performance Testing

Our performance testing leans on Google’s Core Web Vitals measurements to identify issues and opportunities for improvement. Scanning every page on the site is unrealistic, so we typically select several key pages: highly trafficked brand marketing pages like the Homepage or About Us page, plus likely visitor entrances like content marketing pages (blog articles) and advertising landing pages. We target a score of 90 on these pages, as this represents an excellent user experience.
Load Testing

Just before we launch a site, we run load testing. We use traffic metrics to estimate how many visitors will hit the website at a time, then simulate that anticipated traffic to surface application code and performance issues. Our measurements approximate how many simultaneous connections the site can handle before the user experience degrades.
Usability Testing

We conduct usability testing for clients who want to test assumptions and hypotheses determined in a website redesign. This testing helps evaluate the website’s user-friendliness and overall user experience. The testing method depends on the theory we’re testing. For example, a straightforward method like tree testing lets us measure how easy the navigation is to use.
Security Testing

We use a combination of SaaS and open-source products to test the security of our work. This testing is even more significant for web applications, where much more of the code base has been written from scratch. Our toolset helps us understand potential vulnerabilities and risks to website security.
Executing Tests

As mentioned previously, preparation is critical to executing testing. Once your test plans are complete, testing can begin.
Developers help get it started by setting up an environment for testing. This step is essential because QA tests both the front and back end, which may involve manual or automated input of test data, potentially polluting clean data prepped for production. Test data is frequently used in web application testing, as many of these systems will be responsible for processing data from another source.
From here, QA analysts can execute their testing plans within their test management tool, reporting feedback in a bug tracker to stage issues for remediation.
Reporting and Tracking Issues
Reporting and tracking website bugs should be streamlined to speed both reporting and resolution. We’ve done this many different ways, but the best approach we’ve found uses a tracker that provides an interface directly on the website we’re working on. This tool exists as a “report a bug” or “feedback” button that shows on every page of the site we’re testing (front end and CMS).
When the bug tracker is activated, it takes a screenshot of the page the user was viewing and lets them annotate it or add written feedback. It captures the user’s information and submits the bug report directly into our project management software, automatically including details like browser, screen resolution, browser errors, and even a screen recording from a few seconds before submission.
This documentation lets us understand how to replicate the issue and determine a solution. Once we end the testing period, we can prioritize the most impactful bugs and devise a remediation plan.
User Acceptance Testing (UAT)
We will move into UAT once our internal testing is completed. As mentioned previously, UAT is when you experience the work as a whole. By this time, the website is almost ready to be deployed. We help our clients understand this process and how to submit feedback using the same bug-tracking software we use internally (it’s very intuitive). This step helps us ensure that you are satisfied with the work and that we are indeed catching everything that would prevent release to the public.
There are a few ways you can conduct UAT:
- Internal Testing: The test link is distributed to marketing team members. Each user identifies issues within their scope of responsibility via embedded feedback tools.
- Closed Beta Testing: The test link is distributed to brand champions requesting their impressions on the new experience.
- Public/Open Beta Testing: The site is deployed to a percentage of users, allowing the team to study actual end-user behavior and identify potential risks with a release to all users.
Regression Testing

After launching a site to the public, we typically move into a continuous improvement cycle: correcting bugs, learning from real users, enhancing existing features, and adding new ones. These improvements are great for end users but can pose a risk if appropriate testing measures are not in place, which is where regression testing comes in.
Due to the interconnectedness of the code that powers a website, an update to code in one location may break a feature in another area. These errors are called “regressions.” However, manually testing every part of the site on each deployment is time-intensive and rarely cost-effective, so we use tools to automate regression testing and cover more of the site.
Visual Regression Testing
A common and historically difficult issue to catch is a minor visual change. These small changes add up over time, slowly degrading the user experience and the integrity of the site design. To catch them quickly and easily, we use a visual regression tool that compares what the website looks like now with what it will look like after deployment, across many pages and on both desktop and mobile viewports. If there are any visual discrepancies, the offending pages are flagged for manual review. This testing lets us catch unintended changes to the site’s visuals.
Automated Behavioral Testing
To ensure that no mission-critical functionality stops working, we document everyday user interactions and recreate them programmatically in our automated testing. Submitting a lead form, for example, is a key feature on a marketing site. Through automation, we can simulate a user entering information into each field and clicking the submit button. If the submission succeeds (it does what we expect), the test passes; if it fails, our systems tell us we have a regression to address. Automation can run in sync with deployments or on a regular schedule.
The role of quality assurance and testing on a website is often underappreciated. As we have documented in this article, it can play a vital role in a website redesign, a web application, or ongoing website support.
We hope this helps to provide a snapshot of quality assurance at COLAB and the testing processes we use in our website development work. We see good QA as a must-have for delivering quality work and maintaining that quality over time.
If you have concerns about your existing QA process or want to discuss the nuances of what we’ve described, we’re happy to! Feel free to give us a call or send us an email.