
QA Activities: What Should You Keep in Mind?

  • Coralogix
  • December 21, 2021

When your development team is under pressure to keep releasing new functionality in order to stay ahead of the competition, the time spent on quality assurance (QA) activities can feel like an overhead you could do without. After all, with automated CI/CD pipelines enabling multiple deployments per day, you can get a fix out pretty quickly if something does go wrong – so why invest the time in testing before release?

The reality is that scrimping on software testing is a false economy. By not taking steps to assure the quality of your code before you release it, you’re leaving it up to your users to find and report bugs. At best this will create the impression that your product isn’t reliable and will damage your reputation. 

At worst, data losses, security breaches, or regulatory non-compliance could result in serious financial consequences for your business. And if that wasn’t sufficient reason to invest in QA activities, the sheer complexity of most software means that – unless you test your changes – there’s a good chance that for each bug you fix you’ll introduce at least one more, leaving you with less time to focus on delivering new functionality, rather than more.

So how can you make software testing both efficient and effective? The key is to build quality from the start. Rather than leaving your QA activities until the product is complete and ready to release, continuously testing changes as you go means you can fix bugs as soon as they are introduced. That will save you time in the long run, as you avoid building more functionality on top of bad code only to unpick it later when you address the root cause.

However, testing more frequently is not a realistic proposition if you’re doing all your testing manually. Not only is manual testing time-consuming – for many types of testing, it’s also not a good use of your or your colleagues’ time. That’s why high-performing software development teams invest in automated testing as part of their CI/CD pipeline, combined with manual testing in situations where it adds the most value.

Automatable QA activities

Automating your tests means you can get feedback on your changes just by running a script, which makes it feasible to test your changes far more regularly. Just as with manual testing, for automated testing to be effective, you need to cover multiple types of tests, from fine-grained unit tests to high-level functionality checks and UI tests. Let’s explore the types of testing that lend themselves well to automation, starting with the simplest.

Static analysis

Static analysis is one of the easiest automated QA activities to introduce. Static analysis tools check your source code for known errors and vulnerabilities, and you can run them straight from your IDE. While they can’t catch every error, they give you a basic level of assurance.
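As a minimal illustration, here is the kind of defect a type checker such as mypy flags before the code ever runs (the function and module names are invented for this sketch):

```python
# pricing.py - illustrative only; a type checker such as mypy flags the bug below
def apply_discount(price: float, rate: float) -> float:
    if rate > 1:
        # mypy reports: Incompatible return value type (got "str", expected "float")
        return "invalid rate"
    return price * (1 - rate)
```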

Unit tests

Unit tests usually form the lowest level of testing as they exercise the smallest units of functionality. Developers typically write unit tests as they code and many teams use code coverage metrics as an indication of the health of their codebase. Because unit tests are very small, they are quick to run and it’s common for developers to run the test suite locally before committing their changes.
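As a sketch, a pytest-style unit test for the hypothetical apply_discount function above (assuming the fixed version raises ValueError for invalid rates) might look like this:

```python
# test_pricing.py - a minimal pytest sketch; apply_discount is a hypothetical function
import pytest

from pricing import apply_discount


def test_discount_is_applied():
    assert apply_discount(100.0, 0.2) == pytest.approx(80.0)


def test_rate_above_one_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 1.5)
```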

Integration and contract tests

Integration tests verify the interactions between different pieces of functionality within the system, while contract tests provide validation for external dependencies. While these tests can be executed manually as a form of white-box testing, scripting them ensures they run consistently each time, meaning you can run them as often as you want while also getting faster feedback.
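A consumer-side contract test, for example, can simply assert that a dependency still returns the fields your code relies on. The sketch below assumes a hypothetical staging endpoint:

```python
# test_user_contract.py - a sketch of a consumer-side contract check (URL is hypothetical)
import requests

REQUIRED_FIELDS = {"id", "email", "created_at"}


def test_user_service_still_honours_contract():
    resp = requests.get("https://staging.example.com/api/users/1", timeout=5)
    assert resp.status_code == 200
    # the consumer only depends on these fields; extra fields are fine
    assert REQUIRED_FIELDS.issubset(resp.json().keys())
```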

Functional and end-to-end testing

Functional or end-to-end tests exercise your whole application by emulating user workflows. These types of tests tend to be more complex to write and maintain, particularly if they are driven through the UI, as they can be affected by any changes to the system.
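For illustration, a UI-driven end-to-end test written with a browser automation library such as Playwright might look like the sketch below (URLs, selectors, and credentials are placeholders):

```python
# test_login_e2e.py - a Playwright sketch of one user journey (all values are placeholders)
from playwright.sync_api import sync_playwright


def test_user_can_log_in_and_see_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://staging.example.com/login")
        page.fill("#email", "qa@example.com")
        page.fill("#password", "test-password")
        page.click("button[type=submit]")
        # the journey succeeds if the dashboard becomes visible
        page.wait_for_selector("#dashboard")
        browser.close()
```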

Functional tests can be run both as smoke tests and as regression tests. Smoke testing refers to a subset of tests that are run early on in the CI/CD process to confirm that the core functionality still works as expected. If any of those tests fail, there is little point in checking the finer details as issues will have to be fixed and a new build created before any changes can be released.

By contrast, regression testing refers to a more comprehensive set of tests designed to ensure that none of the existing functionality from the previous release has been lost. Depending on the degree of automated testing you already have and the maturity of your CI/CD pipeline, you might choose to automate only smoke tests at first (as these will be run most frequently) and add a full suite of regression tests later once you have more time.
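One straightforward way to keep the two sets separate, assuming a pytest-based suite, is to tag tests with markers and select only the smoke subset on every build:

```python
# a sketch of splitting smoke and regression runs with pytest markers
# (register the markers in pytest.ini to avoid warnings)
import pytest


@pytest.mark.smoke
def test_user_can_log_in():
    ...  # core journey, runs on every build:  pytest -m smoke


@pytest.mark.regression
def test_password_reset_email_is_sent():
    ...  # broader check, runs in the nightly regression job:  pytest -m regression
```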

Browser, device, and platform testing

For many types of software, environment testing forms an important aspect of functional testing. It involves verifying that your application’s behavior is consistent and as expected when running on different devices, operating systems, and browsers (as applicable). 

While the level of repetition involved makes these tests a good candidate for automation, maintaining functional tests across a range of environments can itself be expensive, so it’s a good idea to prioritize based on the most commonly used environments.
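With Playwright, for instance, the same check can be parametrized across the browser engines you care about most; a sketch is shown below (the URL is a placeholder):

```python
# a sketch of running one functional check across several browser engines
import pytest
from playwright.sync_api import sync_playwright


@pytest.mark.parametrize("engine", ["chromium", "firefox", "webkit"])
def test_homepage_renders(engine):
    with sync_playwright() as p:
        browser = getattr(p, engine).launch()
        page = browser.new_page()
        page.goto("https://staging.example.com")  # placeholder URL
        assert page.title() != ""
        browser.close()
```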

Building an automated testing pipeline

While the above types of testing can all be automated, that does not mean they should be treated equally. When you’re building up your automated testing capability, frameworks such as the test pyramid provide a useful way to plan and structure your tests for maximum efficiency. Starting at the bottom of the pyramid with low-level unit tests that are quick to run provides the largest return on investment, as well-designed unit tests can identify a lot of issues.

When prioritizing integration, contract, and functional tests, focus first on areas that pose the highest risk and/or will save the most time. If you’re transitioning from manual to automated tests, your existing test plans will provide a good starting point for identifying the resource commitment and time required for each testing activity.
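A simple way to encode the pyramid’s ordering is to run the cheapest checks first and stop on the first failure, as in this sketch (tool names and directory layout are assumptions):

```python
# run_checks.py - a sketch of pyramid-ordered test stages with fail-fast behaviour
import subprocess
import sys

STAGES = [
    ["ruff", "check", "."],                        # static analysis: seconds
    ["pytest", "tests/unit", "-q"],                # unit tests: fast, broad coverage
    ["pytest", "tests/integration", "-q"],         # integration and contract tests
    ["pytest", "tests/e2e", "-q", "-m", "smoke"],  # slowest UI checks run last
]

for stage in STAGES:
    if subprocess.run(stage).returncode != 0:
        sys.exit(1)  # fail fast so slower stages never run on broken code
```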

Manual software testing

Given the range of software testing activities that can be automated, where does that leave manual testing? Automated testing makes use of machines to perform repetitive tasks – if you’re able to define in advance what needs to happen and what the result should be, and you need to run the test more than once, then it’s far more efficient to write a script than do it by hand.

Conversely, if you don’t know exactly what you’re looking for or it’s a new part of the system, then this is a good time for your QA team members to demonstrate their ingenuity. Exploratory testing of new functionality and acceptance testing for new features are not tasks that can or should be automated. While acceptance testing benefits from nuance and discretion in determining whether the requirements have been met, exploratory testing relies on human imagination to find unexpected uses that stretch the system to its limits.

When you do discover a bug, documenting the issue with details of the platform (device, OS, and browser as applicable), the steps to reproduce it and supporting screenshots, the expected behavior, and the actual output ensures that whoever picks up the ticket has all the information they need to fix it. 

Adding an assessment of the severity of impact and frequency of occurrence will also help your team prioritize the ticket. Findings from any manual testing should also feed into your automated tests so that your test suite becomes more robust over time.
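For illustration, a ticket capturing the details above might be structured along these lines (all values are invented):

```python
# a sketch of the information a useful bug report captures (all values invented)
bug_report = {
    "title": "Checkout button unresponsive on mobile Safari",
    "environment": {"device": "iPhone 13", "os": "iOS 17", "browser": "Safari"},
    "steps_to_reproduce": [
        "Add any item to the cart",
        "Open the cart and tap 'Checkout'",
    ],
    "expected": "The payment page loads",
    "actual": "Nothing happens and no error is shown",
    "screenshots": ["cart-checkout.png"],
    "severity": "high",     # impact on the user
    "frequency": "always",  # how often it occurs, to help prioritization
}
```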

Managing quality in production

If you’re automating what you can and using manual testing effectively, then quality assurance is an investment that pays dividends. But with so many QA activities taking place before your code is released, you may be wondering what’s left to go wrong in production. Unfortunately, the sheer complexity of software and the many variables – from device and platform to user interactions and variations in data – mean it’s impossible to test every possible eventuality.

The good news is that you can mitigate the damage caused by issues in production by building observability into your system and proactively monitoring the data your system generates. By maintaining an accurate picture of normal operations, you can identify issues as they emerge and take steps to contain the impact or release a fix quickly.
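In practice, that starts with emitting data you can act on. The sketch below shows one way to log a structured event per operation so that anomalies such as latency spikes or rising error rates stand out in your monitoring (the event fields are illustrative):

```python
# a sketch of structured, machine-readable logging for production monitoring
import json
import logging
import time

logger = logging.getLogger("checkout")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def handle_checkout(order_id: str, amount: float) -> None:
    start = time.monotonic()
    status = "ok"
    try:
        ...  # business logic would go here
    except Exception:
        status = "error"
        raise
    finally:
        logger.info(json.dumps({
            "event": "checkout_processed",
            "order_id": order_id,
            "amount": amount,
            "status": status,
            "duration_ms": round((time.monotonic() - start) * 1000, 1),
        }))
```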
