Posted March 10, 2020

How QA Teams Can Use Software Monitoring Tools

If you work in QA, you're probably accustomed to thinking of software monitoring as someone else's job. Traditionally, responsibility for monitoring applications fell to IT teams; QA's role ended with pre-deployment testing, and QA engineers did not usually touch monitoring tools.

But the reality is that monitoring tools, meaning tools designed to track application availability and performance and to alert teams to problems, aren't just for IT teams. They can also help QA engineers do their jobs more effectively.

Here's a look at how monitoring tools like Prometheus, Sumo Logic, and Splunk can help QA, as well as the challenges QA teams should be aware of when working with monitoring tools and the data they produce.

How monitoring tools help with software QA

Even though QA engineers are used to working primarily with software testing tools rather than monitoring tools, the latter can nonetheless help them do their jobs better in several ways.

Tracking QA's impact on application quality

Perhaps the most obvious benefit is the role that monitoring tools can play in tracking the impact that QA has on application quality.

Without monitoring tools and data, software delivery teams are shooting in the dark (or at least the twilight) when it comes to determining the relationship between what the QA team is doing and the quality of production applications. There is no systematic way to know how the introduction of a new testing routine, or the migration of a test suite from manual to automated execution, affects the reliability, usability, or performance of the application. You might be able to infer some relationships between QA processes and application quality, but those inferences will be subjective and ad hoc.

When the QA team systematically monitors the application in production instead of just during testing, however, it becomes much easier to draw relationships between QA processes and application quality. Whenever QA changes something, monitoring data can be used to gauge the impact of the change.
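
For example, a QA engineer could pull a single KPI, such as the HTTP error rate, from a monitoring tool like Prometheus for the windows before and after a new test suite went live and compare the two values directly. The sketch below is a minimal illustration in Python, assuming a Prometheus server at a hypothetical address and a conventional http_requests_total counter; the metric names, labels, and timestamps are placeholders, not a prescribed setup.

# Sketch: compare an error-rate KPI before and after a QA process change.
# PROM_URL, the metric name, and the timestamps are hypothetical; adjust
# them to whatever your monitoring stack actually exposes.
import requests

PROM_URL = "http://prometheus.example.com:9090"
ERROR_RATE = (
    'sum(rate(http_requests_total{status=~"5.."}[1h]))'
    ' / sum(rate(http_requests_total[1h]))'
)

def error_rate_at(unix_time):
    """Evaluate the error-rate KPI at a single point in time."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": ERROR_RATE, "time": unix_time},
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

# Compare the day before the new test suite went live with the day after.
before = error_rate_at(1583712000)  # hypothetical cutover minus one day
after = error_rate_at(1583884800)   # hypothetical cutover plus one day
print(f"Error rate before: {before:.4%}  after: {after:.4%}")

Even a simple comparison like this turns "we think the new test suite helped" into a number that can be tracked from release to release.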

This adds up to a more effective way of planning QA operations and determining what is working and what isn't.

Establishing universal KPIs

One common challenge in DevOps delivery pipelines is establishing metrics that can be used to track code quality at every stage of the pipeline. When developers use one set of tools to track progress, QA uses another, and IT uses still another, it's impossible to construct a single body of data that reflects application quality across the pipeline.

By using monitoring tools, however, QA teams can help solve this problem. They can work with IT engineers to establish a common set of KPIs that everyone will use to measure application health and performance. QA can then write tests that focus on evaluating those KPIs before an application is released, while IT tracks the same KPIs post-deployment. Developers can also participate by focusing on the KPIs when they write new code.
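
In practice, that might mean the teams agree on a concrete KPI, say, a 95th-percentile response time under 500 milliseconds, which QA encodes as a pre-release check while IT alerts on the same threshold in production. Below is a minimal pytest-style sketch in Python; the staging URL, endpoint, sample size, and threshold are illustrative assumptions rather than recommendations.

# Sketch: a pre-release test that asserts the same KPI that IT monitors
# in production. The threshold and staging URL are hypothetical; keeping
# them in one shared config file prevents QA tests and production alerts
# from drifting apart.
import statistics
import time

import requests

P95_LATENCY_BUDGET_SECONDS = 0.5  # agreed KPI: p95 response time under 500 ms
STAGING_URL = "https://staging.example.com/api/health"  # hypothetical endpoint

def test_p95_latency_meets_kpi():
    samples = []
    for _ in range(50):
        start = time.monotonic()
        requests.get(STAGING_URL, timeout=5)
        samples.append(time.monotonic() - start)
    p95 = statistics.quantiles(samples, n=20)[18]  # 95th-percentile cut point
    assert p95 <= P95_LATENCY_BUDGET_SECONDS, f"p95 latency {p95:.3f}s exceeds the KPI"

Because the production alert (whether it lives in Prometheus, Sumo Logic, or Splunk) uses the same 500-millisecond yardstick, a regression that slips past the test still shows up against a number everyone has already agreed on.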

Justifying investment in QA

Using monitoring tools to track QA's impact on application quality and to establish universal KPIs also helps demonstrate the value of QA. In a world where developers and IT Ops engineers (the two nominal ingredients in the DevOps formula) get all the love, there is constant pressure at many organizations to justify investment in QA. Monitoring tools can play a large role in helping to do that.

The perils of monitoring tools for QA

It's worth noting that there is a flipside to the benefits described above. When disconnected from the QA process, monitoring tools can also be used to make a case against QA.

The argument here is simple (and familiar to some QA engineers): when your monitoring tools are sophisticated enough, you don't need QA at all. This is an increasingly common talking point, especially among modern APM vendors who promise that their monitoring tools can enable such intelligent monitoring and fast resolution of issues that having QA teams vet code thoroughly before release isn't even necessary.

While this may make monitoring tools seem like a threat to QA, it's also exactly why QA teams need to embrace monitoring tools and make sure they reinforce rather than replace the traditional QA process. Even the most sophisticated monitoring tools are not a substitute for software testing. APM tools can't fix problems before they reach production, and they are of little use for catching certain classes of issues that QA excels at finding and addressing before deployment, such as usability problems.

Conclusion

The bottom line: although monitoring tools may sometimes seem like a threat to QA, they're an increasingly important complementary resource for QA engineers. With the help of monitoring tools, QA engineers can make the overall importance of QA clearer, tie the QA process more directly to application quality in production, and collaborate more seamlessly with developers and the IT team.

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.
