
Posted July 25, 2017

Organizational Best Practices for Testing Success


Implementing modern development practices has as much to do with how teams operate as with how they leverage tooling, if not more. Organizations that have put up barriers between teams will never fully leverage automation to improve how they build and deploy applications. Even in the most modern development environments, testing is the part of the delivery chain that suffers the most. In this post, I summarize the results of interviews with 30+ companies, highlighting the traits of the organizations that are all-stars in testing.

Organizational Structure

It’s likely you and your team have little influence over how development groups are organized. Organizational change is always top-down, and it’s hard to know how much those executing the work can do to drive it. So for this post, let’s assume everyone on the team can change structure on a dime. In both large and small organizations, this is what I’ve seen work best:

  • Two-pizza teams: All team structures are two-pizza teams, meaning each application team or DevOps team is no larger than seven people. This includes cross-functional involvement (which is clarified below). These smaller units introduce fewer distractions and can focus on results for bite-size aspects of the application. This also indirectly impacts application architecture, because a monolithic, waterfall-developed application will not support two-pizza teams. Your application does not need to be microservices-based, but there needs to be enough segmentation that each team’s unit of work is discrete.

  • Shift-left testing: Developers are accountable for testing their own applications. They do this in continuous integration (CI) environments at the cadence of their choosing, but the ideal is on every commit (see the sketch after this list). By the time a team’s code makes it to systems testing or delivery, it should be green on all the tests assigned to that team.

  • Quality engineering (QE) is a steward: The above does not mean that developers choose what to test. The quality engineer(s) decide what to test and how, but they do it as stewards of the testing process. They provide each two-pizza team the methodology, test suite, and technology to run the tests as they code. If, in shifting left, developers spend their time building the test strategy, it defeats the purpose of having them build functionality faster. This means the quality engineer works with the teams on a regular basis.
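
To make the shift-left point concrete, here is a minimal sketch of a per-commit test gate. The test runner (pytest) and the tests/ path are assumptions for illustration; the only real requirement is that the job exits non-zero so the CI server marks the commit red.

```python
#!/usr/bin/env python3
"""Minimal per-commit test gate (a sketch, not tied to any specific CI tool).

Assumes the team's suite runs with pytest under tests/ and that a non-zero
exit code is how the CI server marks the commit red.
"""
import subprocess
import sys


def run_team_suite(test_path: str = "tests/") -> int:
    """Run only the tests assigned to this two-pizza team; return the exit code."""
    result = subprocess.run(
        ["pytest", test_path, "--maxfail=1", "-q"],  # fail fast for quick feedback
    )
    return result.returncode


if __name__ == "__main__":
    # The CI job calls this on every commit; a non-zero exit blocks promotion
    # to systems testing until the team's tests are green.
    sys.exit(run_team_suite())
```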

Talent

In the most successful testing organizations, everyone is an engineer, which means everyone can write code at some level, even if it’s just scripting. This is critical, and for organizations accustomed to manual testing, it can be a challenging transition. Sometimes there is an opportunity to train manual testers to move into strategy roles, but often there isn’t. In those cases, organizations tend to start building the proper structure with new applications and phase out old approaches over time. Testers who are not technical struggle with the open communication that agile environments require, and they cannot support the idea of quality engineering as a steward, because stewards need to know how to create automation for the rest of the team.

In the above definition of two-pizza teams, I did not isolate them to a single function, meaning a team is not all developers and not all QA. The reason is that while most two-pizza teams will be development teams focused on a specific subset of a broader application, there is also a services layer driving the delivery chain, consisting of the DevOps/ITOps/SRE individual(s) and quality engineering. (Let’s call it “DevOps+QA.”)

It is very important that those implementing the delivery chain, monitoring production, and providing infrastructure join forces with QA. Together, they are best suited to vet and choose the automation used in the delivery chain, and to decide how it should be implemented. They are also best suited to identify chatbots and monitoring tools that can benefit everyone. They have a view across multiple application components and can see the entire delivery chain as a single unit spanning all the two-pizza teams.

This is in sharp contrast to older team structures, where QA was the furthest removed from ITOps and sat only at the very end of the delivery chain. A DevOps+QA unit will typically facilitate three to four application teams, or in smaller organizations, the entire environment.

Workflow

The software delivery chain should be treated as its own application. On a continuous basis, its successes and failures (bottlenecks) should be identified and addressed. DevOps+QA owns the delivery chain application and is responsible for making it as efficient as possible, which includes technology, communication, and processes.
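
As one way to treat the delivery chain like an application, here is a rough sketch that times each stage so DevOps+QA can see where the bottlenecks are. The stage names and commands are hypothetical placeholders, not references to any particular toolchain.

```python
"""Sketch: instrument delivery-chain stages to surface bottlenecks.

The stage names and commands are hypothetical placeholders; the point is that
DevOps+QA measures the chain the same way it would measure any application.
"""
import subprocess
import time

# Hypothetical stages of a delivery chain, in order.
STAGES = [
    ("build", ["make", "build"]),
    ("unit-tests", ["pytest", "tests/unit", "-q"]),
    ("functional-tests", ["pytest", "tests/functional", "-q"]),
    ("package", ["make", "package"]),
]


def run_chain() -> dict:
    """Run each stage, record its duration, and stop on the first failure."""
    timings = {}
    for name, cmd in STAGES:
        start = time.monotonic()
        returncode = subprocess.run(cmd).returncode
        timings[name] = time.monotonic() - start
        if returncode != 0:
            print(f"Stage '{name}' failed after {timings[name]:.1f}s")
            break
    return timings


if __name__ == "__main__":
    # Print the slowest stages first -- these are the candidates for improvement.
    for stage, seconds in sorted(run_chain().items(), key=lambda kv: -kv[1]):
        print(f"{stage:>20}: {seconds:6.1f}s")
```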

The entire organization should be driving toward continuous delivery, where builds and testing are 100% automated all the way up to deployment. Between delivery and deployment there is a final gate that is still automated, but manually initiated, often by the QE. Continuous deployment is useful for some organizations, but not all.
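
As a rough illustration of that final gate, here is a minimal sketch of a deployment that is fully automated but manually initiated. The deploy script path is a hypothetical placeholder; in practice the gate is usually a button in the release automation tool.

```python
"""Sketch: a fully automated deployment that still waits for a manual go-ahead.

Everything before this point is assumed to be automated and green; the QE (or
whoever owns the gate) simply initiates the final step. The deploy command is
a hypothetical placeholder for the organization's release automation tool.
"""
import subprocess
import sys


def deploy(release_id: str) -> int:
    """Run the automated deployment for an already-validated release."""
    return subprocess.run(["./scripts/deploy.sh", release_id]).returncode


if __name__ == "__main__":
    release = sys.argv[1] if len(sys.argv) > 1 else "latest-green-build"
    answer = input(f"Deploy release '{release}' to production? [y/N] ").strip().lower()
    if answer == "y":
        sys.exit(deploy(release))
    print("Deployment not initiated; the release stays delivered but not deployed.")
```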

Technology

Automation does not happen without great tools and an integrated delivery chain. It’s the integration that is critical. Tools have to integrate with each other; for example, the functional testing tool needs to integrate with the release automation tool. But they also need to integrate into the team (for example, out-of-the-box chatbots that support transparency across the entire organization).

The landscape for such tooling is impressive and robust. The types of delivery chains I have witnessed are incredible. (Picture a seamless chain of six or more tools, from backlog to production, that all work together.) A few organizations I’ve encountered assimilate new, potentially beneficial tools at a rapid pace. They are able to bring in a new tool, test it, and validate it without causing any disruption, and when a tool proves successful, they increase the value of the delivery chain quickly.
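
As a small example of what “integrating into the team” can look like, here is a sketch that posts a delivery-chain event to a team chat webhook. The webhook URL and payload shape are assumptions; most chat tools accept something similar, but the details depend on the platform.

```python
"""Sketch: post a delivery-chain event to a team chat webhook.

The webhook URL and payload format are assumptions for illustration; adjust
both to whatever chat tool the organization actually uses.
"""
import json
import urllib.request

WEBHOOK_URL = "https://chat.example.com/hooks/delivery-chain"  # hypothetical endpoint


def notify(stage: str, status: str, detail: str = "") -> None:
    """Send a short, organization-visible message about a pipeline event."""
    payload = {"text": f"[{stage}] {status} {detail}".strip()}
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:  # raises on HTTP errors
        response.read()


if __name__ == "__main__":
    notify("functional-tests", "passed", "42 tests, 3m12s")
```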

DevOps environments are living; they change on a regular basis. If they didn’t, they would not be able to support the continuous evolution of how applications are built, tested, and delivered. I’ve seen many organizations fall into the trap of believing that tools are what will get them to the next step. They fail to realize that their organizational structure has a huge impact on their ability to build better applications. The organizations that do realize this have been able to execute on their goals much faster and build a sustainable, fast-moving environment.

Chris Riley (@HoardingInfo) is a technologist who has spent 12 years helping organizations transition from traditional development practices to a modern set of culture, processes and tooling. In addition to being a research analyst, he is an O’Reilly author, regular speaker, and subject matter expert in the areas of DevOps strategy and culture. Chris believes the biggest challenges faced in the tech market are not tools, but rather people and planning.
