If you are a systems or software engineer tasked with writing or reviewing a requirements specification, knowing what makes a requirement a “good” one is extremely important. After 30 years working in system and software testing, I know that having access to a requirements document is rare when testing a new system, and having a useful, prescriptive one is rarer still. So I wanted to apply my own experience to help others create requirements specifications that deliver the intended result – higher quality software – as efficiently as possible.

From my perspective, a good requirement is a testable requirement. As you draft the requirement, ask yourself “How would I test this requirement to know that it was satisfied?” From this one qualifying condition, a number of other dimensions of requirement quality will flow.

In this blog series, I am examining these dimensions of requirements quality –

Part 1, discussing Unambiguity, can be found HERE

Part 2, discussing Atomicity, can be found HERE

Part 3, discussing Precision, can be found HERE

– and this fourth installment in the series focuses on another critical dimension:

Verifiability
Verifiability is closely aligned with some of the other aspects of good requirements discussed in previous posts, yet still merits its own consideration. To be verifiable, a requirement must be unambiguous and precise. But even if it meets both these criteria, it may still not be verifiable. It may not be possible to verify the requirement in a timely or cost-effective manner, or it may not be possible at all!

This situation occurs more frequently when defining non-functional requirements. One of the most common – and notorious – examples is often worded in this manner:

  • The system when shipped shall have zero defects

This requirement is both unambiguous and extremely precise, but it is also unattainable and impossible to measure – proving the absence of all defects would require exhaustive testing – which makes it impossible to verify.

Thankfully, there are other more realistic process metrics that can be substituted in its place, drawing from Six Sigma and other quality methodologies. For example:

  • The system when shipped shall have less than 3.4 defects per million opportunities

Is this verifiable? Yes. But is it feasible to verify? That’s debatable given the quantities involved. A better option might be:
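To see why the quantities make this debatable, a quick back-of-the-envelope calculation (a sketch, not part of the original requirement) shows how many opportunities you would have to observe before a system sitting exactly at the Six Sigma limit produces even one expected defect:

```python
# Six Sigma's long-term limit: 3.4 defects per million opportunities (DPMO).
dpmo_limit = 3.4

# Expected number of opportunities observed before one defect appears,
# on average, for a system exactly at the limit.
opportunities_per_defect = 1_000_000 / dpmo_limit
print(round(opportunities_per_defect))  # ~294118 opportunities per expected defect
```

Demonstrating compliance with statistical confidence would require observing many multiples of that figure, which is rarely practical within a test cycle.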

  • The system when shipped shall have no more than four defects per 10 KLOC, as estimated by the Monte Carlo seeding technique (see Appendix D)
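The requirement above refers to a seeding technique defined in its Appendix D, which is not reproduced here. A widely known family of such techniques uses a capture-recapture (Lincoln-Petersen style) estimate: seed a known number of artificial defects, run the test effort, and extrapolate from the fraction of seeded defects recovered. A minimal sketch, with all numbers hypothetical:

```python
def estimate_total_defects(seeded: int, seeded_found: int, native_found: int) -> float:
    """Capture-recapture estimate: if testing recovered seeded_found of the
    seeded defects, assume it found the same fraction of native defects."""
    if seeded_found == 0:
        raise ValueError("no seeded defects found; cannot estimate")
    return native_found * seeded / seeded_found

# Hypothetical run: 50 defects seeded, 40 recovered, 28 native defects found.
estimate = estimate_total_defects(seeded=50, seeded_found=40, native_found=28)
print(estimate)  # 35.0 estimated native defects in total
# Against a hypothetical 100 KLOC codebase, that is 3.5 defects per 10 KLOC.
```

The point is that this formulation turns an unmeasurable absolute into an estimate a test team can actually produce and compare against the stated limit.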

Availability requirements can also be expressed precisely, for example:

  • The system shall meet or exceed 99.99% uptime in its first year of operation
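The precision of this requirement can be made concrete: "four nines" over a year leaves very little room for downtime. A quick sketch of the arithmetic:

```python
# 99.99% uptime ("four nines") over a non-leap year of operation.
minutes_per_year = 365 * 24 * 60                 # 525,600 minutes
allowed_downtime = minutes_per_year * (1 - 0.9999)
print(round(allowed_downtime, 1))  # ~52.6 minutes of permitted downtime per year
```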

In this case, the requirement is unambiguous, precise and verifiable. From a testing perspective, this is a proper requirement. But software engineers don’t operate in a vacuum; our efforts and activities are rooted in business metrics and expected outcomes. So in each case, the requirements writer needs to ask an important question: is the level of availability demanded by the requirement justified for the product in question?

There is also no sure way to verify this requirement without running the system for a full year, which is unlikely to be an acceptable timeline for the business. So as you develop or review a requirement, be sure to highlight the resources and time needed to verify it as well, so that stakeholders signing off on the requirement are aware of the implications.

In the next post, I will be looking at another aspect of a good requirement, Independence.

About the Author

Ian Compton is a Solutions Architect on the pre-sales team at Persistent Systems. Ian has worked with requirements management for twenty years, starting at QSS with the DOORS V4 release and continuing, through a series of acquisitions, to system testing of IBM’s lifecycle solutions across their various iterations.