Application Lifecycle Management: What Makes a Good Requirement? – Part 2
If you are a systems or software engineer tasked with writing or reviewing a requirements specification, knowing what makes a requirement a “good” one is extremely important. After 30 years working in system and software testing, I know that having access to a requirements document is rare when testing a new system, and having a useful, prescriptive one is rarer still. So I wanted to apply my own experience to help others create requirements specifications that deliver the intended result – higher quality software – as efficiently as possible.
From my perspective, a good requirement is a testable requirement. As you draft the requirement, ask yourself “How would I test this requirement to know that it was satisfied?” From this one qualifying condition, a number of other dimensions of requirement quality will flow.
In this blog series, I am examining these dimensions of requirements quality – Part 1, discussing Unambiguity, can be found HERE – and this installment will focus on: Atomicity.
Here we’re using Atomicity in the database sense, referring to something that is indivisible and irreducible.
Given how we structure our daily communications, it’s natural to write requirements that are actually multiple requirements in one sentence. For example, you might think it is better and more efficient to collect related conditions together into a single requirement, such as:
- Cruise control shall disengage automatically if the brake pedal is pressed, or the vehicle speed reaches zero
This compound requirement will take two tests to verify: one to check that the cruise control disengages when the brake pedal is pressed, and a second to determine whether it disengages when the vehicle stops. As it is worded, failing either of these conditions fails the entire requirement – there is no partial credit – but it won’t be clear which condition actually caused the failure without digging into the test records.
This kind of compound requirement can cause even more issues from an implementation perspective. Two different teams might need to work on it – one on the brake detection unit, the other on the speed detection unit. In this situation, the requirement can only be marked as implemented when both teams have completed their work, which is unnecessarily difficult to track.
Forget what you learned in English class
So, although requirements writing may seem repetitive and go against everything you may have been taught in English class, it’s much better to have two atomic requirements that are worded like this:
- Cruise control shall disengage automatically if the brake pedal is pressed
- Cruise control shall disengage automatically if the vehicle speed reaches zero
Now each of these requirements can be separately tested, and thus pass or fail independently without impacting the other or the teams that support them.
It is also possible to construct requirements that have multiple outcomes, and this is just as problematic. For example:
- After thirty seconds without user interaction, the input data or current workspace shall be saved
Here you are left not only having to test two different outcomes in order to verify the requirement, but it is also not stated which condition should trigger which outcome. You’ll need to both split and clarify the requirement, like so:
- After thirty seconds without user interaction, if there is pending input data, it shall be saved
- After thirty seconds without user interaction, if the current workspace has been updated, it shall be saved
How to avoid combining requirements
How can you tell if you’re writing a requirement that’s actually two (or more) requirements in one? An easy indicator is the use of conjunction words such as:
- and
- or
- but
- also
- then
Looking for these words as you draft your requirements is a good test of your requirements’ atomicity.
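As a rough illustration (not from the original post), this kind of check can even be automated as a simple lint pass over your draft requirements. The sketch below is a minimal Python example; the exact list of conjunction words to flag is an assumption and should be tuned to your own style guide:

```python
import re

# Assumed watch-list of conjunction words that often signal a
# compound (non-atomic) requirement; adjust to taste.
CONJUNCTIONS = re.compile(r"\b(and|or|but|also|then)\b", re.IGNORECASE)

def flag_non_atomic(requirements):
    """Return the requirements that contain conjunction words
    and therefore deserve a closer look for atomicity."""
    return [req for req in requirements if CONJUNCTIONS.search(req)]

reqs = [
    "Cruise control shall disengage automatically if the brake pedal is pressed",
    "Cruise control shall disengage automatically if the brake pedal is pressed, "
    "or the vehicle speed reaches zero",
]

for req in flag_non_atomic(reqs):
    print("Review for atomicity:", req)
```

Running this flags only the second, compound requirement. A match is not proof that a requirement is non-atomic (conjunctions can appear legitimately inside a single condition), so treat the output as a review queue, not a verdict.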
In the next post, I will be looking at another aspect of a good requirement, Precision.
If you want to learn more, check out our Requirements Management Webinar Series.
About the Author
Ian Compton is a Solutions Architect for the pre-sales team at Persistent Systems. Ian has worked with requirements management for twenty years, starting at QSS with the DOORS V4 release and continuing, via acquisitions, to system testing of IBM’s lifecycle solutions through their various iterations.