
Conditions for Relevant Changes to Requirements Models

Let’s say that we have a requirements model M1, we make a change to it, and we call the changed model M2. What can be said about the relationship between the contents of M1 and M2?

To make this simpler to discuss, suppose that we changed M1 by refining a requirement in it. To simplify further, assume that M1 included only a single requirement R, and that we refined R into R1, R2, and (R1 and R2 imply R).

Following the definition of refinement (see [1] and this text), the relationship between R, R1, and R2 satisfies the following conditions (a small executable check of them is sketched after the list):

  1. R is a logical consequence of {R1, R2, (R1 and R2 imply R)};
  2. R is not a logical consequence of R1 alone, nor of R2 alone;
  3. R1 and R2 do not contradict each other (i.e., they are logically consistent);
  4. R is refined not by a single requirement, but by more than one (two in this case: R1 and R2).
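
Because the conditions are statements about classical propositional consequence, they can be checked mechanically for small models. Here is a minimal sketch in Python, using brute-force truth tables; the helper names (entails, consistent, check_refinement) and the toy requirements are my own illustrative assumptions, not an established tool or notation:

```python
from itertools import product

# Formulas are represented as Python functions from a truth assignment
# (a dict mapping atom names to booleans) to True/False.

def entails(premises, conclusion, atoms):
    """True iff every assignment satisfying all premises satisfies conclusion."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

def consistent(formulas, atoms):
    """True iff at least one assignment satisfies all formulas at once."""
    return any(all(f(dict(zip(atoms, values))) for f in formulas)
               for values in product([False, True], repeat=len(atoms)))

def check_refinement(R, refining, atoms):
    """Evaluate conditions 1-4 for a list of formulas refining R."""
    implication = lambda env: not all(f(env) for f in refining) or R(env)
    return {
        "1. R follows from the refinement": entails(refining + [implication], R, atoms),
        "2. R follows from no single Ri": all(not entails([Ri], R, atoms) for Ri in refining),
        "3. the refinement is consistent": consistent(refining, atoms),
        "4. more than one requirement": len(refining) > 1,
    }

# Toy instance (purely illustrative):
#   R  = "data is backed up"       -> atom backed_up
#   R1 = "a backup job runs"       -> atom job_runs
#   R2 = "the job copies all data" -> atom job_copies
atoms = ["job_runs", "job_copies", "backed_up"]
R  = lambda env: env["backed_up"]
R1 = lambda env: env["job_runs"]
R2 = lambda env: env["job_copies"]
print(check_refinement(R, [R1, R2], atoms))  # all four conditions hold
```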

Because I assume that the conditions above hold, I am also assuming that M1 is a set of formulas in classical logic. It follows that {R, R1, R2, (R1 and R2 imply R)} is a set of propositions, which I choose to call requirements here.

In terms of change, let’s simply recall what happened between M1 and M2. In M1 we only had R. We made M2 by refining R. M2 includes:

  • R,
  • R1,
  • R2,
  • R1 and R2 imply R.

An important observation for the question I started this text with is that the conditions for refinement (see above) hold over the propositions in M2. They obviously cannot hold for those in M1, and the conditions for refinement guarantee this, as well as guaranteeing that R1, R2, and (R1 and R2 imply R) are not all in the logical closure of the propositions in M1. The important implication of this observation is that information was added to M1 to obtain M2.
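
To make the point about added information concrete, we can reuse the entails helper and the toy requirements from the sketch above to test which of the new propositions already follow from M1 = {R} (writing Cn(M1) for the logical closure of M1):

```python
# Continuing the toy instance: M1 = {R}. Test whether each proposition
# added in M2 already follows from M1; any that does not is new information.
implication = lambda env: not (R1(env) and R2(env)) or R(env)
for name, f in [("R1", R1), ("R2", R2), ("(R1 and R2 imply R)", implication)]:
    print(name, "in Cn(M1):", entails([R], f, atoms))
# -> R1: False, R2: False, the implication: True (it follows from R alone)
```

In this run, R1 and R2 are outside the closure of M1, while the implication happens to follow from R alone; one new proposition outside the closure is enough for the closure of M2 to strictly contain that of M1.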

A further interesting way to think about the relationship between M1 and M2 is to see them as snapshots in a process of changing some initial set of propositions, which in our case is M1. 

One way, then, to describe what happened to make M2 from M1 is that some kind of ampliative inference had to be done; call it discovery, design, or whatever else you want to name the act of finding out that R1, R2, and (R1 and R2 imply R) refine R.

Although we know how to define entailment, as well as other relationships over propositions in M1, M2, and those that would follow from further changes, I’m not aware of work in requirements engineering research that has looked at defining properties of good ampliative inference steps, the steps that lead from one iteration of the model to another, i.e., from M1 to M2.

Each requirements modeling language you pick restricts what you can do to the models you make. For example, refinement is an allowed operation on a KAOS goal model, but decomposition isn’t defined in the KAOS language; therefore, even if you intuitively do decomposition (see this text and [2]), the relationship between the requirements (goals, in KAOS) you decomposed will be that of goal refinement.

Refinement is actually an easy case: if the work to do on M1 to produce M2 is to refine the requirement R in M1, then the definition of refinement as an operation or process, together with the conditions that should hold in M2, is informative about what we are looking to accomplish through the ampliative inference step from M1 to M2. Specifically, we need to add detail to R, rather than, say, replace it, or find a requirement that is more abstract / less detailed than R. This admittedly isn’t very precise, but it is better than in many other cases of ampliative inference.

There are no general rules for making “the right” ampliative inference step on some given input. There are, however, rules for good ampliative inference in more specific settings.

For example, we can say that every change of a requirements model needs to yield an explanation, and we can be demanding about what constitutes a good explanation (see this text). 

In short, then, any change of a requirements model needs to satisfy two conditions:

  1. There needs to be an explanation of the change, and the explanation needs to satisfy the conditions for being a “good” explanation (see this text for a discussion of these conditions, for explanations in general);
  2. The explanation must be acceptable to the individuals/stakeholders who have been given the right to accept/reject model changes, where the explanation is acceptable if and only if there is no argument X that attacks an element of the explanation and that the same group of people finds acceptable. On acceptability, see [3] and this text for some nuances; a minimal sketch of this acceptability test follows the list.
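
Condition 2 can be made operational for small cases. The sketch below makes two simplifying assumptions that go beyond the text above: arguments and attacks are given as a finite Dung-style attack graph [3], and “the group finds X acceptable” is read as “X is in the grounded extension” (one of several semantics Dung defines; the stakeholders could just as well use another). All identifiers are illustrative:

```python
# args is a set of argument labels; attacks is a set of
# (attacker, attacked) pairs, forming a Dung-style attack graph [3].

def grounded_extension(args, attacks):
    """Least fixed point of F(S) = {a : every attacker of a is itself
    attacked by some member of S}, computed by iteration from the empty set."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    s = set()
    while True:
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

def explanation_acceptable(explanation, args, attacks):
    """Condition 2 under the grounded-semantics reading: acceptable iff
    no accepted argument attacks an element of the explanation."""
    accepted = grounded_extension(args, attacks)
    return not any((x, e) in attacks for x in accepted for e in explanation)

# Toy usage: E is (an element of) the explanation, X attacks E, Y defeats X.
args = {"E", "X", "Y"}
attacks = {("X", "E"), ("Y", "X")}
print(explanation_acceptable({"E"}, args, attacks))  # True: X is defeated by Y
```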

References

  1. Darimont, Robert, and Axel Van Lamsweerde. “Formal refinement patterns for goal-driven requirements elaboration.” ACM SIGSOFT Software Engineering Notes 21.6 (1996): 179-190.
  2. Van Lamsweerde, Axel. “Goal-oriented requirements engineering: A guided tour.” Proceedings of the Fifth IEEE International Symposium on Requirements Engineering. IEEE, 2001.
  3. Dung, Phan Minh. “On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games.” Artificial Intelligence 77.2 (1995): 321-357.
