Requirements and Change

Richard Brooksby, Ravenbrook Limited, 2003-02-24


Software development is all about increasing the value of our products to customers. The more direct the connection between customer requirements and the changes made by software engineers, the more likely that those changes will improve the product and increase its value for the customers, and therefore the organization. I describe a simple model of requirements engineering and a method of managing change to software which focusses directly on those requirements.




1. Introduction

In this paper I'll tell you about the best methods I've found for managing change to a software product. The method I describe covers many of the key practices of level 2 of the Capability Maturity Model for Software Engineering [SEI 2002-08a]. It's a requirements-driven process, not specialized for any particular programming method.

The method was originally developed in small software organizations of about 20 developers, but I have since applied scaled-up versions in organizations with over 500 developers.

This paper is based on Product Quality through Change Management [RB 1999-05-20], but with more focus on requirements and their relationship to change. It was written for presentation at the British Computer Society Configuration Management Specialist Group conference "Implementing CM Everywhere: Change, Configuration and Content Management", to be held 2003-07-24/25 at Homerton College, Cambridge, UK.

2. The goal

The goal of development is to increase the value of the product. The value is measured by the customers in the product's market. If your product is valuable enough to them they will pay you part of that value.

Developing a new product is the same as developing an existing product; a new product is non-existent at first and so starts with a value of zero.

But our customers keep changing their minds about what's valuable. Software projects are faced with continually and rapidly changing requirements. A valuable product must still meet those requirements, and it follows that a process that's going to produce a valuable product must track them carefully.

3. How to deliver value

The most critical thing a software organization must do is understand what is valuable to its customers. Without this understanding what hope is there of producing anything good? And yet understanding customer needs is frequently neglected in software organizations.

A software organization must maintain a model of what is valuable to its customers. This model is called the requirements, and I'll describe them in more detail in section 4, "Knowing what is valuable".

The requirements can be used to assess the value of a design or implementation.

The problem with the requirements is that they are always changing, and the list of things customers want always seems to grow. So the requirements can never be fully met. If we try to meet them all in one long delivery cycle we're bound to deliver something that is not wanted, or even deliver nothing at all until it's too late.

Attempting to meet all the requirements at once is called big bang delivery [Gilb 1988, §1.8]. The software engineering organization goes into hyperspace and hopes to come out somewhere near the desired result (see figure 1). Nothing works until the whole thing is done. A wasteful "code freeze" is needed to put things back together and "get it working". Furthermore, the product goes untried until near the end of the cycle, so that errors in requirements, design, or coding are not discovered until it's too late to do anything about them. It's an extremely risky strategy, usually leading to unreliable software with a long time to market.

Figure 1. Big bang delivery

Errors in requirements are the most deadly. If your organization doesn't understand what the customer wants then it doesn't matter how well designed your product is, or how well the code works.

A good rule of thumb is that a requirements error costs ten times as much as a design error, which costs ten times as much as a coding error.

What is apparently an "error" in requirements could just be a requirement that has changed. The longer it takes to deliver, the more likely it is that requirements will have changed, and these changes erode the value of the product — much more than bad design or poor coding.

The solution to this problem is evolutionary delivery [Gilb 1988, §7]. The idea is simple:

  1. Compare the software now with the requirements, which are what we currently think the customer wants. This gives us a "sighting" — a direction to head in.
  2. Take a step in that direction by tackling the most valuable requirement, delivering the maximum value to the customer in the shortest time.
  3. Deliver a complete product (that is, fully packaged, documented, and so on). It doesn't do everything the customer wants, but it's much better than before.
  4. Gather information from the customer, especially reactions to the delivered product. This will modify our idea of what the customer wanted (usually quite a lot).
  5. Iterate.

This is called the evolutionary delivery cycle. Take a sighting, make a step, take another sighting, another step, and so on (see figure 2). We need to keep up with the changing requirements, of course, but we are much more likely to get close to what the customer really wants, even if the customer's ideas change. The shorter we can make the cycles the better for us. I have found a cycle of three months works well for small and large projects alike. There is less effort wasted going in the wrong direction and we deliver a more valuable product.

Figure 2. Evolutionary delivery

Actually, every programmer already knows about this process, because it's just like the "edit-compile-run" cycle. Programmers do some development, try out the result, modify their ideas, then do some more development, and so on. Here, we "edit" the product, "compile" it into a release, and "run" it by the customer.

Software development, like any design activity, is a very variable process [Reinertsen 1997, pp17-18]. It's very hard to predict how long it will take to do something, and it'll pretty much always take as long as you give it, and usually longer. That's why it's important not to schedule your deliveries according to your estimates of how long something will take, but instead to pack what you can do into a fixed delivery schedule. This doesn't eliminate the variability, but it does break the dependency between it and the delivery of a valuable product.

The earlier we can make a release to the customer the better and more accurate our sightings will be, and therefore the higher the quality of the result. The customers are also happy because they have software which meets some of their most important requirements sooner.

Another significant benefit of this technique is that it clears up important delivery issues early. Quite often there are critically important things that get forgotten until the last minute (like the user manual). By delivering a complete product at each stage we discover these much earlier, and last minute panics are averted. Last minute work tends to be defective, and if it's something critical then that's even worse.

The disadvantage is that you have to do more analysis and planning up front, and it seems to take longer to make changes to the software that you "know" are needed. The advantage is that the final result is much closer to what's wanted, so you don't waste time on unnecessary changes or reworking it because you find out you didn't know after all. In my experience the advantages outweigh the disadvantages by a significant margin. The trouble is that you experience the disadvantages first.

4. Knowing what is valuable

The CMM provides an excellent checklist of things your organization should be doing to manage requirements [SEI 2002-08a, pp82-93, pp202-222]. I don't intend to list all the activities here. Instead, I'm going to concentrate on the form and organization of the requirements, which is something the CMM doesn't talk about.

The most important thing is that the organization maintains the requirements, and makes them available to everyone they affect.

Making them available also means making them clear and comprehensible. Everyone working in the software engineering organization must, to some extent, be able to assess their work against the requirements in order to understand how it contributes to the value of the product. But to achieve this the organization must assign enough resources to gathering, analyzing, developing, and maintaining the requirements.

To get a really productive environment, the organization must link all of its development work to the requirements, so that the value justification for any piece of work can be understood and reviewed by anyone [SEI 2002-08a, p88]. Everyone can then understand how their work adds value, and use all of their skills, experience, and brainpower to make the biggest contribution they can [McGregor 1960].

The requirements might be stored in a "living document" or a database system, as long as they're maintainable and accessible.

Requirements can be represented as a table. A simplified example is shown in figure 3.

Id    Metric                           Level        Value  Source                                 Type      Area
1873  Tea provided for workforce       yes          $5000  Proposal meeting 2003-02-29            customer  vending
1874  Tea frequency                    25 cups/h           Req 1873; Analysis meeting 2003-03-09  product   tea system
1875  Tea system operating hours       08:00/19:00         Req 1873; Analysis meeting 2003-03-09  product   tea system
1876  Tea temperature                  80°C                Req 1873; Analysis meeting 2003-03-09  product   tea system
1877  Supply workforce size            50 people           Req 1873; Analysis meeting 2003-03-09  product   tea system
2034  Water heater output temperature  90°C                Tea maker design version 2             derived   tea system
Figure 3. Requirements as a table
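A table like figure 3 maps naturally onto a simple record type. The following Python sketch is purely illustrative (the class, field names, and query are mine, not part of the method described here):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    """One row of the requirements table; fields mirror figure 3."""
    id: int
    metric: str            # what is measured
    level: str             # the level at which the metric should be met
    value: Optional[int]   # estimated value to the customer, if known
    source: str            # where the requirement came from (vital!)
    type: str              # "customer", "product", or "derived"
    area: str              # the part of the product it concerns

requirements = [
    Requirement(1873, "Tea provided for workforce", "yes", 5000,
                "Proposal meeting 2003-02-29", "customer", "vending"),
    Requirement(2034, "Water heater output temperature", "90 degrees C", None,
                "Tea maker design version 2", "derived", "tea system"),
]

# The type and area fields let everyone find the requirements relevant to them:
tea_reqs = [r for r in requirements if r.area == "tea system"]
```

Any representation with these fields would do, as long as it stays maintainable and accessible.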

The existence of a requirement doesn't mean that the organization is committed to meeting the requirement. That's dealt with by specifications and planning (see section 5, "Planning to increase value"). The requirement is there to indicate what's valuable to the customer, and nothing more.

Each requirement has at least the following fields:

  - Id: a unique identifier, so that the requirement can be referenced and traced
  - Metric: what is to be measured
  - Level: the level at which the metric should be met
  - Value: the estimated value to the customer of meeting the requirement
  - Source: where the requirement came from

There's no need for a separate "description" field. The metric should be enough to completely describe what the requirement is about. Avoid redundancy.

The source is absolutely vital. Without a source it's impossible to check that the requirement is valid, and since requirements are always changing you'll need to check them regularly. You want to be able to make requirements obsolete in order to be able to change your product. Without a source you'll never know when a requirement has gone away.

The value of things changes over time. A valuable feature today won't matter much tomorrow, especially if it is something that differentiates your product from its competition. When the competitors catch up it won't be worth nearly as much. You should model this decay in value so that you can understand the cost of delay [Reinertsen 1997]. This is what it costs you (in lost value) when your product is delayed. Understanding the cost of delay can help you make decisions about buying tools or hiring staff to get your product out earlier.
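As a toy illustration of the cost of delay, here is an invented linear decay model (the function and the figures are made up for the example; [Reinertsen 1997] develops the real economics):

```python
def cost_of_delay(value_now: float, decay_per_month: float, months_late: float) -> float:
    """Value lost if delivery slips, assuming the feature's value decays
    linearly as competitors catch up (an invented, illustrative model)."""
    return min(value_now, value_now * decay_per_month * months_late)

# A $100,000 differentiating feature losing 10% of its value per month,
# delivered three months late:
lost = cost_of_delay(100_000, 0.10, 3)  # $30,000 of value gone
```

If a tool or an extra engineer costing less than that lost value would recover the delay, it pays for itself.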

I also recommend:

  - a status field, giving each requirement a lifecycle (for example: proposed, accepted, met, obsolete)
  - a type field, recording whether the requirement is a customer, product, or derived requirement
  - an area field, identifying the part of the product to which the requirement applies

In fact, you should consider defining a complete lifecycle for requirements, similar to lifecycles often used for defects in a defect tracking system. If you don't make requirements obsolete your product will always grow larger and more complex, even if the problems it's supposed to solve merely change, or become simpler. Eventually, the product will cost more to maintain than the value it generates, and that will be the end of it.

In a large and complex product, the type of requirement becomes important. Some people in the organization will be gathering, analyzing, planning, and designing at quite a high level. They're working in customer terms. These requirements inspire designs, which may produce further requirements for product components, and eventually these requirements are numerous, detailed, technical, and individually of little concern to the customer. Type and area fields allow everyone to see the requirements that are relevant to them.

It's not necessary for the requirements to be complete, provided they cover the things that are important (and therefore valuable) to the customer.

The most important things to define are usually not features, but attributes [Gilb 1988, §9]. In fact, defining features of your product as requirements can be a dangerous trap.

Usually, the customer has a problem that they want to solve, and your product is supposed to help. It's quite rare that the problem is that they can't do something at all, only that they can't do it quickly or efficiently enough. If the software doesn't make the job faster or cheaper then it fails in its purpose. It's therefore vital that you understand the problem and define the top-level, customer requirements in terms of the real problem. Deciding which features might help to solve the problem is a matter of product design, and requirements that result from that are a result of choices you have made and could change in the future if the nature of the problem changes. If you don't understand the problems your product solves then it will become irrelevant and valueless when the problems change.

These critical attributes are usually difficult to measure and control, and so it's tempting to stick to what you can see: the features of your product. You must attempt to understand the critical attributes, even if your metrics involve hand-waving. The results are often enlightening.

Customer requirements are not usually stated in a very measurable way. "It works out of the box" is a typical example. However, that's what the customer wants and so you must include it as a customer requirement. You must analyze it into product requirements: installation time, time to first use, ease of use, experience of first 30 minutes, etc. [Gilb 1988, §9] [SEI 2002-08a, pp208-212]. Then you can improve the design of your product to meet the requirement and be able to measure the value of your implementation. The product requirements should be reviewed with the customer if at all possible. (If there's no specific customer then they should be checked against typical representatives of the market.) It's essential to ensure that the product requirements are a correct interpretation of the customer requirements. And frequently these checks will reveal that the customer requirements aren't accurate either! Product requirements should be traceable to the analysis, records of checks done, and of course the customer requirements from which they are derived.

After customer and product requirements, a third kind of requirement is useful. You will make design decisions which will result in further requirements. Usually these requirements will be about parts or aspects of the product. They're not things the customer wants directly, but are consequences of the decisions you've made. These are derived requirements, and need to be treated just like the other requirements in your system. The "source" of a derived requirement is the design it comes from, and that design should be justified by the other requirements it meets. The important thing is that derived requirements can change as a result of changes in the design, usually without reference to the customer, so it's very important to keep track of what's derived and what's not. You don't want some old internal decision constraining your future ability to meet your real customer requirements.
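Keeping track of what's derived and what's not is largely a matter of following source links. A minimal sketch, reusing the example ids from figure 3 (the dictionary layout is my assumption, not prescribed here):

```python
# Each requirement records the ids of the requirements its source traces to.
sources = {
    1874: [1873],   # product requirement, analyzed from customer req 1873
    2034: [1874],   # derived requirement, via the tea maker design
}
types = {1873: "customer", 1874: "product", 2034: "derived"}

def customer_roots(req_id: int) -> set:
    """Follow source links back to the customer requirements."""
    if types[req_id] == "customer":
        return {req_id}
    roots = set()
    for parent in sources.get(req_id, []):
        roots |= customer_roots(parent)
    return roots

# If customer req 1873 becomes obsolete, everything that traces to it
# (here, the water heater requirement) is suspect:
heater_roots = customer_roots(2034)  # {1873}
```

The same traversal, run in reverse, tells you which internal decisions would be invalidated by a change to a customer requirement.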

Figure 4 shows how product and derived requirements are generated from customer requirements, and the traceability relationships between them. Note that the loops between design and derived requirements are intended to indicate the possible many levels of design, and not a literal loop between particular designs and requirements.

Figure 4. Requirements types

You probably also have predictions about what will be valuable in future, even if the customers aren't asking for it now. Such things are valid "strategic requirements", and should be included, along with their value prediction. This prediction is likely to be volatile, however, and should be reviewed regularly, so that work on implementation stops if they turn out to be duff.

In summary, the requirements describe our best idea of what the customer wants and values, stated in objective and measurable terms. They are the guiding light for everyone in the organization.

5. Planning to increase value

Once we have some requirements, the next step is to look at the current status of the software and compare it to the requirements. If you don't know the current status then it's both urgent and important that you develop ways of measuring how well you're meeting the critical requirements; you might be able to develop tests that do this, but some subjective measurements may also be needed.

You should assess your current status against the requirements even if you don't have any software yet. You might be surprised by the results. Developing a new product isn't a special case: you start with nothing and make changes to turn it into something. There's nothing fundamentally different about those changes and the changes you'll be making later to make your product meet changing requirements.

Assessing the current status should give you an idea of the value of the software to the customers. There are bound to be differences between the status and the requirements. Resources are limited and you can't hope to meet all the requirements at once. Pick out the most valuable requirements that you don't meet at the moment. Also, give high priority to the requirements that you're most uncertain about how to meet, as these are the things to get out of the way sooner rather than later [Gilb 1988, p98, §7.10].

Get these high priority requirements analyzed so that you have some idea of the amount of effort required to meet them. Take every opportunity you can to divide them down into more manageable (smaller) pieces of work, each of which can deliver some value, and put the most valuable pieces first.

The estimation of cost, combined with the estimation of value, gives you a cost benefit analysis.
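One crude way to turn these estimates into an ordering is benefit per unit of cost (the figures here are invented for illustration; real prioritization should also weigh uncertainty, as noted above):

```python
# (metric, estimated value, estimated cost) -- invented figures
candidates = [
    ("Tea frequency 25 cups/h", 3000, 1000),
    ("Operating hours 08:00-19:00", 2000, 4000),
    ("Tea temperature 80C", 1800, 500),
]

# Highest benefit per unit of cost first.
by_benefit = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
```

The top of the sorted list is where the next piece of work should come from.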

You are now in a position to plan the next few versions of the product using version planning, a method of project planning [SEI 2002-08a, p94].

A version is a point in the evolution of the specification of the product. A version specification describes the levels at which the requirements metrics are met. A version plan is a set of planned specifications of future versions. It, too, can be represented as a table. Each column is the specification of a version.
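A version plan of this shape can be held as a table whose rows are requirement metrics and whose columns are versions. An illustrative sketch (the metrics echo figure 3; the levels are invented):

```python
# Rows are requirement metrics; each column is one planned version.
version_plan = {
    "Tea frequency (cups/h)":  {"1.0": 10, "1.1": 20, "2.0": 25},
    "Tea temperature (deg C)": {"1.0": 70, "1.1": 80, "2.0": 80},
}

def specification(version: str) -> dict:
    """The specification of a version is a single column of the plan."""
    return {metric: levels[version] for metric, levels in version_plan.items()}

spec_11 = specification("1.1")  # what version 1.1 is planned to achieve
```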

Figure 5 shows a version plan, and how it relates to the points of development along a master codeline. The master codeline is evolving towards the requirements in stages, and each stage has a specification which is a column from the version plan.

Figure 5. The version plan

The important thing is to plan these versions to come out at fixed intervals. As mentioned above, I've found that a cycle time of three months works well for small and large projects alike.

The next version of the product should add as much value as possible given the resources available. The version after that should add more, and so on. Keep the next version within easy reach, and make the date of the next version as close as you can.

If there's work that seems too large to fit, or is too large to estimate, break it down. This may mean changing the design of the product, or choosing a longer path to eventual full implementation. It's worth doing that to make sure that the product is delivered and to cope with changes in requirements. More often than not, "full implementation" will be different anyway, because the requirements will change. Breaking down work like this will force you to use flexible and open-ended designs which will allow you to adapt to changing requirements in the future.

The level of detail at which you plan should vary. The next version will need a full specification against all the requirements, though for most requirements there'll be no change. The version after that probably only needs specifying against the product requirements. Further out, the specifications may only include strategic requirements which are projections of future markets.

The same kind of representation can be used for all these kinds of plans. Planning becomes a process of working out the details for a high level specification.

For large and complex products, strategic planning may specify products several years ahead.

Don't forget to expect the requirements to change after you've delivered the next version. Highly detailed planning of the version after that would therefore be wasted effort.

Developing the version plan should also yield design ideas, designs, estimates, project plans, and so on. These are outside the scope of this paper. What I will say is that all such things should be justified in terms of the version plan and the requirements. When the requirements, and therefore the version plan, are changed, this traceability will allow you to adapt quickly and efficiently.

One sign that this is working well is when you stop or scale down projects halfway through because their results are no longer valuable. There's no point finishing projects that won't produce any value, just because nobody can quite remember how or why they got started.

The task is now to control change to the software to evolve it towards the next version specification, increasing the value of the product. While you're getting there, gather requirements and plan future cycles, and so on for the lifetime of the product. It's important not to think about starts and finishes, only about continuous evolution and improvement of the product.

The rest of this paper is about achieving that control.

6. Continuous improvement

Figure 2 shows how each development cycle takes the product closer to the customer requirements, increasing its value. Figure 5 shows the same thing in a different way, with the master codeline of the product constantly evolving towards the customer requirements.

The key concept of this change management process is continuous improvement. Always, always approach the requirements. Never, ever allow a change that takes you further away. This means never allowing a change that dismantles a feature, or breaks a piece of code, even "temporarily". The master codeline must always be improving. In practical terms, this means keeping the master codeline ready to build and release to customers at any time, knowing that it will be more valuable to them than before.

Figure 6 is a diagram I like to draw to illustrate this simple idea. Every step must be a step up. Every change must be an improvement. Every baseline must be better than the previous baselines. Isn't this what configuration management is all about?

Figure 6. The improvement staircase

The staircase is a simplification. When requirements change, the definition of value changes, and so the product may well become less valuable. In addition, the value of requirements often decays over time. Sometimes it's all we can do to maintain the value of a product in a changing environment. But the principle of continuous improvement still applies: every change must be an improvement.

This makes it easy to meet delivery dates. You can always deliver. The only variation is in exactly what you deliver, and not in when you deliver it. The software from the master codeline is always ready to release. If you've been careful to focus efforts on the critical requirements, to employ flexible design, and to break the work down into small pieces, then you'll always be increasing the value of the product as much as possible by the time of the next delivery.

Most of the rest of this document explains how to achieve this happy state of affairs.

7. Change

There's only one reason to change the product: because it doesn't meet its requirements. To put it another way: to increase the value of the product.

Nobody should make any change which does not increase the value of the product, as defined by the requirements. If there's something you need to change that doesn't increase the value then either:

  1. the change isn't worthwhile, and shouldn't be made; or
  2. the requirements are incomplete, and are missing something that is valuable to the customer.

Either way, it's not acceptable to make a change which isn't justified by the requirements. In the latter case both the requirements and the product need to change.

If your model of value is good enough you can weigh the cost of making each change against the increase in product value.

There must be a stimulus for any change to the product. I'm going to call that stimulus an issue. An issue is some kind of record that the product doesn't do what the customer wants. This is either because it doesn't meet its specification (a defect) or because the specification doesn't say what the customer wants (an enhancement, or new requirement). There isn't a lot of difference as far as the customer is concerned: the product doesn't do what they want, and they'd like that changed.

Issues can come from many different sources. They can be generated by the planning process: every unmet requirement is an issue. They can be generated by testing and customer support: every defect is a requirement not to do something, and so is an issue as well.
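Generating issues from the planning process can be as simple as diffing measured status against required levels (a sketch with invented numbers, reusing the metrics of figure 3):

```python
# Measured status of the product, against the levels the requirements ask for.
status   = {"Tea frequency (cups/h)": 10, "Tea temperature (deg C)": 80}
required = {"Tea frequency (cups/h)": 25, "Tea temperature (deg C)": 80}

# Every unmet requirement is an issue.
issues = [metric for metric, level in required.items()
          if status.get(metric, 0) < level]
# issues == ["Tea frequency (cups/h)"]
```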

You may choose to represent some of your issues as records in some database system. Some of them might be noted in design documents or plans. You need to understand where they can come from and where they reside.

Treat "enhancement requests" and "defects" in the same way. Put them through the same process. There really isn't any difference between them. Both need analysis, design, estimation, development, and testing. New development and defect fixing need to be planned together and have the same kind of cost benefit analysis applied.

Analyzing an issue produces a proposal to change the product. The change should be developed on a branch of the product sources, and only when it has been checked should it be merged. In small projects, or for low-risk changes, a "branch" can simply be a checked out copy of the relevant source files, provided that the work can be checked. For any project involving more than a few people, and any change that might put the product value at risk, a real branch should be made. If your source control system won't allow you to make many branches, throw it away and get one that will. Branches allow teams to work together on a problem, and isolate their work so that the product is at no risk until a genuinely improving change to the product is ready.

Figure 7. Change branching

Checking is a vital step. The purpose of checking is to ensure that (a) the change addresses the issue, solving the problem or meeting the requirement, and (b) that the change is indeed an improvement to the product, as measured against the requirements as a whole. Checking is when senior engineers can examine a change to make sure it maintains the product integrity (which should be a requirement, if you care about it). It's an excellent time to apply verification steps such as peer review [Gilb 1993], and of course, testing.

8. Incremental development

When I present this process to organizations, I'm often asked how developers are supposed to make changes that take longer than three months. My answer is always "don't do that".

The value of a change that takes a long time to develop isn't realized until the end. Often, the requirements change and the value of the change is diluted, so the change is worth a lot less by the time you get it finished. This is a major risk. It's basically the same as "big bang delivery" at a smaller scale.

It's much less risky to develop incremental improvements, each of which has independent value, even if the total amount of work required appears greater. In fact, it's likely that the requirements will shift and the later stages will be different than you imagined anyway. You can't afford changes that take a long time to develop when requirements are changing rapidly. And requirements are changing rapidly even in companies which are planning products several years ahead.

Another reason to avoid long changes is that they result in big changes to the product sources. Bigger changes introduce larger numbers of defects. It's much harder to verify big changes, and so they inevitably decrease product value, often more than they increase it by solving whatever problem they originally set out to solve.

As mentioned in section 3, software development is a highly variable process. Long changes have even greater variance than short ones. The longer something takes, the less we can predict when it will be complete. Meanwhile, the customer wants their problems solved, and solutions delivered predictably. Shorter changes control the variability.

Evolutionary delivery, then, can have a significant impact on the way you design your product, as well as on the way you plan, develop, and deliver it.

Is it always possible to break down a big change into smaller increments? In my experience, yes, it is, though it sometimes takes some creative lateral thinking [Gilb 1988, p106, §7.13.1].

9. Versions and releases

When a customer finds a defect in the product they usually want a fix as soon as possible. They also want a fix that isn't risky, and that means one that introduces minimal change to the product specification.

The "quick fix" solution to a problem is often not the right solution for the product in the long term. Quick fixes often need to go in without a great deal of thought for consistency with the overall product direction. It's important to separate these fixes from the proper solutions to problems, so that the product doesn't degrade into a pile of hacks.

We can quickly resolve a customer's problem by patching the release that they already have. This means making the smallest and quickest change that will fix their problem. This is better than trying to ship them something built from the latest master sources because that may have changed in other ways which will cause the customer problems. Shipping the latest master sources is also risky. They might contain changes that are incompatible with the customer's environment. They also don't have a known specification (so you can't tell the customer what they're getting) and can't easily be maintained later. They probably haven't been thoroughly tested since the last version. Patching a stable release gives quicker and higher quality results. This is the main motivation for version branches (see figure 8).

Figure 8. Version branches

A version branch is created by taking a branch of the master sources when they are believed to meet the version specification. (Or, sometimes, when the deadline has been reached and something has to go out even if it doesn't quite meet the planned specification.) The sources on the version branch are called version sources.

Releases of a version of a product are created from fixed points on the version sources (see figure 8). Releases of the product are only ever made from the sources on the version branch. Source control labels are used to mark the sources from which a release was made, so that it can be reproduced. In addition, the product image (the thing distributed to the customers) is also archived.

Releases never change. The exact content of a release is committed at the point at which the source control label for that release is created. Any problems with the release must be solved in the next release.

Not all releases go to customers. Releases can be created for internal use, as a way of assessing what the quality of the product would be. This is especially true of the first release on the version branch, in which testing sometimes reveals defects that will need to be patched on the version branch.

10. Fixing defects

Only changes that fix defects are made on the version branch. A defect is a failure of a release to meet its version's specification. This definition covers most things commonly called "bugs", most of which cause the product to fail to carry out one of its functions. In a sense, the version branch evolves towards its specification in the same way that the master sources evolve towards the requirements (see figure 9).

Figure 9. Fixing defects

Most importantly, the version sources are a known quantity. Products built from them have been thoroughly tested and released, and so minor changes should produce predictable results. For this reason, it's important that they aren't changed very much, and any fixes done should be as small as possible to solve the specific problem that the customer has. The aim is to get the problem fixed quickly, but without introducing other problems.

Of course, if a defect is found in a version it probably exists in the master sources as well, and needs to be fixed there too (see figure 9). The fix made in the master sources should be a more carefully planned change. Above all, it must be maintainable: properly documented and respecting the architecture of the product. This change has to last indefinitely, whereas the change on the version branch only has to last until the customer upgrades to the next version.
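The two fixes are separate commits on separate branches, not a merge of one into the other, because the minimal patch and the planned change are deliberately different. A hedged Git sketch (again an assumed tool; the branch name, file name, and fix contents are hypothetical):

```shell
rm -rf /tmp/fix-demo && mkdir /tmp/fix-demo && cd /tmp/fix-demo
git init -q .
G() { git -c user.name=dev -c user.email=dev@example.com "$@"; }
echo 'buggy code' > module.c
G add module.c
G commit -q -m "master sources at the 1.0 branch point"
git branch version/1.0

# Bandage: the smallest change that fixes the customer's problem,
# made on the version branch.
git checkout -q version/1.0
echo 'minimal patch' > module.c
G commit -q -am "minimal fix for the defect against the 1.0 spec"

# Cure: a separately planned, maintainable change on the master sources.
git checkout -q -          # back to the mainline
echo 'redesigned code' > module.c
G commit -q -am "properly documented fix in the master sources"
```

Both branches end up fixed, but each carries the change appropriate to its lifespan.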

This apparent duplication of effort shouldn't be seen as waste but as opportunity. Most of the work should go into analysing the issue to decide what should be done. A bandage is then applied to the version sources to fix the immediate problem, while a complete cure is effected in the masters. Sometimes the quickest "fix" is simply to describe a workaround to the customer.

11. Conclusion

In this paper, I've described some of the best methods I've found for managing software product development, and how these relate to the Capability Maturity Model for Software Engineering [SEI 2002-08a].

There's a lot I haven't covered: testing, peer review, change control boards, tools, implementation details, and so on. Many of these things are specific to an organization or project. So instead I've tried to cover the important principles that will create productive teams and valuable products.

There's a lot I haven't said about how this method affects software design, the working environment, and developer attitudes. There are also many small lessons we've learned over the years and incorporated into our process handbooks. Many of these are specific to our organization and the problems of our software. The point is that the organization is learning, using an information system and a defined software process that can change and evolve just like the software itself. The process, like the product, is never finished until it is dead and its successors have moved on.

A. References

[Gilb 1988] Principles of Software Engineering Management; Tom Gilb; Addison-Wesley; 1988; ISBN 0-201-19246-2. Dense with ideas and difficult to absorb, but with excellent content, and a wealth of practical experience implementing evolutionary delivery.
[Gilb 1993] Software Inspection; Tom Gilb, Dorothy Graham; Addison-Wesley; 1993; ISBN 0-201-63181-4. The best and most powerful form of review I have come across. I highly recommend it, or something like it, as your implementation of peer review. It should be applied to high level documents as well as designs and code.
[McGregor 1960] The Human Side of Enterprise; Douglas McGregor; McGraw Hill; 1960; ISBN 0-070-45098-6. The origin of "Theory X" and "Theory Y" descriptions of management, with Theory Y emphasizing sharing of objectives and responsibility, rather than issuing of commands, in order to get the best out of employees.
[RB 1999-05-20] Product Quality through Change Management; Richard Brooksby; Ravenbrook Limited; 1999-05-20; <doc/1999/05/20/pqtcm/>. Covers more ground than this paper, but in less depth. Contains implementation details suitable for smaller organizations.
[Reinertsen 1997] Managing the Design Factory; Donald G. Reinertsen; The Free Press; 1997; ISBN 0-684-83991-1. Not really a book about software management, which makes it all the more valuable. Excellent insights into the human and economic aspects of design.
[SEI 2002-08a] CMMI for Software Engineering (CMMI-SW, V1.1): Staged Representation; CMMI Product Team; Software Engineering Institute; 2002-08; CMU/SEI-2002-TR-029, ESC-TR-2002-029; <publications/documents/02.reports/02tr029.html>. An excellent reference and roadmap for process development. Far superior to its predecessor, the CMM 1.1. Should never be treated as an end in itself.

B. Document History

2003-03-09 RB Talk outline based on previous notes, developed between Southampton and Cambridge.
2003-03-10 RB Completed some text in whole outline. Wrote conclusions.
2003-03-11 RB Created HTML document from outline.
2003-04-22 RB Redrafted entire text from outline. Added figures.
2003-04-23 RB Merged diagrams into single OmniGraffle Pro document that can also be used as a presentation to go with the paper. Regenerated diagrams. Added requirements table example. Fixed up cross references. Added contents. Fixed up styles.
2003-04-24 RB Worked in paragraphs about the "cost of delay" and "strategic requirements". Removed obsolete section on testing. Improved conclusions. Added requirements flow and change branch diagrams. Updated after review by NDL. Incorporated original footnotes from PQTCM. Added comments on references.
2003-07-02 RB Fixed minor HTML error.
2003-07-09 RB Updated to use external common Ravenbrook white paper stylesheet.
2003-07-18 RB Fixed date of [Gilb 1993]. Clarified comments about [Gilb 1988].

C. Copyright and Licence

Copyright © 2003 Ravenbrook Limited. This document is provided "as is", without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this document. You may make and distribute verbatim copies of this document provided that you do not charge a fee for this document or for its distribution.

$Id: // $