STEPS TO DEFECT TRACKING INTEGRATION

Richard Brooksby, Ravenbrook Limited, 2000-02-02


1. INTRODUCTION

This document lays out the steps involved in the Defect Tracking Integration Project for Perforce Software, which has come about as a result of recommendations in "Options for Defect Tracking" [RB 2000-01-24].

The purpose of this document is to allow the work to be planned and estimated, so that the plan can be tracked, and the project will produce good quality results on time. Sounds easy, doesn't it.

I firmly believe that simply throwing an engineer at this problem will not yield good results. The trick is to apply an appropriate amount of process to guide the work along without imposing too much of an overhead. This document recommends steps which I believe Perforce should carry out. In my experience these common sense steps yield good results. The language may seem daunting to a small software company, but everything here boils down to common sense engineering.

This document is intended for Perforce Engineering Management, but could be made available to all staff.

This document is not confidential.


2. OVERVIEW

2.1. Evolutionary Planning

I recommend evolutionary planning for software projects. For a full argument about why this is a good idea, see "Product Quality through Change Management" [RB 1999-05-20, section 4].

For this project I recommend that the first version be made available by the end of the year. This means making a cycle through all the steps (section 2.2) in about ten months. This should give enough time to meet the primary requirements. (The requirements met by the first version should be varied as the project moves forward, not the date. Do not attempt to "do it until done".) A beta program can then take place at the end of 2000. I recommend further versions every two to three months.
Note that a key principle required for evolutionary planning is that all solutions chosen should be as general, flexible, and adaptable as possible within the constraints of their requirements. A general standard database interface to Perforce is an example of such a solution, though it needs analyzing against actual requirements when they are discovered (section 4).

2.2. The Steps

These are the basic steps that this project will go through. Note that these steps often overlap a great deal, and there's usually iteration and feedback between them. It's unnecessary and often harmful to try to complete one stage before beginning the next, as any experienced engineer will tell you.

1. Define the goal.
2. Gather requirements.
3. Define the architecture.
4. Detailed planning and design.
5. Implementation.
6. Testing and debugging.

Effort usually divides up as follows: one third in steps 1 to 4, one sixth in step 5, and one half in step 6 [Brooks 1995].


3. DEFINING THE GOAL

The purpose of goal definition is to make it clear what you're trying to achieve. All further decisions made in the project must be justifiable in terms of the goal. That's how you make project decisions.

The goal must answer these questions:

1. Why is Perforce Software integrating with defect tracking software?
2. What is integration supposed to achieve for Perforce Software?
3. How will we know when we've achieved it?

These questions are partially answered by my report "Options for Defect Tracking" [RB 2000-01-24]. But I believe the answers need further work. Here are the goals I think you have, based on my research in the past two weeks at Perforce Software. These should be carefully reviewed by Perforce. Getting these wrong will cost you dearly.

1. To meet customers' demands for integration. This should decrease pressure on Perforce from customers for integration. We'll know we've achieved this when customers stop asking for integration.

2. To contribute to Perforce's financial success. This should increase Perforce's sales. Perforce should notice an increase in rate of sale as a result of launching integrations. Even better if you can measure the reasons for purchase and notice that integration is one of them.

3. To make Perforce suitable for more organizations. This will contribute to Perforce's goal of becoming the dominant and standard software configuration management system. Perforce should notice a broadening of its customer base.


4. GATHERING THE REQUIREMENTS

4.1. What are Requirements?

The requirements define clearly, unambiguously, and measurably what will achieve the goal. The idea is that if a solution satisfies the requirements then the goal will be achieved. Quality is defined by meeting requirements. Acceptance tests are designed from the requirements. Architecture, designs, and code are reviewed against the requirements.

4.2. Classes of Requirement

The first project cycle should concentrate on the critical requirements. A critical requirement is one which, if not met, causes the project to fail to meet its goals. Meeting all the critical requirements in the first pass is a very good thing.

Other classes of requirement are "essential", meaning that they are needed but negotiable; "optional", meaning that they would add value but could be dropped without negotiation; and "nice", meaning that they should only be met if they happen to drop out nicely from work on some other requirement.

4.3. Customer Requirements

Critical requirements should be gathered by interviewing existing customers. There are four classes of customer we should interview:

1. Some high end customers (perhaps from the set SAP, Lycos, Google, Palm, Dolby, Adobe, WebLogic, WebTV), as these are the kinds of customers that you want to encourage.

2. Some customers who are representative of the bulk of your sales.

3. People who've already done integrations of some sort, in order to understand the requirements that they were trying to meet, and perhaps gain access to their work.

4. A couple of wacky outliers, so that you can understand the broad range of the requirements. Outliers are a good source of "nice" requirements (section 4.2) which can improve the choice of architecture.

Some of the questions we need to answer to get the critical requirements are:

1. What does "integrated" mean to you? What do you want from integrated SCM and defect tracking? Why do you want them integrated? What problems does this solve for you?

2. Do you want a complete integration or just an interface to make your own integration easier? Can our integration be flexible enough to meet your needs? What kind of interface?

3. With which systems do you need Perforce integrated? In which environments?

The set of questions can be developed further in preparing for the interviews, and also tends to develop as you work through them.

4.5. Technical Requirements

A second set of requirements, the technical requirements, will need to be discovered by studying the defect tracking systems with which we are required to integrate. These will mainly answer the question of what protocols and interfaces are needed to integrate. We may be able to gain co-operation from defect tracking system vendors at this stage.

4.6. Project Requirements

Last but by no means least, Perforce needs to define various project resource requirements. What resources will be available for the project? What's the budget? What is the deadline for delivery?


5. DEFINING THE ARCHITECTURE

The architecture is a document that describes how the system will work and meet the requirements. The architecture should attempt to meet all the requirements, even those which will not be met by the first version. It needs to anticipate future requirements.
It needs to be as flexible, open-ended, general, and adaptable as possible so that it can anticipate unknown future requirements and cope with changing requirements [RB 1999-05-20].

A possible architecture for this project is to add an SQL interface to Perforce. The SQL interface would give a simplified relational view of Perforce's internal database, rather than direct access, in order to allow for future internal development without change to the interface specification. Then, on the outside, glue would be created which interfaced from the SQL interface to the databases used by the defect tracking tools. Perforce would create glue for some popular systems, but publish the interface so that others could develop their own.

This architecture, and any other proposals, will need to be reviewed against the requirements (section 4).


6. DETAILED PLANNING AND DESIGN

The next stage is to decide what will be done (which requirements will be met) for the first version, and perhaps one or two versions beyond that; then to plan out and estimate the work involved, and come up with more detailed designs for the parts of the architecture that are needed.

Quite often the designs at this stage aren't final, but are only what's needed to meet the critical requirements, with a view to replacement or modification later. This avoids wasting effort on non-critical requirements that are likely to change after the first version. This happens a lot.

This stage also overlaps a great deal with implementation, because detailed designs usually iterate with implementations, and sometimes prototypes are needed to work out a good design.

I don't think it would be valuable to try to predict the detailed plan for defect tracking integration at this stage.


7. IMPLEMENTATION

Coding, hacking, knocking out the lines. The fun part. I don't think it's worth trying to predict much about this, except to say that it's much easier to get this right if you've followed the steps above, and it takes a lot less time.
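It's far too early to write real code, but the shape of the architecture proposed in section 5 can be sketched in miniature. The sketch below uses Python and SQLite purely for illustration; every table and column name is invented (Perforce's internal metadata and any real defect tracker's schema look nothing like this). The point is only the key property: integrations talk to a published, simplified view, never to the internals, so the internals can change without breaking them.

```python
import sqlite3

# Toy model of the proposed architecture, entirely in memory.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-in for Perforce's internal database (not its real format).
cur.execute("CREATE TABLE p4_change (change_no INTEGER PRIMARY KEY,"
            " author TEXT, description TEXT)")
cur.execute("INSERT INTO p4_change VALUES (101, 'alice', 'Fix crash on startup')")

# The published interface: a simplified relational view, so the
# internals can change without breaking integrations built on top.
cur.execute("CREATE VIEW changelists AS"
            " SELECT change_no, author, description FROM p4_change")

# Glue: a defect tracker's table, linked to changelists by change number.
cur.execute("CREATE TABLE defect (id INTEGER PRIMARY KEY,"
            " title TEXT, fixed_by INTEGER)")
cur.execute("INSERT INTO defect VALUES (7, 'Crash on startup', 101)")

# An integrated query across the two systems, written only against the
# published view, never against p4_change directly.
rows = cur.execute(
    "SELECT defect.id, defect.title, changelists.author"
    " FROM defect JOIN changelists ON defect.fixed_by = changelists.change_no"
).fetchall()
print(rows)  # [(7, 'Crash on startup', 'alice')]
```

Note the design choice this illustrates: if the internal p4_change table is later restructured, only the view definition has to change; the glue, and any third-party integrations built on the published interface, keep working.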
8. TESTING AND DEBUGGING

Note that this means applying the tests, not developing them. Test development should begin immediately after the requirements are determined, and continue throughout the project. The most important tests initially are the acceptance tests, which are tied tightly to the requirements and make sure that they are met.

It might be worth considering an alpha release program (i.e. released before testing is complete) to selected customers. This would have to be handled carefully.


9. ESTIMATES AND SCHEDULE

9.1. Estimates

Here are some rough estimates for the effort (not time) involved in implementing these steps. It's hard to estimate well before the requirements are understood, so these are largely guesswork based on my understanding of the problem and the most likely solution we've imagined so far.

Goal definition and review                  1w
Identify requirements interviewees          1w
Set up requirements interviews              1w
Interviews and follow-up                    4w
Customer requirements definition            2w
Investigating defect tracking systems       4w
Architecture definition and review          4w
Detailed design and planning                8w
Implementation                              8w
Test design and development                 8w
Testing and debugging                      16w
Management, tracking, and oversight        16w
Beta program management                     4w

This puts the total at 77 weeks, which is roughly consistent with my estimates in "Options for Defect Tracking" [RB 2000-01-24]. There could easily be an error of plus or minus 50% in these estimates. The trick is to use evolutionary planning to bring the software out on time in any case, by trimming the requirements and keeping the design flexible.

9.2. Schedule

This is a rough schedule of when the project steps would occur, based on having one person working on project steps 1 to 3 initially, then bringing in development and testing from step 3 onwards.
2000-02  Goal definition and review
2000-03  Identify requirements interviewees; set up interviews
2000-04  Interviews; customer requirements definition; investigation of defect tracking systems
2000-05  Ditto
2000-06  Round off requirements; architecture definition; test design and implementation
2000-07  Architecture review; detailed design and planning; test design and implementation
2000-08  Design and implementation
2000-09  Ditto
2000-10  Design and implementation; testing and debugging; alpha program?
2000-11  Testing and debugging
2000-12  Testing and debugging
2001-01  Testing and debugging; beta program
2001-02  Testing and debugging; rollout!

9.3. Variations and Review

There's scope for early releases of particular integrations, but the plans for these need to be worked out once the requirements are better understood. There's a lot of scope for other changes to the plan at that stage too, and I recommend a complete review of the plan at the end of 2000Q2, which is when the development really gets going.

9.4. Testing, Documentation, and Support

These deserve more than a small section at the end.

Acceptance test development should begin as soon as the requirements are known. Acceptance tests are the black box tests that determine whether the requirements are met. A lot of these won't be programs. White box design coverage tests should be developed alongside the design and implementation stages. Regression tests should be developed to cover all the defects discovered during implementation, testing, and debugging, and of course after rollout.

Documentation should begin as soon as the specification of the first version is decided. It should be developed in parallel with the design and implementation. The user documentation is usually the most thorough statement of the specification.

Likely support problems can be anticipated as the architecture is decided, and this can be refined when the specification is known. Support effort should be put into working out what's likely to go wrong for customers and anticipating this in documentation or in materials used by the support staff.


10. CONCLUSION

This is a feasible project for Perforce Software which will take about ten months. This outline of a plan is by no means rigid, but I believe it is about right based on my experience and what I understand of Perforce Software and its product and people.


A. REFERENCES

[Brooks 1995] "The Mythical Man-Month: Essays on Software Engineering"; Frederick P. Brooks Jr.; Addison-Wesley; 1995; ISBN 0-201-83595-9. "The more I work in this field, the more of this book I discover to be true." -- Richard Brooksby. "You should read this book." -- Richard Brooksby.

[GDR 2000-05-11] "Defect Tracking Project Meeting, 2000-05-11"; Gareth Rees; Ravenbrook Limited; 2000-05-11.

[Gilb 1988] "Principles of Software Engineering Management"; Tom Gilb; Addison-Wesley; 1988; ISBN 0-201-19246-2.

[KPA 1.1] "Key Practices of the Capability Maturity Model, Version 1.1"; Mark C. Paulk, Charles V. Weber, Suzanne M. Garcia, Mary Beth Chrissis, Marilyn Bush; Software Engineering Institute; 1993-02; CMU/SEI-93-TR-025, ESC-TR-93-178.

[RB 1999-05-20] "Product Quality through Change Management"; Richard Brooksby; Perforce User Conference, 1999; 1999-05-20.

[RB 2000-01-24] "Options for Defect Tracking" (report for Perforce Software); Richard Brooksby; Ravenbrook Limited; 2000-01-24.


B. DOCUMENT HISTORY

2000-02-02  RB  First draft, partly based on discussions with Leslie Smith. Reviewed with Fanny Nudo.

2000-02-03  RB  Modified after discussion with Nigel Chanter. Tidied up. Reviewed with Leslie Smith. Added sections on testing, documentation, and support involvement. Delivered.

2000-05-25  RB  Marked as "not confidential" as agreed with Perforce [GDR 2000-05-11, 5].


Copyright (C) 2000 Ravenbrook Limited. This document is provided "as is", without any express or implied warranty.
In no event will the authors be held liable for any damages arising from the use of this document. You may make and distribute verbatim copies of this document provided that you do not charge a fee for this document or for its distribution.

$Id: //info.ravenbrook.com/project/p4dti/doc/2000-02-02/steps/index.txt#1 $