Ravenbrook / Projects / Perforce Defect Tracking Integration / Project Documents

Perforce Defect Tracking Integration Project


Analysis of architectures for defect tracking integration

Gareth Rees, Ravenbrook Limited, 2000-05-30

1. Introduction

This document analyses selected architecture proposals [GDR 2000-05-08] for defect tracking integration against the requirements listed in [GDR 2000-05-24] (and originally derived from [RB 2000-05-05]). The proposals are analysed in some detail [2], and then the analysis is summarized in a table [3].

The purpose of this document is to understand the implications of each architecture proposal (in terms of whether and how it can be made to meet each requirement) so that the best architecture can be chosen for the project.

The intended readership is Perforce senior technical staff and anyone working on architecture evaluation and on the design of the defect tracking integration.

This document is not confidential.

The project review meeting agreed on three proposals to concentrate on [GDR 2000-05-11, 3.8]: the replication [GDR 2000-05-08, 3.2], union [GDR 2000-05-08, 3.3] and single-database [GDR 2000-05-08, 3.4] architectures. These proposals are analysed here. The tracker-client architecture [GDR 2000-05-08, 3.1] is also analysed, because it is planned to be delivered as part of whichever architecture is chosen.

The analysis uses the impact estimation technique described in [Gilb 1988, 11]. Each proposed architecture is evaluated against each requirement, to derive an estimate of whether (and how well) the solution meets that requirement, and to generate design decisions that will be needed to meet the requirement. When an architecture is chosen, the design decisions will be used as the basis for the project design.

Estimates of effort use Brooks' rule of thumb [Brooks 1995, page 20]: 33% of effort is planning, 17% coding, 25% unit and integration testing, 25% system testing. Since I generally account for planning and design separately from development, the development estimates should be understood to be 25% coding and 75% testing.

The most significant requirements are 1, 21, 76 and 77.

2. Analysis

2.1 Requirement The defect tracker state must be consistent with the state of the product sources.
2.1.1 Repl.

The replication architecture achieves this by replicating defect-related data from Perforce's database to the defect tracker's database and vice versa. The data to be replicated includes Perforce jobs (corresponding to defect tracking issues) at the very least, and may also include users, groups, and permissions (2.56.1), changelists and fixes (2.5.1), and a proposed job/filespec relation (2.43.1).

Fields are added to the replicated relations to help the replication daemon do the right thing. Fields are added to the jobs relation (a) to indicate whether the job is to be replicated and, if so, to which defect tracker (2.97.1); and (b) to indicate to which defect tracking issue the job corresponds.

The replication daemon must distinguish between changes made by users of either system and changes made by the replication daemon itself in replicating an entity (otherwise the replication daemon may replicate its own changes). There may therefore be extra fields added to the jobs relation to indicate whether each job has been replicated or not. (The same remarks apply to the users, groups, permissions and job/filespec relations, but probably not to the changelist and fixes relations, since these are replicated in one direction only.)
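To make this concrete, the following sketch shows one way a replication daemon might use such extra fields to avoid replicating its own changes. This is illustrative only: the field names ("modified_by", "replicated") and the record format are assumptions for the purposes of the example, not part of any proposal.

```python
# Illustrative sketch of echo suppression in a replication daemon.
# The field names and record format here are hypothetical.

DAEMON_USER = "replicator"  # the user identity the daemon writes under

def changes_to_replicate(jobs):
    """Select jobs changed by ordinary users and not yet replicated,
    skipping changes the daemon itself made (which would otherwise echo
    back and forth between the two databases)."""
    return [job for job in jobs
            if job.get("modified_by") != DAEMON_USER
            and not job.get("replicated", False)]

jobs = [
    {"job": "job000001", "modified_by": "alice", "replicated": False},
    {"job": "job000002", "modified_by": DAEMON_USER, "replicated": True},
]
print([j["job"] for j in changes_to_replicate(jobs)])  # ['job000001']
```

After replicating job000001 the daemon would set its "replicated" flag, so the next pass finds nothing to do.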

Race conditions mean that the databases may become inconsistent. The replication daemon quickly detects each inconsistency and takes appropriate action [GDR 2000-05-08, 3.2.4].

Estimating the rate of inconsistencies

To estimate how much inconsistency there will be, and therefore how often an administrator may be required to make an intervention, I will build a model of defect resolution activity that predicts frequency of inconsistency as a function of database size and load.

An inconsistency arises when an entity is changed simultaneously in both the defect tracker and in Perforce, where "simultaneously" means "within the latency of the replication daemon". For example, a manager might reassign a task in the defect tracker while, at the same time, the original developer submits a change through Perforce that should no longer have been permitted.

Suppose there are d active defect records in the system and u users of the integrated system (half using Perforce, half using the defect tracker), and that each user makes c changes each working day, distributed uniformly across the active defects and throughout 8 working hours. Let the latency of the replication daemon be l seconds. Then there are 28800/l latency periods in the 8 working hours (since 8 hours is 28800 seconds).

Hence there are 28800d/l combinations of latency period with defect record, and in cu/2 of them (the cu/2 changes made each day in the defect tracker) someone is changing that record in the defect tracker at that time. So the chance of each change in Perforce hitting one of those combinations is cul/57600d, and the expected number of inconsistencies created in a day is

c²u²l/115200d

Here are some predictions made by this simple model:

Scenario     c      d      l      u    Expected inconsistencies per day
a           10    100    300     10    0.26
b           10    100      5     10    4.3 × 10⁻³
c           10    100      1     10    8.7 × 10⁻⁴
d           40   1000    300    100    42
e           40   1000      5    100    0.69
f           40   1000      1    100    0.14

(c = changes per day per developer; d = active defects; l = latency of the replication daemon in seconds; u = users.)
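As a check on the arithmetic, the model can be expressed as a short calculation. The function below simply evaluates c²u²l/115200d and reproduces the table; it is a sketch of the model, not project code.

```python
def expected_inconsistencies(c, u, l, d, working_seconds=28800):
    """Expected replication inconsistencies per working day.

    c -- changes per user per working day
    u -- users of the integrated system (half on each side)
    l -- latency of the replication daemon, in seconds
    d -- active defect records
    """
    # (cu/2) changes per day on each side; each Perforce change collides
    # with probability (cu/2) / (working_seconds * d / l), giving
    # (cu/2)^2 * l / (working_seconds * d) = c^2 u^2 l / 115200 d.
    return (c * u / 2) ** 2 * l / (working_seconds * d)

# Scenario a from the table: c=10, u=10, l=300, d=100.
print(round(expected_inconsistencies(10, 10, 300, 100), 2))  # 0.26
```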

Scenarios a to c represent a medium-sized organization with 10 people involved in defect resolution, each making 10 changes per day to 100 active defects. In scenario a, the replication daemon polls the servers every 5 minutes for new changes to replicate, so the latency is 300 seconds. In scenario b, the replication daemon is triggered by the servers whenever a change happens and completes the replication in 5 seconds; in scenario c, it completes the replication in 1 second.

Scenarios d to f represent a large organization with 100 people involved in defect resolution, each making 40 changes per day to 1000 active defects. The latencies are as described above for scenarios a to c.

It is clear that inconsistency will be a serious problem for large organizations if replication has a high latency.

The activity in the system is likely to be clustered at certain times, for example when a manager starts working on the defect tracking system and asks everyone to "bring things up to date" and "where is such and such?", or when there are scheduled builds and everyone races to submit their changes in time for the build. This factor may make the rate of inconsistencies higher than the model suggests.

On the other hand, people do not work in the random fashion suggested by the model. Any sort of workflow discipline means that only one person is working on a defect at a time. It is only the manager who is outside the workflow who can create inconsistencies. This factor may make the rate of inconsistencies lower than the model suggests.

So I need to work out the common scenarios by which inconsistencies are introduced and make a more detailed model that incorporates these scenarios.

It appears from the simple calculations above that it will not be acceptable for the replication daemon to poll the servers. To make the latency small (and hence make the rate of inconsistencies small) the servers must trigger the replication daemon. One consequence of this is that the Perforce server needs triggers that run whenever any defect-related data changes. This is why triggers are included in the consideration of requirements 21, 76 and 77.

The predictions of the model suggest that the replication daemon will need performance tuning when it is installed. The documentation must cover this.

Use with the tracker-client architecture

The replication architecture is used in parallel with the tracker-client architecture (2.39.1).

Consider the case where a developer using the defect tracker's interface simultaneously submits a change and completes a task. From the point of view of the defect tracker this is two operations: submit the change (using the Perforce client API) and complete the task (using its own API and eventually its own database). The defect tracker waits for the submit operation to complete before it can close the task. But there is a race condition: the replication daemon may replicate the closing of the task to the defect tracker's database before the defect tracker's client can close the task. So the client probably gets a "cannot close task: task already closed" error message. This is annoying for the user, and it can also lead to incorrect database state: extra information about the closed task that was available to the defect tracker's client, but not to Perforce or the replication daemon, never reaches the defect tracker's database.

However, there is no problem if the defect tracker does not close the task via the submit operation. In this case the replication daemon replicates the association between the changelist and the task, but the task remains open for the defect tracker's client to close.

So we need to instruct developers of tracker-client integrations not to close tasks via the Perforce client interface if they plan to close them using the defect tracker's interface.

Bringing things up to date

If one server or the other is down, the replication architecture allows defect resolution activity to continue on the other server. If the network connection is down, the replication architecture allows defect resolution to continue in parallel in both systems. In these circumstances the replication daemon fails to replicate the changes (a policy option might determine whether it fails silently or alerts someone). But when the other server (or the connection) comes back up, the replication daemon takes all the changes that have taken place, replicates them, and notes inconsistencies for an administrator to fix.

2.1.2 Single

The single-database architecture achieves this by storing each defect-related entity only once, in the defect tracker's database.

The defect tracker makes queries and changes only to its own database. This means that it does not need a two-phase commit protocol to ensure consistency.

Perforce makes transactions that affect both databases, so it must use a two-phase commit protocol to make sure that the databases remain consistent. This has the consequence that transactions in Perforce which affect defect-tracking data (and this includes every submit) may be slow, because Perforce must lock the affected data in both databases (one possibly across a network connection) before it can start its transaction. There is also the risk of deadlock if (for example) the network connection to the defect tracker's database goes down while Perforce has some records locked.

If the defect tracker's database does not provide transaction support or record locking then it will not be possible for Perforce to do two-phase locking and thus maintain consistency when the defect tracker is updating its database simultaneously with Perforce.

The need for Perforce to implement a two-phase commit protocol to ensure consistency means that effort will be required to implement it (2.76.2).
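The two-phase commit protocol referred to here can be sketched as follows. This is a minimal illustration of the general protocol (prepare, then commit or abort), not a description of how Perforce would implement it; the class and method names are invented for the example.

```python
# Minimal two-phase commit sketch.  "Participant" stands for one of the
# databases (Perforce's or the defect tracker's); all names are illustrative.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self):
        # Phase 1: lock the affected records and vote on the transaction.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"   # Phase 2: make the change permanent.

    def abort(self):
        self.state = "aborted"     # Phase 2: release locks, discard change.

def two_phase_commit(participants):
    """Commit in both databases, or in neither."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False

dbs = [Participant("perforce"), Participant("tracker")]
print(two_phase_commit(dbs))  # True: both databases commit together
```

The cost noted above is visible in the sketch: nothing commits until every participant has locked and voted, so a slow or unreachable defect tracking database delays every Perforce submit.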

The single-database architecture is used in parallel with the tracker-client architecture (2.39.2). A similar consequence to that for the replication architecture follows: we instruct developers of tracker-client integrations not to close tasks via the Perforce client interface if they plan to close them using the defect tracker's interface. (The reason is slightly different: in this case there is no race condition to worry about because Perforce always updates the defect tracker's database).

Note that in the single-database architecture, defect resolution using Perforce cannot continue if the defect tracking database (or the network connection between them) is down.

2.1.3 Union

The union architecture achieves this by storing each defect-related entity only once.

The database unifier provides distributed transactions as part of its unified view of the two databases, so that the databases remain consistent. By this I mean that it offers transaction support for the combined databases - when a change affects both databases it either happens in both or doesn't happen at all. And other systems accessing data in either database while the transaction is being prepared see a consistent state, not a half-complete change. This means doing two-phase locking on the resources in both databases (with the same consequences as for the single-database architecture if the defect tracking database does not support transactions).

In the absence of transaction support in the defect tracker's database, the database unifier could attempt to enforce consistency by serializing all transactions, but this will result in a performance impact (how much?). It also won't work when there are other systems accessing the defect-tracking data in the database (unless these other systems also use the database unifier, but that is beyond the scope of this project).

The union architecture is used in parallel with the tracker-client architecture (to meet requirement 39). A similar consequence to that for the single-database architecture follows: we instruct developers of tracker-client integrations not to close tasks via the Perforce client interface if they plan to close them using the defect tracker's interface.

Note that in the union architecture, defect resolution using either Perforce or the defect tracking system cannot continue if the other system (or the network connection between them) is down.

2.1.4 Tracker

The tracker-client architecture cannot by itself keep the databases consistent, since changes to defect-tracking entities made through Perforce's interface do not cause corresponding changes in the defect tracker's database. Recall that there is a requirement that developers can use the Perforce interface (2.41.4). For example, a developer might check in work that completes a task, but this does not record anywhere that the task is completed, so the database is inconsistent until the developer goes to the defect tracker's interface and marks the task as completed.

The tracker-client architecture will be used in parallel with whichever other architecture is chosen. See the discussion for the other architectures (2.1.1, 2.1.2, 2.1.3) for the consequences.

   
2.2 Requirement The defect tracking integration must make the jobs of the developers and managers easier (i.e. make it easier for them to produce a quality product etc.).
Analysis See 2.3, 2.5, 2.38, 2.41 and 2.60.
   
2.3 Requirement It must be easy to discover why the product sources are the way they are, and why they have changed, in terms of the customer requirements.
2.3.1 Repl.

The fixes relation in Perforce associates a change with a job. So to discover why a particular file is the way it is using the Perforce interface, a user finds the changelists that contributed to the file, queries the fixes relation to discover the jobs which caused the changes to be made, and looks at the job descriptions to discover the defect or customer requirement that motivated the changes.

In order for this to be reliable, developers need to work in a disciplined fashion: they need to record the reason for each change. The integration must make it easy for developers to work in this disciplined way.

A field is added to the fixes relation to record the nature of the association between the job and the change (for example, the change solves the defect, or contributes to the solution, or is a regression test for the defect, or backs out an incorrect solution, and so on). The field takes user defined values.

2.3.2 Single See 2.3.1.
2.3.3 Union See 2.3.1.
2.3.4 Tracker With the tracker-client architecture, the state of the product sources is not consistent with the information in the defect tracking system (2.1.4), so it is not possible to obtain accurate information about why they are the way they are.
   
2.4 Requirement The interface that allows Perforce to be integrated with defect tracking systems must be public, documented, and maintained.
Analysis See 2.23 (public), 2.22 (documented), and 2.30 (maintained).
   
2.5 Requirement The integration must provide the ability to ask questions involving both the defect tracking system and the SCM system.
2.5.1 Repl.

In the replication architecture, SCM data is replicated into the defect tracker's database, and defect tracking data is replicated into Perforce's database. This means that queries involving both systems can be made by making ordinary SQL queries of the defect tracker's database. The Perforce client provides interfaces for making these queries of Perforce's database.

If queries combining defect tracking and SCM data are to be made by making SQL queries of the defect tracker's database, it will be necessary to replicate all relevant relations from Perforce to the defect tracker's database. This includes the changes and fixes relations, and the proposed relation between jobs and filespecs. (Note that as long as it is possible to create tables in the defect tracker's database, it should be easy to replicate changes and fixes, since they only get added to, never updated.)

Note that because there is a delay after making a change to one database before the change is replicated to the other database, a query made of one database during this period of delay will get an incorrect answer. There may therefore be a need to prevent further changes to the database and wait for existing changes to be replicated before making a query. (Alternatively, there may be some way of telling whether the result of a query was consistent, that is, did not fall into the period of delay before replication of some change.)

2.5.2 Single In the single-database architecture, some SCM data (for example, Perforce jobs) is stored in the defect tracker's database. Some queries involving both systems can be made by making ordinary SQL queries of the defect tracker's database. However, queries involving SCM data that is not stored in the defect tracker's database (for example, changelists and fixes) will involve making separate queries of the defect tracker's database and Perforce and then writing a program to combine the results. This may be sufficiently difficult that it would be easier to copy all relevant data into the defect tracker's database before issuing the query - but this is what the replication architecture does (2.5.1).
2.5.3 Union In the union architecture, the database unifier provides a unified view of both databases, so either system can issue SQL queries on data from both systems.
2.5.4 Tracker

In the tracker-client architecture, the tracker client can query the Perforce server, and combine the data returned with data from its own database. However, this is much more complex than making SQL queries of a single database, and this complexity is probably prohibitive (one can imagine that the easiest way to do this is for the developer of queries to replicate the changes and fixes relations into the defect tracker's database and then make SQL queries of that database, but this is just what the replication architecture does; see 2.5.1).

It should also be noted that the state of the product sources is not consistent with the information in the defect tracking system (2.1.4), so queries of both systems will produce wrong answers.

   
2.6 Requirement Perforce should be integrated with TeamTrack by TeamShare.
Analysis See [GDR 2000-06-30, 3].
2.6.4 Tracker

The tracker-client architecture already works out of the box with TeamTrack and Perforce. TeamTrack supports the Microsoft Common Source Code Control API [Shaw 2000-06-05], and Perforce supports the SCC API [Perforce TN 34].

Note that Microsoft's SCC API is not an open standard [Bannister], so using it exclusively violates our requirement 23.

   
2.7 Requirement The integration should support all of the operations currently available in the TeamTrack integration with other SCM systems.
2.7.1 Repl. This can be done using the tracker-client architecture (2.7.4).
2.7.2 Single This can be done using the tracker-client architecture (2.7.4).
2.7.3 Union This can be done using the tracker-client architecture (2.7.4).
2.7.4 Tracker The tracker-client architecture already works out of the box with TeamTrack and Perforce (2.6.4).
   
2.8 Requirement The integration should use the TeamTrack C++ interface to the TeamTrack database.
2.8.1 Repl. The TeamTrack C++ interface is used by the vendor-specific replication module (2.21) to query the defect tracker's database.
2.8.2 Single The TeamTrack C++ interface is used by the "Perforce server defect tracking database module" (2.21) to query the defect tracker's database.
2.8.3 Union The union architecture intercepts communication to the database at a lower level than the TeamTrack C++ interface, so cannot use the interface. There is therefore some risk that entities fetched from the defect tracker's database will not be seen consistently by the defect tracker and by Perforce. (For example, if the defect tracker's interface changes the meaning of values in some fields.)
2.8.4 Tracker The tracker-client architecture does not use the TeamTrack C++ interface.
   
2.9 Requirement Perforce should be integrated with DevTrack by TechExcel.
Analysis See 2.10, 2.11.
   
2.10 Requirement The integration should use the DevTrack COM interface to the DevTrack database.
2.10.1 Repl. The DevTrack COM interface is used by the vendor-specific replication module (2.21) to query the defect tracker's database.
2.10.2 Single The DevTrack COM interface is used by the "Perforce server defect tracking database module" (2.21) to query the defect tracker's database.
2.10.3 Union The union architecture intercepts communication to the database at a lower level than the DevTrack COM interface, so does not use the interface.
2.10.4 Tracker The tracker-client architecture does not use the DevTrack COM interface.
   
2.11 Requirement The integration should support all the operations currently available in the DevTrack integration with Visual SourceSafe (Attach, Detach, Check In, Check Out, Undo Check Out, Get, View Detail, Label, Directory).
2.11.1 Repl. These operations are provided using the tracker-client architecture (2.11.4).
2.11.2 Single These operations are provided using the tracker-client architecture (2.11.4).
2.11.3 Union These operations are provided using the tracker-client architecture (2.11.4).
2.11.4 Tracker

To support the attach and detach operations in an integrated way, Perforce supports a relation between jobs and files or filespecs (2.39).

The other operations should be straightforward to support, since Perforce supports the Microsoft SCC API which is used by DevTrack to integrate with Visual SourceSafe.

   
2.12 Requirement The integration might support versioning of documents in DevTrack's document management interface.
Analysis Not analysed (since we do not propose to integrate with DevTrack initially).
   
2.13 Requirement Perforce may be integrated with Remedy ARS by Remedy.
Analysis Not analysed for replication, single-database or union architectures (since we do not have any documentation for Remedy ARS's public API, if any).
2.13.4 Tracker

The tracker-client architecture may work out of the box in Remedy, because according to [Remedy 4.5 FAQ], Remedy supports the Microsoft Common Source Code Control API: "Remedy has built an interface to the SCC API so we can talk to these tools and allow Remedy application developers to save their applications with version numbers and check in and check out code."

Perforce supports the SCC API [Perforce TN 34]. However whether Perforce really works together with Remedy needs to be checked.

Note that Microsoft's SCC API is not an open standard [Bannister], so using it exclusively violates our requirement 23.

   
2.14 Requirement The integration should use the Remedy API.
2.14.1 Repl. The Remedy API is used by the vendor-specific replication module (2.21) to query the defect tracker's database.
2.14.2 Single The Remedy API is used by the "Perforce server defect tracking database module" (2.21) to query the defect tracker's database.
2.14.3 Union The union architecture intercepts communication to the database at a lower level than the API, so does not use the Remedy API.
2.14.4 Tracker The tracker-client architecture does not use the Remedy API.
   
2.15 Requirement Perforce may be integrated with TRACKWeb Defects by Soffront.
Analysis Not analysed (since we do not propose to integrate with TRACKWeb Defects initially).
   
2.16 Requirement The integration should support all of the operations currently available in the TRACKWeb Defects integration with other SCM systems.
Analysis Not analysed; see 2.15.
   
2.17 Requirement The integration should use COM interface to the TRACKWeb Defects database.
Analysis Not analysed; see 2.15.
   
2.18 Requirement Perforce might be integrated with Bugzilla.
Analysis See [GDR 2000-06-30, 3].
2.18.1 Repl. Bugzilla does not have a database interface. This means that the vendor-specific replication module talks to the MySQL database directly, using MySQL's own interface or protocol. This means accepting a large risk that any future change to Bugzilla will break the integration.
2.18.2 Single See 2.18.1.
2.18.3 Union

Bugzilla uses a Perl interface to talk to MySQL using MySQL's own network protocol. The database unifier must understand MySQL's protocol in order to intercept it.

2.18.4 Tracker We extend Bugzilla to provide a Perforce client interface.
   
2.19 Requirement Perforce might be integrated with Peregrine Systems' forthcoming defect tracking system.
Analysis Not analysed (since Peregrine Systems' defect tracking system is not available for analysis yet).
   
2.20 Requirement Non-Perforce people must be able to integrate Perforce SCM with defect tracking systems.
Analysis See 2.21.
   
2.21 Requirement The amount of effort required to extend the integration to work with a new defect tracking system must be less than 6 person-months, should be less than 2 person-months, may be less than 2 person-weeks, and might be one person-week.
2.21.1 Repl.

The replication daemon is the only part of the replication architecture which interfaces to the defect tracking system. It does so through the defect tracking vendor's database interface module [GDR 2000-05-08, figure 2]. The replication daemon should consist of a "vendor-independent replication module", which talks to the Perforce server, but which also uses the "replication interface" to talk to the "vendor-specific replication module" which talks to the defect tracking vendor's database interface module. Only the vendor-specific replication module is rewritten for each integration.

The developer of the integration learns enough about the integration kit and the tools it uses that they understand how the integration works and how to proceed. Estimated effort: 1 week. They also learn about the defect tracking system (since the requirement is that any third party can carry out the integration, not just people who are experienced with the defect tracker). Estimated effort: 2 weeks. (This learning will of course be interleaved with the steps outlined below.)

The developer decides how the relevant entities in Perforce's database (jobs, changelists, fixes, the proposed jobs/files relation) correspond to entities in the defect tracker's database. Estimated effort: 1 week.

For each corresponding entity, the developer determines which fields in one database map to which fields in the other, and writes code to transform the data in each corresponding field. I estimate 1 day per entity to write the conversion code and 3 days per entity to test it. So 16 days for this step.
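The per-entity conversion code described above might look something like the following sketch. The field names, value mappings and conversion functions are invented for illustration; each real vendor-specific replication module would define its own.

```python
# Sketch of per-entity field conversion in a vendor-specific replication
# module.  All field names and value mappings here are hypothetical.
FIELD_MAP = {
    # Perforce job field: (tracker field, conversion function)
    "Job":         ("issue_id", str.lower),
    "Status":      ("state",    {"open": "OPEN", "closed": "RESOLVED"}.get),
    "Description": ("summary",  str.strip),
}

def job_to_issue(job):
    """Convert a Perforce job record to a defect tracker issue record."""
    return {tracker_field: convert(job[p4_field])
            for p4_field, (tracker_field, convert) in FIELD_MAP.items()}

issue = job_to_issue({"Job": "JOB000001", "Status": "open",
                      "Description": " Crash on startup "})
print(issue)
# {'issue_id': 'job000001', 'state': 'OPEN', 'summary': 'Crash on startup'}
```

The table-driven form keeps the vendor-specific knowledge (which field maps to which, and how values are transformed) in one declarative place, which is what makes the per-entity estimate of a few days plausible.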

The vendor-independent replication module should notice and handle inconsistencies (by using the vendor-specific replication module to transform the corresponding entities, and then comparing the results). So no per-integration effort is needed here.

The developer of the integration also provides some way for the replication daemon to discover when the defect tracker's database changes. There are four ways of doing this:

  1. Database triggers in the defect tracker's database server. A trigger is written for each entity that may change. The trigger code uses COM (on Windows) or some protocol running over TCP/IP (otherwise) to talk to the replication daemon. Assuming that the database supports triggers that can make these external calls, and we supply the replication daemon with a module that allows it to respond to these triggers, I estimate 1 day per entity to write and 3 days to test each trigger, for 16 days effort.

  2. The replication daemon polls the defect tracker's database for changes. Estimated effort: 1 week to write, 3 weeks to test.

  3. The defect tracker's client notifies the replication daemon of changes. This will possibly involve adding notification code to every write to the database. The most effective way to do this is to add this code to the defect tracker's interface module. Estimated effort: 16 weeks.

  4. The defect tracker's client replicates changes itself, using the Perforce client API. This solution does not belong to the replication architecture, but to the variation of the tracker-client architecture [GDR 2000-05-08, 3.1.2].

Total effort required: from about 10 weeks to about 23 weeks.
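Of the four options above, option 2 (polling) is the simplest to sketch. The loop below polls a hypothetical tracker API for entities modified since the last pass; the function names and record format are assumptions made for the example.

```python
# Sketch of change detection by polling (option 2).  fetch_entities and
# the entity record format are hypothetical stand-ins for a tracker API.
def poll_once(fetch_entities, replicate, last_seen):
    """One polling pass: replicate entities modified since the last pass."""
    for entity in fetch_entities():
        if entity["modified"] > last_seen.get(entity["id"], 0):
            replicate(entity)
            last_seen[entity["id"]] = entity["modified"]

# A daemon would run this on a timer, e.g.:
#   while True: poll_once(fetch, replicate, last_seen); time.sleep(300)

replicated = []
entities = [{"id": "issue-1", "modified": 100}]
last_seen = {}
poll_once(lambda: entities, replicated.append, last_seen)
poll_once(lambda: entities, replicated.append, last_seen)  # nothing new
print(len(replicated))  # 1: the second pass replicates nothing
```

The sleep interval is exactly the latency l in the model of 2.1.1, which is why polling pushes the expected rate of inconsistencies up.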

2.21.2 Single

The developer learns about the integration kit and the defect tracker, as for the replication architecture. Estimated effort: 3 weeks.

A similar module to the one described for the replication architecture is required for the single-database architecture. This component is part of the Perforce server [GDR 2000-05-08, figure 5].

Since it is not possible to give third parties access to the Perforce server code, the Perforce server is rewritten so that it can be configured to use an open interface (the "Perforce server defect tracking database interface") by which it accesses and stores job-related data (see 2.76 for estimation of effort for this). This interface connects to a defect-tracker-specific module (the "Perforce server defect tracking database module") as outlined for the replication architecture. Such a module could be supplied as a shared library that is named when the Perforce server is started, or as a server that the Perforce server talks to across the network.

The effort required to rewrite this module for a new defect tracker is the same as required for the replication architecture: 1 week to design the correspondence between entities and 16 days to implement and test.

Note that the module as described runs on the same machine as the Perforce server, which may be a Unix system, while the defect tracker's interface module offers a COM interface which is only callable from a Windows system (is this true?). In this case we move the module to a Windows system and have the Perforce server talk to it across the network for each read and write. The networking support for this is delivered as part of the kit and need not be rewritten for each vendor.

Total effort required: about 7 weeks.

2.21.3 Union

I am assuming here that the database unifier intercepts the actual back-end database protocol used by the defect tracker (for example, ODBC, or SQL over TCP/IP, or some database-specific protocol such as MySQL's database protocol) rather than intercepting calls to the defect tracker's interface module. (The architecture proposal [GDR 2000-05-08, 3.3] is not clear about this.) If the unifier intercepts calls to the defect tracker's interface module it is hard to see how any significant reuse can be got out of this architecture, since a lot of the unifier is rewritten for each defect tracker.

Note that if all entities are stored in the defect tracker's database then there seems to be no point in considering the union architecture since it boils down to a complicated way to implement the single-database architecture. So some entities must be stored in Perforce's database for it to be worth considering this architecture at all (the fixes relation might be a natural candidate).

The developer learns about the integration kit and the defect tracker, as for the replication architecture. Estimated effort: 3 weeks.

I imagine the database unifier to be implemented using something like the following components:

  1. A fairly abstract internal model of how to represent database queries and database objects.

  2. A catalog detailing which entities live in which database and how the two views of them correspond.

  3. An entity mapper that can convert entities and fields from the two external formats to the internal format and vice versa.

  4. A front-end to the Perforce server which can convert Perforce entities to and from entities in the internal model.

  5. A back-end to the Perforce database which can convert entities in the internal model to and from Perforce entities.

  6. A front-end that understands the defect tracker's database protocol and which can convert database queries into the internal model and can convert results in the internal model into database replies. For example, this may involve a SQL parser.

  7. A back-end that understands the defect tracker's database protocol and which can convert queries in the internal model into database queries and can convert database replies into results in the internal model.

  8. Interfaces that cleanly separate the parts of the unifier that are private to Perforce from those that are public and configurable.

Components 1, 4, 5 and 8 are independent of the defect tracking system and can be supplied as part of the kit.

Components 2 and 3 are rewritten for each new defect tracking system. To write component 2, the developer analyses the entities and decides in which database they belong. This is substantially harder than deciding how entities correspond in the replication architecture, so I estimate 2 weeks. Component 3 is similar to the entity mapper for the replication architecture: estimated effort is 16 days.
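
As an illustration of components 2 and 3, the catalog and entity mapper might look something like this sketch in Python; the entity names, field names, and tracker columns are hypothetical:

```python
# Catalog (component 2): which database each entity lives in, and the
# correspondence between Perforce field names and tracker column names.
# All entity names, field names, and columns here are hypothetical.
CATALOG = {
    "job": {"home": "tracker",
            "fields": {"Job": "defect_id", "Status": "state", "User": "owner"}},
    "fix": {"home": "perforce",
            "fields": {"Job": "defect_id", "Change": "change_number"}},
}

def perforce_to_internal(entity, record):
    """Entity mapper (component 3): Perforce format -> internal model."""
    fields = CATALOG[entity]["fields"]
    return {"entity": entity,
            "home": CATALOG[entity]["home"],
            "values": {name: record[name] for name in fields}}

def internal_to_tracker(internal):
    """Entity mapper (component 3): internal model -> tracker columns."""
    fields = CATALOG[internal["entity"]]["fields"]
    return {fields[name]: value
            for name, value in internal["values"].items()}

job = {"Job": "job000123", "Status": "open", "User": "gdr"}
row = internal_to_tracker(perforce_to_internal("job", job))
# row == {"defect_id": "job000123", "state": "open", "owner": "gdr"}
```

The point of routing everything through the internal model (component 1) is that only the catalog and mapper change per defect tracker; the rest of the unifier is untouched.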

Components 6 and 7 have to be rewritten for each database protocol. So this won't have to be done for each defect tracking system (for example, many defect tracking systems use ODBC to talk to the database), but it will have to be done for many. In the simplest case, much of the work for another defect tracking system may be re-used (for example, an SQL parser may already be available). Estimated effort: 16 weeks. There may however be no appropriate code available for re-use, so the database interface may have to be written from scratch. Estimated effort: 32 weeks. In the worst case, the defect tracker does not even have a protocol for accessing its database (for example, there may be no database server as in the case of using dbm as the back end). In this case the defect tracker is rewritten to use a database protocol, which the database unifier will then map back to the dbm calls. This may take another 16 weeks.

Total effort required: from about 8 weeks to about 56 weeks.

2.21.4 Tracker

The time taken depends on how many features of the Perforce client are implemented. The simplest set of features that make sense is probably: browse the repository ("p4 files" and "p4 dirs"); add a file to the repository ("p4 add" + "p4 submit"); check out files for editing ("p4 edit") and check in files ("p4 submit"). I estimate 4 weeks to specify, write and test each of these features, for 16 weeks effort.
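
As a sketch of the scope involved, the four features reduce to a handful of p4 command lines; assuming a configured p4 command-line client, a defect tracking client might drive them like this (the filespecs are illustrative):

```python
import subprocess

# The four simplest tracker-client features expressed as p4 command
# lines (a hypothetical mapping; "%s" marks where a file name goes).
FEATURES = {
    "browse":    ["p4 files //depot/...", "p4 dirs //depot/*"],
    "add":       ["p4 add %s", "p4 submit"],
    "check out": ["p4 edit %s"],
    "check in":  ["p4 submit"],
}

def run(command, filename=""):
    """Run one p4 command line, substituting the file name if needed;
    assumes a configured p4 client on the PATH."""
    argv = (command % filename if "%s" in command else command).split()
    return subprocess.run(argv, capture_output=True, text=True).stdout
```

A real implementation in the defect tracking client would use the Perforce client API rather than shelling out, but the feature set is the same.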

Note that this implementation involves changes to the defect tracking client interface, so is done by the defect tracking vendor (except in the case of open source defect tracking systems like Bugzilla, but there is an issue of who will support any changes we make).

   
2.22 Requirement The integration design must be documented.
2.22.1 Repl.

This is straightforward. The replication daemon divides into a number of components with well-defined interfaces between them (2.21.1). The components and their interfaces can be easily documented.

The Perforce API is documented (2.22.4).

2.22.2 Single

This can be done. The single-database architecture divides into a number of components with well-defined interfaces between them (2.21.2). The components and their interfaces can be easily documented.

The single-database architecture relies on an interface internal to the Perforce server being made public. This will restrict Perforce's freedom to make server changes in the future.

The Perforce API is documented (2.22.4).

2.22.3 Union

The database unifier is very complex, and has a lot of database technology in it (2.21.3). Documenting the design will take a lot of effort.

The union architecture relies on an interface internal to the Perforce server (2.22.2).

The Perforce API is documented (2.22.4).

2.22.4 Tracker The Perforce API is documented. (This is straightforward, but needs to be done.)
   
2.23 Requirement The integration must be open.
2.23.1 Repl. This is straightforward since the replication daemon is decoupled from both the defect tracking system and the Perforce server.
2.23.2 Single There is an internal interface to the Perforce server that is public and documented. This places restrictions on future changes to the server (2.22.2).
2.23.3 Union There is an internal interface to the Perforce server that is public and documented. This places restrictions on future changes to the server (2.22.3).
2.23.4 Tracker The Perforce API is documented (2.22.4).
   
2.24 Requirement The integration must be implemented using readily available tools.
2.24.1 Repl. This is straightforward since the replication daemon is decoupled from both the defect tracking system and from Perforce. For example, the daemon could be written in a combination of Python and C/C++. It uses the defect trackers' APIs to query and manipulate defect tracking entities, and the Perforce API to query and manipulate Perforce entities. These APIs are readily available. See also 2.82.1.
2.24.2 Single The integration is largely in C/C++ for speed (since every Perforce transaction involving jobs has to go through the integration). It uses the defect trackers' APIs to manipulate defect tracking entities. These APIs are readily available.
2.24.3 Union The integration is largely in C/C++ for speed (since every transaction involving Perforce or the defect tracking system has to go through the database unifier). The integration uses specialized tools or interfaces for each database it is ported to. These tools may not be readily available.
2.24.4 Tracker The integration will use the Perforce client API, which is readily available. But the software belongs inside the defect tracker's interface, which cannot be readily modified.
   
2.25 Requirement The integration design must itself be modifiable at reasonable cost.
2.25.1 Repl. The replication daemon is decoupled from both the defect tracking system and the Perforce server, so it will be straightforward to modify.
2.25.2 Single The single-database architecture is decoupled from the defect tracking system but is strongly coupled to Perforce, so adaptations to the integration may require changes to Perforce.
2.25.3 Union The union architecture is strongly coupled to both Perforce and the defect tracking system, so adaptations to the integration may require changes to both Perforce and the defect tracking system.
2.25.4 Tracker The tracker-client architecture is decoupled from Perforce but is strongly coupled to the defect tracking system, so adaptations to the integration may require changes to the defect tracking system.
   
2.26 Requirement The amount of effort required to port the integration to a new operating system must be less than 6 person-months, should be less than 2 person-months, may be less than 2 person-weeks, and might be one person-week, provided the tools on which the integration is based are available and stable on that operating system.
2.26.1 Repl.

The replication architecture can be built with readily available and portable tools (2.24.1) so it should be straightforward to port.

Since the replication daemon accesses the defect tracking system's APIs (which may be platform-specific) the daemon runs on the same platform as the defect tracking system.

2.26.2 Single The integration probably uses C/C++ socket and thread code, which needs to be ported to each platform. This is expensive.
2.26.3 Union The database unifier uses C/C++ socket and thread code, which needs to be ported to each new platform. This is expensive. The unifier also accesses the database API for each database that is supported. These APIs might not be available on the platform we want to port the unifier to.
2.26.4 Tracker The tracker-client architecture uses the Perforce API, which is portable.
   
2.27 Requirement The integration must be stable.
2.27.1 Repl. The replication daemon is decoupled from both the Perforce server and the defect tracking database (and uses public interfaces to both where possible) so it shouldn't need to change much.
2.27.2 Single The integration is closely coupled to the Perforce server, so it changes whenever the Perforce server changes.
2.27.3 Union The integration is closely coupled to the Perforce server and the defect tracker, so it changes whenever either changes.
2.27.4 Tracker The integration is closely coupled to the defect tracker, so it changes whenever the defect tracker changes.
   
2.28 Requirement Perforce must avoid making changes which break integrations by third and fourth parties.
2.28.1 Repl.

The replication architecture exposes one interface to third and fourth parties: the replication interface (between the vendor-specific replication module and the vendor-independent replication module). This interface specifies a set of entities that may be replicated and a set of events that may be responded to (for example, changes in the database). These specifications are flexible enough to allow Perforce to change the entities available for replication and change the events available for response.

The replication architecture commits Perforce to maintaining certain fields of entities (especially jobs, changelists, fixes) with particular meanings. However, it can add new fields to these entities if it wants to - not all fields are replicated.
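
As a sketch of the flexibility intended, the replication interface might declare its entities and events explicitly, so that a vendor-specific module can ignore anything it does not handle; all the names below are hypothetical:

```python
# Hypothetical declaration of what the vendor-independent replication
# module offers: replicable entities (with their fields) and events a
# vendor-specific module may respond to.
REPLICABLE = {
    "job":        ["Job", "Status", "User", "Description"],
    "changelist": ["Change", "User", "Description"],
    "fix":        ["Job", "Change"],
}
EVENTS = ["entity-created", "entity-changed", "entity-deleted"]

class VendorModule:
    """A vendor-specific module subscribes only to what it understands;
    unknown entities, fields, and events are ignored, so Perforce can
    add new ones without breaking the module."""
    handled = {"job": ["Job", "Status"]}

    def on_event(self, event, entity, record):
        if event not in EVENTS or entity not in self.handled:
            return None  # ignore what we don't understand
        return {f: record[f] for f in self.handled[entity] if f in record}

m = VendorModule()
m.on_event("entity-changed", "job",
           {"Job": "job000042", "Status": "closed", "NewField": "x"})
# -> {"Job": "job000042", "Status": "closed"}; NewField is ignored
```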

2.28.2 Single

The single-database architecture exposes one interface to third and fourth parties: the Perforce defect tracking database interface. This interface specifies a set of entities that Perforce may want to access from and store in a foreign database. Since Perforce may want to change this set of entities, the interface provides a way for the set to change. So that the integration does not break when the set of entities changes, the Perforce defect tracking database module is able to say "no, I can't access or store that entity for you". In that case Perforce accesses and stores the entity in its own database as in the non-integrated case.
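
The decline behaviour might be sketched like this; the module, method names, and stored entities are hypothetical:

```python
class TrackerModule:
    """Hypothetical vendor module behind the Perforce server defect
    tracking database interface. It stores jobs itself and declines
    other entities, which the server then keeps in its own database as
    in the non-integrated case. All names here are illustrative."""
    def __init__(self):
        self.store = {"job": {}}  # the only entity this module accepts

    def write(self, entity, key, record):
        if entity not in self.store:
            return NotImplemented  # "no, I can't store that entity"
        self.store[entity][key] = record
        return record

    def read(self, entity, key):
        if entity not in self.store:
            return NotImplemented  # "no, I can't access that entity"
        return self.store[entity].get(key)

m = TrackerModule()
m.write("job", "job000042", {"Status": "open"})
m.write("fix", "311", {"Job": "job000042"})  # declined: NotImplemented
```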

If Perforce changes the definitions of the entities stored in the defect tracker's database then the integration will break (since the database schema is now wrong). Perforce therefore provides an update kit to update the schema for each defect tracking database each time it changes the definition of the entities.

2.28.3 Union The union architecture exposes one interface to third and fourth parties: the module that translates entities from one system to the other. The same comments apply to this interface as for the similar interface in the replication architecture.
2.28.4 Tracker

The tracker-client architecture exposes the standard Perforce client interface. This commits Perforce to supporting a large set of operations on the server.

I don't know if this interface is flexible enough to support the addition of new fields to entities without breaking existing clients.

Note that Perforce is probably already committed to breaking this interface as little as possible because of the large number of installed Perforce clients that would need to be replaced.

   
2.29 Requirement Perforce must keep the cost of changes to the integration small for integrations by third and fourth parties.
2.29.1 Repl. The replication daemon design outlined in 2.21.1 has well-defined interfaces between the components that are changed for each integration and the components that stay the same. As long as the interfaces are respected, it should be possible for third parties to easily rebuild their integrations to keep up with Perforce's changes to the integration kit. If the replication daemon is implemented using a tool like Python, there should not be any need to rebuild third-party integrations.
2.29.2 Single An interface internal to the Perforce server is kept up to date, so each time there is a new release of the integration kit people install a new Perforce server. Also, since the interfaces in the integration use C/C++, all integrations are rebuilt whenever there is a new release of the integration kit.
2.29.3 Union Since the interfaces in the database unifier use C/C++ all integrations are rebuilt whenever there is a new release of the integration kit.
2.29.4 Tracker The Perforce API is adaptable and extensible without requiring changes to the clients that use it.
   
2.30 Requirement The integration must be maintained.
Analysis See 2.27.
   
2.31 Requirement Perforce must change the integration so that it continues to work with new releases of other Perforce software.
Analysis See 2.27.
   
2.32 Requirement Perforce must change the integration so that it continues to work with new releases of supported defect tracking systems.
Analysis See 2.27.
   
2.33 Requirement The integration must be supported.
Analysis See 2.35.
   
2.34 Requirement Perforce support must be able to answer questions about integrating Perforce with defect tracking systems.
Analysis See 2.35.
   
2.35 Requirement The integration must be supportable at reasonable cost.
2.35.1 Repl. The integration is straightforward to install (2.63.1) and straightforward to port to new defect tracking systems (2.21.1). Support costs should be reasonable, provided that the integration is well documented (2.22.1).
2.35.2 Single The integration is moderately difficult to install (2.63.2), but easy to port to new defect tracking systems (2.21.2). Support costs should be reasonable, provided that the integration is well documented (2.22.2).
2.35.3 Union Supporting people trying to integrate with other defect trackers is likely to prove expensive. The integration is very complex, requiring a lot of database expertise to understand and modify (2.1.3, 2.21.3). The integration is difficult to install and has a wide impact on the organization's tools that query the defect tracking database (2.63.3). All these factors mean that good documentation will be difficult to prepare (2.22.3) and support calls are likely to be numerous and time-consuming to answer.
2.35.4 Tracker The main support cost is likely to be preparing good documentation for the Perforce API (2.22.4). Other support costs fall to the defect tracking vendors, since the integration is part of the defect tracker.
   
2.36 Requirement The integration project may deliver the ability to drive Perforce from the defect tracking system early.
2.36.1 Repl. This is done using the tracker-client architecture (2.36.4).
2.36.2 Single This is done using the tracker-client architecture (2.36.4).
2.36.3 Union This is done using the tracker-client architecture (2.36.4).
2.36.4 Tracker The tracker-client architecture achieves this, provided that the Perforce API is documented (2.22.4).
   
2.37 Requirement The integration should allow routine defect resolution tasks to be carried out from the defect tracking system alone, without the converse requirement.
2.37.1 Repl. This is done using the tracker-client architecture, because that's the only way to check files in and out using the defect tracker's interface (2.37.4).
2.37.2 Single See 2.37.1.
2.37.3 Union See 2.37.1.
2.37.4 Tracker The defect tracking vendors provide interfaces to Perforce changelists and fixes (2.39.4, 2.44.4).
   
2.38 Requirement Developers must be able to carry out routine defect resolution tasks using only the defect tracker's interface.
Analysis See 2.37.
   
2.39 Requirement Developers must be able to get copies of, check out, etc. files associated with a task using the defect tracker's interface.
2.39.1 Repl. This is done using the tracker-client architecture, because that's the only way to check files in and out using the defect tracker's interface (2.39.4).
2.39.2 Single See 2.39.1.
2.39.3 Union See 2.39.1.
2.39.4 Tracker The defect tracking system supports a relation between tasks and filespecs in Perforce. It provides an interface allowing developers to check out files associated with a task.
   
2.40 Requirement The defect tracker must display the list of files associated with a task and allow actions on (groups of) those files.
Analysis See 2.39.
   
2.41 Requirement Developers must be able to carry out routine defect resolution tasks using only the Perforce interface.
2.41.1 Repl. The developer uses Perforce's jobs interface, and changes to jobs are replicated to the defect tracker's database.
2.41.2 Single The developer uses Perforce's jobs interface, and Perforce jobs are simply a view onto the defect tracker's database.
2.41.3 Union The developer uses Perforce's jobs interface, and Perforce jobs are simply a view onto the defect tracker's database.
2.41.4 Tracker The tracker-client architecture cannot meet this requirement because it provides no mechanism for accessing or changing defect records from the Perforce interface.
   
2.42 Requirement Developers must be able to see the tasks assigned to them through the Perforce interface.
2.42.1 Repl. Since Perforce jobs are replicated to and from defect records, developers can do this using the "p4 jobs" command or its equivalent in their Perforce client.
2.42.2 Single Since Perforce jobs are a view onto defect records, developers can do this using the "p4 jobs" command or its equivalent in their Perforce client.
2.42.3 Union Since Perforce jobs are a view onto defect records, developers can do this using the "p4 jobs" command or its equivalent in their Perforce client.
2.42.4 Tracker The tracker-client architecture cannot meet this requirement because it provides no mechanism for accessing defect records from the Perforce interface.
   
2.43 Requirement Developers may be able to check out the files associated with a task easily using the Perforce interface.
2.43.1 Repl. Perforce supports a relation between job and file (or job and filespec). The Perforce client provides ways of associating files or filespecs with a job, querying this relation, and checking out (perhaps by an option to "p4 sync" and "p4 edit") the files or filespecs associated with a job. Alternatively, utilities could be provided to check out the files associated with a task. This is less satisfactory, because the utility has to be ported to each platform where the Perforce client runs.
2.43.2 Single See 2.43.1.
2.43.3 Union See 2.43.1.
2.43.4 Tracker The tracker-client architecture cannot meet this requirement because it provides no mechanism for accessing task information from the Perforce interface.
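
The utility alternative in 2.43.1 might be sketched as follows. The job-to-filespec relation is stubbed out, since the relation itself is one of the proposed additions, and a configured p4 command-line client is assumed; all names are hypothetical:

```python
import subprocess

def filespecs_for_job(job):
    """Stub for the proposed job-filespec relation; a real utility
    would query it from the Perforce server."""
    relation = {"job000042": ["//depot/lib/...", "//depot/doc/manual.txt"]}
    return relation.get(job, [])

def checkout_job(job, run=subprocess.run):
    """Sync and open for edit every filespec associated with a job;
    assumes a configured p4 command-line client on the PATH."""
    commands = []
    for spec in filespecs_for_job(job):
        commands.append(["p4", "sync", spec])
        commands.append(["p4", "edit", spec])
    for cmd in commands:
        run(cmd, capture_output=True, text=True)
    return commands
```

This is the approach that has to be ported to each platform where the Perforce client runs, which is why an option to "p4 sync" and "p4 edit" would be preferable.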
   
2.44 Requirement Developers must be able to associate the changes they made in order to complete a task with that task.
2.44.1 Repl. The association can be stored in Perforce's fixes relation.
2.44.2 Single See 2.44.1.
2.44.3 Union See 2.44.1.
2.44.4 Tracker

See 2.44.1.

The defect tracking vendor provides an interface to Perforce changelists and the fixes relation.

   
2.45 Requirement The integration must allow association of changes with tasks at submit time.
Analysis See 2.44.
   
2.46 Requirement The integration must allow association of changes with tasks after submit time.
Analysis See 2.44.
   
2.47 Requirement The integration must allow changes to be associated with more than one task.
Analysis See 2.44.
   
2.48 Requirement The integration must allow tasks to be associated with more than one change.
Analysis See 2.44.
   
2.49 Requirement The integration may allow recording of the impact of the change on the task (e.g. whether it "fixes" it or what). A changelist might only be an analysis, or a back out, or some other thing which doesn't resolve the issue.
2.49.1 Repl. Another field is added to the Perforce fixes relation that records the nature of the fix. The contents of this field are definable by the users of the system, since each organization will have its own idea of what is recorded here. The Perforce API is extended to deal with this new field, and the Perforce clients are extended to support it.
2.49.2 Single See 2.49.1.
2.49.3 Union See 2.49.1.
2.49.4 Tracker See 2.49.1.
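
The extra field in 2.49.1 might be modelled like this; the field name and the example natures are hypothetical, since each organization defines its own list:

```python
# A hypothetical extended fix record with a user-definable "Nature"
# field; each site configures its own list of permitted natures.
FIX_NATURES = ["fixes", "analyses", "backs out", "implements"]

def make_fix(job, change, nature):
    """Build an extended fix record, checking the nature is permitted."""
    if nature not in FIX_NATURES:
        raise ValueError("unknown fix nature: %s" % nature)
    return {"Job": job, "Change": change, "Nature": nature}

fix = make_fix("job000042", 311, "backs out")
# fix == {"Job": "job000042", "Change": 311, "Nature": "backs out"}
```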
   
2.50 Requirement Developers should be able to identify the tasks that they are working on.
2.50.1 Repl. Tasks are replicated to and from Perforce jobs, so developers can use "p4 jobs" or the equivalent feature in the defect tracking interface.
2.50.2 Single Perforce jobs are a view onto defect tracking tasks, so developers can use "p4 jobs" or the equivalent feature in the defect tracking interface.
2.50.3 Union Perforce jobs are a view onto defect tracking tasks, so developers can use "p4 jobs" or the equivalent feature in the defect tracking interface.
2.50.4 Tracker Developers use the defect tracking interface.
   
2.51 Requirement Developers should be able to find out why (in terms of tasks) they have files checked out.
2.51.1 Repl.

We can use pending changelists, associated with jobs through the fixes relation, to indicate why files are checked out.

We can record the nature of this association in the "fixes" relation (2.49.1).
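
Answering the "why is this file checked out?" question then amounts to joining two existing relations, roughly as follows (the data shown is illustrative):

```python
def why_checked_out(opened, fixes):
    """Map each opened file to the jobs its pending changelist fixes.
    `opened` maps file -> pending changelist (as from "p4 opened");
    `fixes` maps changelist -> list of jobs (as from "p4 fixes")."""
    return {f: fixes.get(change, []) for f, change in opened.items()}

opened = {"//depot/lib/buf.c": 311, "//depot/doc/index.html": 312}
fixes = {311: ["job000042"], 312: ["job000042", "job000051"]}
why = why_checked_out(opened, fixes)
# why["//depot/lib/buf.c"] == ["job000042"]
```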

2.51.2 Single See 2.51.1.
2.51.3 Union See 2.51.1.
2.51.4 Tracker The tracker-client architecture can't support this.
   
2.52 Requirement Developers may be able to find out where the partially completed work on a task is.
Analysis See 2.51.
   
2.53 Requirement Developers might be able to branch and merge the files associated with a task.
2.53.1 Repl.

We can record the association between the files and the task in order to branch or merge them. This implies a new relation in the Perforce database between jobs and files or filespecs (2.43.1).

The Perforce API is extended to allow this relation to be queried and changed, and the Perforce clients are extended to support it.

Alternatively, we provide separate utilities to branch and merge the files associated with a task. This is less satisfactory, because the utilities have to be ported to each platform where the Perforce client runs.

2.53.2 Single See 2.53.1.
2.53.3 Union See 2.53.1.
2.53.4 Tracker

See 2.53.1.

The defect tracking vendors add interfaces to the relation between tasks and files allowing branching and merging.

   
2.54 Requirement The analyst must be able to associate (groups of) files with the task.
Analysis See 2.53.
   
2.55 Requirement The analyst might be able to record the nature of the association.
2.55.1 Repl. This requires a field in the new relation between jobs and files or filespecs that records the nature of the association. The contents of this field are definable by the users of the system, since each organization will have its own idea of what is recorded here. The Perforce API is extended to deal with this new field, and the Perforce clients are extended to support it.
2.55.2 Single See 2.55.1.
2.55.3 Union See 2.55.1.
2.55.4 Tracker

See 2.55.1.

The defect tracking vendors add an interface to the relation between tasks and files. The interface records, queries and updates the nature of the association.

   
2.56 Requirement The manager may prevent unapproved changes to (groups of) files.
2.56.1 Repl.

All the permissions requirements can be met in the replication architecture by replicating users, groups and permissions between the defect tracking system and Perforce. The idea is to apply some kind of (configurable?) policy to defect tracking system permissions or assigned users or whatever to get Perforce permissions covering files.

Note that Perforce doesn't have permissions covering jobs, so a crafty developer may be able to get around any restrictions by editing the permissions on some associated job (or just creating an appropriate job giving him the permissions he wants). This could be solved by changing Perforce so that its permissions cover jobs as well as files.

The replication daemon masquerades as the appropriate user when making changes or additions to the Perforce database and the defect tracking database. Both systems provide public interfaces allowing the daemon to masquerade as a particular user.
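
One possible policy, as a sketch: derive Perforce protection lines from the tracker's task assignments, so each user can write only the files of tasks assigned to them. The policy, the input tables, and the output format are all hypothetical:

```python
def protections(assignments, task_files):
    """Derive Perforce protection lines from defect tracker state:
    each user gets write access only to the files of tasks assigned
    to them. Assignments, file associations, and the output format
    are hypothetical illustrations."""
    lines = []
    for task, user in sorted(assignments.items()):
        for spec in task_files.get(task, []):
            lines.append("write user %s * %s" % (user, spec))
    return lines

lines = protections({"job000042": "gdr"},
                    {"job000042": ["//depot/lib/..."]})
# -> ["write user gdr * //depot/lib/..."]
```

The daemon would replicate such lines into Perforce's protections whenever assignments change in the tracker.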

2.56.2 Single

The Perforce server stores its users and groups in the defect tracking database. When a user tries to do something to a file, the Perforce server reads any relevant defect records and applies some kind of policy to determine if the user is successful. This implies another public interface in the Perforce server which it can use to get at the policy. This makes the integration less adaptable and supportable.

The Perforce server masquerades as the appropriate user when making changes or additions to the defect tracking database. The defect tracking system provides a public interface allowing Perforce to masquerade as a particular user.

2.56.3 Union The Perforce server is changed for each defect tracker in order to read appropriate permissions out of the defect tracker's database.
2.56.4 Tracker Not possible. Managers (using the defect tracking system) have no control over what Perforce users do using the Perforce interface.
   
2.57 Requirement The manager may limit users' ability to change files to those associated with issues assigned to them.
Analysis See 2.56.
   
2.58 Requirement The manager may limit users' ability to merge changes into a codeline until the change is approved.
Analysis See 2.56.
   
2.59 Requirement The manager might be able to limit users' ability to access (at all) files to those associated with issues assigned to them.
Analysis See 2.56.
   
2.60 Requirement The SQA group must be able to produce (often for the customer) a report showing that the defect tracker's idea of the status of issues affecting a release matches Perforce's idea of the changes that have been made for that release, highlighting any discrepancy.
2.60.1 Repl. If all relevant information is replicated from the Perforce database to the defect tracking database (that is, changelists and fixes as well as jobs; see 2.5.1) then such a report should be straightforward to produce by making SQL queries of the defect tracker's database (2.5.1).
2.60.2 Single It is not straightforward to produce such a report, because it involves querying both Perforce and the defect tracker's database and writing a program to combine the results (2.5.2).
2.60.3 Union Such a report should be straightforward to produce by making SQL queries of the database unifier (2.5.3).
2.60.4 Tracker Such a report is hard to produce since it involves querying both Perforce and the defect tracker's database and writing a program to combine the results. It is likely that such a report will highlight many errors, since the defect tracker's database is not necessarily consistent with the Perforce database (2.1.1).
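
If changelists and fixes are replicated as suggested in 2.60.1, the report might reduce to a single query. In this sketch sqlite3 stands in for the tracker's real database and the schema is hypothetical:

```python
import sqlite3

# Hypothetical replicated schema: a defect table maintained by the
# tracker and a fix table replicated from Perforce.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE defect (defect_id TEXT, release TEXT, state TEXT);
    CREATE TABLE fix (defect_id TEXT, change_number INTEGER);
    INSERT INTO defect VALUES ('job000042', '1.1', 'closed');
    INSERT INTO defect VALUES ('job000051', '1.1', 'closed');
    INSERT INTO fix VALUES ('job000042', 311);
""")

# Discrepancy report: defects marked closed for the release with no
# replicated fix recorded against them.
discrepancies = db.execute("""
    SELECT d.defect_id FROM defect d
    LEFT JOIN fix f ON f.defect_id = d.defect_id
    WHERE d.release = '1.1' AND d.state = 'closed'
      AND f.change_number IS NULL
""").fetchall()
# -> [('job000051',)]
```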
   
2.61 Requirement The integration must not require administrators to be more than, say, a basic Perl or Python hacker and an experienced Perforce administrator, should not require more than one year of Perforce administration experience, may not require them to be more than ordinary Perforce administrators, and might not require them to be more than people who've just downloaded Perforce and started it.
2.61.1 Repl. The replication architecture may require the administrator to resolve inconsistencies occasionally, which may involve making careful and correct changes to the Perforce database. We can provide tools to simplify this, but the administrator needs to be experienced with Perforce and with the defect tracking system.
2.61.2 Single Reasonable experience is required to install the integration (2.63.2), otherwise little experience is required.
2.61.3 Union The administrator needs to be a database administrator for the defect tracking database.
2.61.4 Tracker Administering the tracker-client architecture requires no more experience than administering Perforce at the moment.
   
2.62 Requirement The integration must be installable by the administrator.
Analysis See 2.63.
   
2.63 Requirement Installing the integration for a supported defect tracking system must take less than 1 person-week, should be less than 2 person-days, may be less than 1 person-day, and might be less than 3 person-hours.
2.63.1 Repl. Installation of the replication architecture is straightforward - the administrator starts the daemon on the appropriate server machine, turns on some triggers in Perforce (we can supply scripts that do this), and supplies some configuration information about hosts and ports. We can arrange that the daemon does the rest. There is no need to stop the Perforce server or the defect tracking server. It can probably be done in less than 3 person-hours.
2.63.2 Single The Perforce server is reinstalled. Some decision is made about any jobs data currently in Perforce: should it be moved to the defect tracking database or deleted (2.95.2)? Can probably be done in less than a person-day.
2.63.3 Union Every tool in the organization that queries the defect tracking database is altered to query the database unifier instead. This may mean re-configuring a very large number of applications. We can't make any guarantees about how long this will take.
2.63.4 Tracker This will take no additional time to the time taken to install the defect tracking system.
   
2.64 Requirement Removing the integration from a supported defect tracking system must take less than 1 person-week, should be less than 2 person-days, may be less than 1 person-day, and might be less than 3 person-hours.
2.64.1 Repl. All the administrator has to do is stop the replication daemon. This will take less than 3 person-hours.
2.64.2 Single If the job data is not required in Perforce, then this may be as simple as re-configuring Perforce to use its own database for jobs rather than the defect tracker's database. However, if jobs are needed in Perforce, then the jobs are transferred from the defect tracker's database to Perforce. We can supply a tool to make this easier, but it's still going to be a pain, especially when there are multiple Perforce servers or multiple defect tracking servers. However, it should be possible to uninstall in less than one person-week.
2.64.3 Union Every tool in the organization that queries the database unifier is altered to query the defect tracking database again. This may mean re-configuring a very large number of applications. We can't make any guarantees about how long this will take.
2.64.4 Tracker This is a matter of re-configuring each defect tracking client so that it doesn't connect to the Perforce server to do software configuration management. It's up to the vendors to make this possible and simple. It may take a long time to re-configure each defect tracking client in a large organization.
   
2.65 Requirement The integration should only make additions to the Perforce state and the defect tracker state (i.e. it should not delete information).
Analysis Not analysed, because it's a solution to requirement 64.
   
2.66 Requirement The changes that are made by installing the integration must be clearly documented.
2.66.1 Repl. This is straightforward. Some fields may be added to Perforce relations (2.1.1) and the daemon is started.
2.66.2 Single This can be done. Some data is diverted from Perforce to the defect tracking database and the Perforce server is changed.
2.66.3 Union This is likely to be hard. Some data is diverted from Perforce to the defect tracking database. The Perforce server is changed. But many other tools in the organization need to be reconfigured (2.63.3).
2.66.4 Tracker The installation doesn't make any changes.
   
2.67 Requirement The integration must log changes that it makes so that they can be undone in an emergency.
Analysis Not analysed, because it's a solution to requirement 64.
   
2.68 Requirement Administrators should be able to develop new queries of the defect tracker's database and the relationship between issues, tasks, changelists, revisions, and files.
Analysis See 2.5.
   
2.69 Requirement The integration must not require users of the defect tracker's interface to Perforce to be more than reasonably competent with Perforce, should not require more than Perforce basics, may not require more than basic familiarity with SCM concepts (e.g. a VSS user), and might not require any previous SCM experience.
2.69.1 Repl. See 2.69.4.
2.69.2 Single See 2.69.4.
2.69.3 Union See 2.69.4.
2.69.4 Tracker Users of the defect tracker's interface to Perforce need to know about Perforce changelists. They don't need to know about Perforce jobs since they can just use defect tracker's defect records instead, so knowledge of Perforce basics should be fine.
   
2.70 Requirement The integration must not require users of the Perforce interface to the defect tracker to be more than reasonably competent with jobs, shouldn't require more than basic familiarity with jobs, may not require any more than reasonable competence with Perforce, and might not require more than Perforce basics (for companies introducing both Perforce and defect tracking at once).
2.70.1 Repl. The Perforce interface to the defect tracker is essentially the Perforce jobs interface. Users therefore need to be reasonably competent with jobs.
2.70.2 Single See 2.70.1.
2.70.3 Union See 2.70.1.
2.70.4 Tracker There is no Perforce interface to the defect tracker.
   
2.71 Requirement Users must be able to find out which tasks have affected a file's or group of files' development using either the defect tracker's interface or the Perforce interface.
2.71.1 Repl. For the Perforce interface this is straightforward; the tasks affecting files' development are available through "p4 fixes". For the defect tracker's interface, the defect tracking vendor provides an interface to the fixes relation.
2.71.2 Single See 2.71.1.
2.71.3 Union See 2.71.1.
2.71.4 Tracker There is no way to find out accurately what tasks have affected a file's development, since files are represented in Perforce's database and tasks in the defect tracker's database and the two are generally not consistent (2.1.4).
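The query in 2.71.1 is essentially a join between the fixes relation and changelist contents. A minimal sketch of that join follows; the data shapes are hypothetical (in practice the fixes rows would come from "p4 fixes" and the changelist contents from the Perforce change records), but the logic is the one described above.

```python
# Sketch of the query in 2.71.1: which tasks (jobs) have affected the
# development of files under a given depot path?  Data shapes here are
# illustrative assumptions, not Perforce's actual output format.

def tasks_affecting(path_prefix, fixes, changelist_files):
    """Return the set of jobs whose fixing changelists touched files
    under path_prefix.

    fixes            -- list of (job, changelist) pairs (the fixes relation)
    changelist_files -- dict mapping changelist number to depot files touched
    """
    jobs = set()
    for job, change in fixes:
        if any(f.startswith(path_prefix)
               for f in changelist_files.get(change, [])):
            jobs.add(job)
    return jobs
```

The defect tracker's interface to the same question would run the same join, but against its replicated copy of the fixes relation.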
   
2.72 Requirement Users must be able to find out why a changelist was made in terms of tasks from either the defect tracker's interface or the Perforce interface.
2.72.1 Repl. For the Perforce interface this is straightforward; the task for which a changelist was made is available through "p4 fixes". For the defect tracker's interface, the defect tracking vendor provides an interface to the changelist and fixes relations.
2.72.2 Single See 2.72.1.
2.72.3 Union See 2.72.1.
2.72.4 Tracker There is no way to find out accurately what tasks have caused a changelist to be made, since changelists are represented in Perforce's database and tasks in the defect tracker's database and the two are generally not consistent (2.1.4).
   
2.73 Requirement Users must be able to find out which changelists were made for a task from either the defect tracker's interface or the Perforce interface.
Analysis See 2.72.
   
2.74 Requirement Users must be able to find out which files are associated with an issue from either the defect tracker's interface or the Perforce interface.
2.74.1 Repl. There is a relation between issues and files that is replicated between Perforce and the defect tracking system (2.39.1, 2.43.1).
2.74.2 Single There is a relation between issues and files. The defect tracking system and Perforce server can query this relation. This implies that the relation is stored in the defect tracking system's database.
2.74.3 Union There is a relation between issues and files. The defect tracking system and Perforce server can query this relation. The relation may be stored either in the defect tracking system's database or in the Perforce database.
2.74.4 Tracker This can't be done with the tracker-client architecture. There is a relation between issues and files. This is stored in the defect tracking system's database (since it refers to issues, which can't be referred to by Perforce) but that means that it can't be accessed from Perforce.
   
2.75 Requirement Users must like the integration as much as a slap round the face with a wet fish, should like it as much as the defect tracking systems, may like the integration as much as Perforce, and might like it as much as a cup of really good coffee.
2.75.1 Repl. If the replication daemon works well then developers who use Perforce can continue to use Perforce and are hardly aware of the defect tracking system at all; and vice versa for users of the defect tracking system. So the former may like it as much as Perforce and the latter as much as the defect tracking system.
2.75.2 Single Installation will be tricky (2.63.2) and Perforce will be slowed down by having to connect to the defect tracker's database for any job-related query (2.1.2). So the integration won't be liked as well as Perforce.
2.75.3 Union In my opinion installation will be such a pain (2.63.3) and any application that uses the defect tracker's database (including Perforce) will be slowed down so much (2.1.3), that I would prefer the wet fish.
2.75.4 Tracker The integration doesn't allow Perforce to be used for routine defect tracking activity. So it won't be liked as well as Perforce.
   
2.76 Requirement The project must require less than six weeks of Christopher Seiwald's time, should require less than four weeks, may require less than two weeks, and might require only a few days.
2.76.1 Repl.

The replication architecture requires the following from Chris:

  1. Changes to the Perforce schema (Job/Filespec relation, addition to the fixes relation of a field indicating the meaning of the fix). Estimated effort: unknown, possibly 4 weeks?

  2. Changes to the Perforce client API and to the various Perforce clients to support these schema changes. In particular, the procedures for opening files should allow the open files to be associated with one or more jobs (perhaps via pending changelists?) and the submit procedure should be changed so that it does not close open jobs by default, but rather allows the developer to specify the relation between the changelist and each job (contributed/fixed/backed out/other user-defined meanings). Estimated effort: unknown, possibly 4 weeks?

  3. Triggers on Perforce operations that change the Perforce database. Estimated effort: unknown, possibly 4 weeks? (Many people have asked for pre- and post- triggers on all Perforce operations, so even though the defect tracking integration does not require so many triggers, it would be a good idea (and perhaps not significantly more work) to add them all.)

We are assuming here that Chris is the only developer at Perforce who can do this work. It may be the case that he can delegate items 2 and 3 (since they don't affect Perforce's database implementation) and review the work, but we are not depending on that.

Total estimated effort: about 12 weeks.
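Items 1 and 2 above propose that a fix record the meaning of the association between a changelist and a job, and that submit no longer close every open job silently. A minimal sketch of what the extended fixes relation might look like; the field and value names are illustrative assumptions, not Perforce's.

```python
# Sketch of the proposed extended fixes relation (items 1 and 2 above):
# a fix records the developer's stated relation between the changelist
# and each job (contributed/fixed/backed-out/user-defined), instead of
# submit closing every open job by default.  Names are assumptions.

def submit_fixes(changelist, job_meanings):
    """Build the fixes rows for a submitted changelist.

    job_meanings maps each associated job to the meaning of the fix,
    e.g. "fixed", "contributed", "backed-out", or a user-defined value.
    """
    return [{"job": job, "change": changelist, "meaning": meaning}
            for job, meaning in sorted(job_meanings.items())]
```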

2.76.2 Single

The single-database architecture requires the following from Chris:

  1. See 2.76.1. 4 weeks.

  2. See 2.76.1. 4 weeks.

  3. See 2.76.1. 4 weeks.

  4. Definition of an interface by which the Perforce server can access its job and fixes-related data when that data is stored in third-party databases (2.21.2). Estimated effort: unknown, possibly 4 weeks.

  5. Rewriting the server so that it can be configured to use this interface instead of its usual database back end. Note the need for a two-phase commit protocol to prevent inconsistency between the two databases. Estimated effort: unknown, possibly 8 weeks.

We are assuming here that Chris is the only developer at Perforce who can do this work (2.76.1).

Total estimated effort: about 24 weeks.

2.76.3 Union

The union architecture requires the following from Chris:

  1. See 2.76.1. 4 weeks.

  2. See 2.76.1. 4 weeks.

  3. See 2.76.1. 4 weeks.

  4. Definition of an interface by which the Perforce server can access some or all job- and fix-related data via the database unifier (2.21.3). Estimated effort: unknown, possibly 4 weeks.

  5. Rewriting the server so that it can be configured to use this interface instead of its usual database back end. Estimated effort: unknown, possibly 8 weeks.

  6. Definition of an interface by which the database unifier can access the Perforce database (2.21.3). Estimated effort: unknown, possibly 2 weeks.

  7. Rewriting the server so that the database is accessible via this interface. Estimated effort: unknown, possibly 4 weeks.

We are assuming here that Chris is the only developer at Perforce who can do this work (2.76.1).

Total estimated effort: about 30 weeks.

2.76.4 Tracker The tracker-client architecture requires no effort from Chris.
   
2.77 Requirement The project must require less than 90% of Gareth Rees' time, should require less than 75%, may require less than 50%, and might require none. Gareth has other duties for Ravenbrook requiring at least 10%, and would like to be able to broaden his client base.
2.77.1 Repl.

I deduce from this requirement, together with requirement 78 and the schedule [RB 2000-02-02, 9], that the total effort for the project from Gareth and Richard is no more than 48 weeks and should be no more than 32 weeks.

Architecture definition and review, planning, communication with participants in alpha and beta programmes and management, tracking and oversight will take about 28 weeks of effort, as estimated in [RB 2000-02-02, 9].

In addition, the replication architecture requires the following components:

  1. Definition of changes to the Perforce schema (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  2. Definition of changes to the Perforce client API and improvements to API documentation (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  3. Definition of trigger events and protocol (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  4. Definition of interface between vendor-independent and vendor-specific replication modules. Estimated effort: 2 weeks.

  5. Definition of interface and protocol between defect tracker's database triggers and replication daemon. Estimated effort: 4 weeks.

  6. Development and test of vendor-independent replication module. Estimated effort: 8 weeks.

  7. Extending the integration to work with two defect tracking systems. Estimated effort: 18 weeks to 44 weeks (2.21.1; note that we don't have to learn about the integration itself, but we do have to learn about the defect tracker).

Total estimated effort: from about 66 to about 92 weeks.

2.77.2 Single

Requirements for effort and estimation of general project effort are as for the replication architecture.

In addition, the single-database architecture requires the following components:

  1. Definition of changes to the Perforce schema (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  2. Definition of changes to the Perforce client API and improvements to API documentation (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  3. Definition of an interface by which the Perforce server can access its job and fixes-related data when that data is stored in third-party databases (2.21.2, 2.76.2). The implementation will probably be by Perforce but should be reviewed. Estimated effort: 2 weeks.

  4. Implementing and testing vendor-independent parts of Perforce server defect tracking database interface. Estimated effort: 8 weeks.

  5. Extending the integration to work with two defect tracking systems. Estimated effort: 12 weeks (2.21.2 and the note for the replication architecture).

Total estimated effort: about 52 weeks.

2.77.3 Union

Requirements for effort and estimation of general project effort are as for the replication architecture.

In addition, the union architecture requires the following components:

  1. Definition of changes to the Perforce schema (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  2. Definition of changes to the Perforce client API and improvements to API documentation (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  3. Definition of an interface by which the Perforce server can access its job and fixes-related data via the database unifier (2.21.3, 2.76.3) (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  4. Definition of an interface by which the vendor-independent parts of the database unifier can talk to the vendor-specific parts. Estimated effort: 2 weeks.

  5. Design, implementation and test of database unifier. Estimated effort: unknown, probably at least 16 weeks, perhaps as much as 32 weeks.

  6. Extending the integration to work with two defect tracking systems. Estimated effort: 40 weeks to 110 weeks (2.21.3). The minimum effort for this stage is more than that implied by 2.21.3 because the first integration will necessarily be a difficult case involving interfacing with a new database protocol; the second integration may be easier.

Total estimated effort: 92 to 178 weeks.

2.77.4 Tracker

Requirements for effort are as for the replication architecture. The general project effort will be lower than estimated if the tracker-client architecture is the only architecture implemented. Say half the estimated costs for the replication architecture, that is, 14 weeks.

The tracker-client architecture requires the following components:

  1. Definition of changes to the Perforce schema (implementation probably by Perforce but should be reviewed). Estimated effort: 2 weeks.

  2. Definition of changes to the Perforce client API and improvements to API documentation. The implementation will probably be by Perforce but should be reviewed. Estimated effort: 2 weeks.

  3. Extending the integration to defect tracking systems is done by the defect tracking vendor (2.21.4) but we could do the integration for an open source defect tracking system like Bugzilla. Estimated effort: 16 weeks.

Total estimated effort: 34 weeks if tracker-client architecture implemented for Bugzilla.

   
2.78 Requirement The project must require less than 60% of Richard Brooksby's time, should require less than 30%, may require less than 25%, and might require 10%. Richard has other clients, and would like to put time into developing Ravenbrook projects and relationships, but will definitely at least need to review work done for the project.
Analysis See 2.77.
   
2.79 Requirement The project must cost less than some amount to be decided by Perforce, should cost less than $300K, may cost less than $150K, might cost as little as $100K.
Analysis See 2.77.
   
2.80 Requirement The integration project should use Perforce as its SCM tool.
Analysis It does so already.
   
2.81 Requirement The integration project should use open languages and tools. For example, we shouldn't require people to buy Visual Basic in order to integrate, but we might ask them to have Python or something similar.
Analysis See 2.24.
   
2.82 Requirement The integration project may use Python as its implementation language. This appears to be a generally favoured language of Perforce customers. I gather this from Perforce's own use of Python and discussions at the Perforce User Conference 1999.
2.82.1 Repl. This can be done (2.24.1), but we'll need to provide foreign function interfaces for the Perforce API and for the defect tracking interface (which is C++ in the case of tTrack).
2.82.2 Single See 2.24.2.
2.82.3 Union See 2.24.3.
2.82.4 Tracker See 2.24.4.
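One way to reduce the Perforce side of the FFI burden noted in 2.82.1, assuming the p4 command-line client's "-G" flag (which emits results as Python marshal-format dictionaries) is available, is to parse that output directly instead of binding to the C++ client API. A sketch:

```python
import marshal

# Sketch: reading Perforce results from Python without a C++ foreign
# function interface, assuming the p4 command-line client's "-G" flag
# (Python marshal-format output) is available.

def read_records(stream):
    """Read consecutive marshal-format records from a binary stream,
    as produced by "p4 -G", until end of file."""
    records = []
    while True:
        try:
            records.append(marshal.load(stream))
        except EOFError:
            return records

def run_p4(args):
    """Run "p4 -G <args>" and return its records (needs a p4 client)."""
    import subprocess
    p = subprocess.Popen(["p4", "-G"] + list(args), stdout=subprocess.PIPE)
    records = read_records(p.stdout)
    p.wait()
    return records
```

This would not remove the need for an FFI on the defect tracking side, of course.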
   
2.83 Requirement The integration project might use Perl as its implementation language. It's less favoured than Python.
Analysis See 2.82.
   
2.84 Requirement The project must increase goodwill toward Perforce from their customers.
Analysis See 2.75.
   
2.85 Requirement The project progress and status should be open to Perforce customers.
Analysis The web site at <URL: http://www.ravenbrook.com/project/p4dti/> provides up-to-date information about the project.
   
2.86 Requirement The project should be responsive to customer queries.
Analysis The web site at <URL: http://www.ravenbrook.com/project/p4dti/> provides up-to-date information about the project. The mailing aliases p4dti-questions@ravenbrook.com and p4dti-comments@ravenbrook.com go to project staff who answer questions not answered on the web site.
   
2.87 Requirement The project must support customers trying to integrate with supported defect trackers.
Analysis See 2.35.
   
2.88 Requirement The project should support customers trying to integrate with other defect trackers using the kit.
Analysis See 2.35.
   
2.89 Requirement The project should develop goodwill from defect tracking vendors toward Perforce.
2.89.1 Repl.

Defect tracking vendors need to make a public API to their entities and document it; to provide a way for the replication daemon to be triggered when changes are made to defect tracking entities (2.1.1); to provide a way for the replication daemon to masquerade as any user of the system (2.56.1); and to implement the tracker-client architecture (2.89.4), though this is not critical.

This gives supported defect tracking vendors a feature to add to their marketing literature for a reasonable cost. This will generate goodwill.

Unsupported defect tracking vendors need additionally to port the integration (2.21.1) or to fund the porting. The total cost will still be reasonable, and will generate goodwill.

2.89.2 Single

Defect tracking vendors need to make a public API to their entities and document it; to provide a way for Perforce to masquerade as any user of the system (2.56.2); and to implement the tracker-client architecture (2.89.4), though this is not critical.

This gives supported defect tracking vendors a feature to add to their marketing literature for a reasonable cost. This will generate goodwill.

Unsupported defect tracking vendors need additionally to port the integration (2.21.2) or to fund the porting. The total cost will still be reasonable, and will generate goodwill.

2.89.3 Union The union architecture is likely to be unpopular with customers of defect tracking systems (2.75.3), not to mention very hard to port to new defect tracking systems (2.21.3). This probably generates ill-will rather than goodwill.
2.89.4 Tracker A defect tracking vendor needs to do a fair amount of work to implement the tracker-client architecture. In addition to the basic source control features (2.21.4), they need to provide an interface to changelists and fixes (2.37.4, 2.39.4, 2.44.4, 2.71.4, 2.72.4), an interface to a relation between jobs and filespecs (2.55.4) and an interface allowing users to branch and merge files associated with a task (2.53.4). Perforce helps the defect tracking vendors by documenting the Perforce API (2.22.4). Probably little or no goodwill generated.
   
2.90 Requirement The project should develop goodwill from supported defect tracking system vendors.
Analysis See 2.89.
   
2.91 Requirement The project may develop goodwill from other defect tracking vendors.
Analysis See 2.89.
   
2.92 Requirement Testers must be able to find out what exactly has changed for a release or because of an issue for white box testing.
2.92.1 Repl.

The tester can find out what has changed for an issue by querying the fixes relation for that issue and looking at the contents of the associated changelists. The tester can find what has changed for a release by looking at the change history of the files involved in the release.

The tester can't be confident that the results of these queries are correct because of potential inconsistencies between the defect tracker's database and the Perforce database (2.1.1). The tester can produce a report certifying consistency at the time when the query was issued (2.60.1).
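The consistency report mentioned above (2.60.1) might amount to comparing the fixes relation as held on each side and reporting any discrepancies. A minimal sketch, with hypothetical data shapes:

```python
# Sketch of the consistency report (2.60.1): compare the fixes relation
# as held by Perforce and by the defect tracker, and report rows present
# on only one side.  An empty report certifies consistency at the moment
# the two snapshots were taken.

def consistency_report(perforce_fixes, tracker_fixes):
    """Each argument is a set of (job, changelist) pairs."""
    return {
        "only_in_perforce": sorted(perforce_fixes - tracker_fixes),
        "only_in_tracker": sorted(tracker_fixes - perforce_fixes),
    }
```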

2.92.2 Single The tester can find out what has changed for an issue by querying the fixes relation for that issue and looking at the contents of the associated changelists. The tester can find what has changed for a release by looking at the change history of the files involved in the release.
2.92.3 Union See 2.92.1.
2.92.4 Tracker The tester can't do this reliably because the defect tracking database is not consistent with the Perforce database (2.1.4).
   
2.93 Requirement Testers must be able to manage test suite stuff and changes to that.
Analysis

They can do this in the same way as developers manage any other source code or documents and changes to those.

   
2.94 Requirement Testers must be able to associate regression test sources and stuff with issues.
Analysis

They can do this through the proposed job/filespec relation (2.43.1), especially if they can record the nature of the association (2.55.1).

   
2.95 Requirement Customers may be able to migrate from their existing defect tracking systems to a Perforce integrated defect tracker.
2.95.1 Repl.

There are four cases to consider: 1. migrating from using Perforce jobs to using the integration; 2. migrating from using defect tracking system A to using the integration with that defect tracking system; 3. migrating from using supported defect tracking system A to using the integration with supported defect tracking system B; 4. migrating from using unsupported defect tracking system A to using the integration with supported defect tracking system B.

  1. The administrator can configure and turn on the replication daemon, which replicates the Perforce jobs to the defect tracking system.

  2. The administrator can configure and turn on the replication daemon, which replicates defects from the defect tracking system to Perforce jobs.

  3. If defect tracking system A is supported by the integration, then the administrator can configure the replication daemon for defect tracking system A and turn it on; it will replicate the defects to Perforce. Then the daemon can be stopped and reconfigured for defect tracking system B. When it is turned on again it will replicate the defects from Perforce to defect tracking system B.

  4. Defects are exported from defect tracking system A and imported into either defect tracking system B or into Perforce. The administrator writes software for this, or does it by hand.

2.95.2 Single

Considering the same four cases as the replication architecture (2.95.1):

  1. Jobs are copied from Perforce to the defect tracking system. We may be able to provide a utility to do this, or, better, a component of the Perforce server (since it can take advantage of the Perforce to defect tracker conversion routines included in the integration, which are in the Perforce server).

  2. This is straightforward. After installation of the integration, Perforce just reads the data from the defect tracker's database.

  3. Defects are exported from defect tracking system A and imported into either defect tracking system B or into Perforce. We may be able to provide a utility for this, or else the administrator writes software for this, or does it by hand.

  4. Defects are exported from defect tracking system A and imported into either defect tracking system B or into Perforce. The administrator writes software for this, or does it by hand.

2.95.3 Union

Considering the same four cases as the replication architecture (2.95.1):

In all four cases, some data is copied from one database to another, as for the single-database architecture (2.95.2), and even in case 2 the union architecture may require that some data that was in the defect tracker's database is copied to the Perforce database.

2.95.4 Tracker

Considering the same four cases as the replication architecture (2.95.1):

Case 2 is straightforward (just install the integration and start work). The other cases involve copying defect records from one database to another, as for the single-database architecture (2.95.2).

   
2.96 Requirement The integration should cope with multiple Perforce servers. (This means that the integration should cope with organizations where a defect tracking system tracks defects for projects whose configurations are managed on multiple Perforce servers.)
2.96.1 Repl.

Each defect record in the defect tracker has a field indicating which Perforce server manages any associated sources. The replication daemon connects to all the Perforce servers and replicates each defect to the appropriate server.

For defect tracking systems which have a concept of "project", the project record can indicate which Perforce server manages the sources for that project, and the field in the defect record can be initialised accordingly. For other defect tracking systems someone must fill in the value of this field when the defect record is created.
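The routing described above might be sketched as follows. The "p4_server" field name is an illustrative assumption; the point is only that each defect record carries the address of the Perforce server managing its sources, and the daemon groups records accordingly.

```python
# Sketch of the multi-server routing described above: each defect record
# names the Perforce server managing its sources, and the replication
# daemon replicates it to that server.  The "p4_server" field name is
# an illustrative assumption.

def route_defects(defects, servers):
    """Group defect records by their "p4_server" field.

    Returns a dict mapping each known server to its defect records;
    records naming no server or an unknown one are collected under None
    for the administrator's attention.
    """
    routed = {server: [] for server in servers}
    routed[None] = []
    for defect in defects:
        routed.get(defect.get("p4_server"), routed[None]).append(defect)
    return routed
```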

2.96.2 Single Each defect record in the defect tracker has a field indicating which Perforce servers manage any associated sources. When a Perforce server queries the defect tracking database it only looks at defect records which have a matching entry in this field. But see also 2.96.1.
2.96.3 Union I can't see how to do this for the union architecture.
2.96.4 Tracker Each defect record in the defect tracker has a field indicating which Perforce server manages any associated sources. When the user is doing source control tasks for a particular defect, the defect tracking client connects to the appropriate Perforce server.
   
2.97 Requirement The integration may cope with multiple defect tracking systems. (This means that the integration should cope with organizations where a Perforce server manages configurations for projects whose defects are tracked on multiple defect tracking systems.)
2.97.1 Repl. Each job record in Perforce has an extra field indicating which defect tracking system it should be replicated to. There are multiple replication daemons, one for each defect tracking system, because the replication daemon runs on the same platform as the defect tracking system.
2.97.2 Single The Perforce server has to have a way of identifying each defect record, but because the records are stored in multiple defect tracking systems they don't necessarily have any unique identifier. So the Perforce server will have to keep a record of each defect in all the defect tracking systems so that it can assign unique identifiers to them. This means that the Perforce server is doing most of what the replication architecture would do, but in a much more complicated way: in the single-database architecture the Perforce server itself has to talk to all the defect tracking interfaces, whereas in the replication architecture the Perforce server is largely unchanged.
2.97.3 Union I can't see how to do this for the union architecture.
2.97.4 Tracker No problem; each defect tracking system connects to the Perforce server.
   
2.98 Requirement The integration might cope with organizations that have a many-to-many relationship between their Perforce servers and defect tracking systems.
2.98.1 Repl. Possibly, by combining the multiple Perforce servers solution (2.96.1) with the multiple defect tracking systems solution (2.97.1).
2.98.2 Single The single-database architecture can't cope with multiple defect tracking systems and one Perforce server (2.97.2), so it certainly can't cope with a many-to-many relationship.
2.98.3 Union I can't see how to make the union architecture cope with multiple Perforce servers (2.96.3) or multiple defect tracking systems (2.97.3), so it certainly can't cope with a many-to-many relationship.
2.98.4 Tracker Possibly, by combining the multiple Perforce servers solution (2.96.4) with the multiple defect tracking systems solution (2.97.4).
   
2.99 Requirement The set of Perforce server platforms supported by the integration must include Windows NT, should additionally include Solaris, may additionally include Linux, and might include all server platforms supported by Perforce.
Analysis Not analysed.
   
2.100 Requirement The set of defect tracking system platforms supported by the integration must include Windows NT, should additionally include Solaris, and may additionally include Linux.
Analysis Not analysed.
   
2.101 Requirement The set of defect tracking database platforms supported by the integration must include Windows NT, should additionally include Solaris.
Analysis Not analysed.
   
2.102 Requirement The set of defect tracking databases supported by the integration should include Microsoft SQL Server and Oracle and might additionally include Microsoft Access and Sybase.
Analysis Not analysed.

3. Summary

  Level of requirement Architecture
Id Measure Critical Essential Optional Nice R S U T
(Architecture key: R = replication, S = single-database, U = union, T = tracker-client)
1 Defect tracker state is consistent with the state of the product sources. Yes Yes Yes Yes Yes Yes Yes No
2 The defect tracking integration makes the jobs of the developers and managers easier. Yes Yes Yes Yes Yes Yes Yes Yes
3 It is easy to discover why the product sources are the way they are, and why they have changed, in terms of the customer requirements. Yes Yes Yes Yes Yes Yes Yes No
4 The interface that allows Perforce to be integrated with defect tracking systems is public, documented, and maintained. Yes Yes Yes Yes Yes Yes Yes Yes
5 The integration provides the ability to ask questions involving both the defect tracking system and the SCM system. Yes Yes Yes Yes Yes Yes Yes No
6 Perforce is integrated with TeamTrack by TeamShare. Maybe Yes Yes Yes        
7 The integration supports all of the operations currently available in the TeamTrack integration with other SCM systems. Maybe Yes Yes Yes        
8 The integration uses the TeamTrack C++ interface to the TeamTrack database. Maybe Yes Yes Yes        
9 Perforce is integrated with DevTrack by TechExcel. Maybe Yes Yes Yes        
10 The integration uses the DevTrack COM interface to the DevTrack database. Maybe Yes Yes Yes        
11 The integration supports all the operations currently available in the DevTrack integration with Visual SourceSafe. Maybe Yes Yes Yes        
12 The integration supports versioning of documents in DevTrack's document management interface. Maybe Maybe Maybe Yes        
13 Perforce is integrated with Remedy ARS by Remedy. Maybe Maybe Yes Yes        
14 The integration uses the Remedy API. Maybe Yes Yes Yes        
15 Perforce is integrated with TRACK Defects by Soffront. Maybe Maybe Yes Yes        
16 The integration supports all of the operations currently available in the TRACK Defects integration with other SCM systems. Maybe Yes Yes Yes        
17 The integration uses the TRACK COM interface to the TRACK database. Maybe Yes Yes Yes        
18 Perforce is integrated with Bugzilla. Maybe Maybe Maybe Yes        
19 Perforce is integrated with Peregrine Systems' forthcoming defect tracking system. Maybe Maybe Maybe Yes        
20 Non-Perforce people are able to integrate Perforce SCM with defect tracking systems. Yes Yes Yes Yes Yes No No Yes
21 The amount of effort required to extend the integration to work with a new defect tracking system, in person-weeks. < 24 < 8 < 2 < 1 Yes Yes No Yes
22 The integration design is documented. Yes Yes Yes Yes Yes Yes No Yes
23 The integration is open. Yes Yes Yes Yes Yes Yes Yes Yes
24 The integration is implemented using readily available tools. Yes Yes Yes Yes Yes Yes No No
25 The integration design is modifiable at reasonable cost. Yes Yes Yes Yes Yes Yes No Yes
26 The amount of effort required to port the integration to a new operating system, in person-weeks. < 24 < 8 < 2 < 1 Yes Yes No Yes
27 The integration is stable. Yes Yes Yes Yes Yes Yes No No
28 Perforce avoids making changes which break integrations by third and fourth parties. Yes Yes Yes Yes        
29 Perforce keeps the cost of changes to the integration small for integrations by third and fourth parties. Yes Yes Yes Yes Yes No Yes Yes
30 The integration is maintained. Yes Yes Yes Yes Yes Yes No No
31 Perforce changes the integration so that it continues to work with new releases of other Perforce software. Yes Yes Yes Yes Yes Yes Yes Yes
32 Perforce changes the integration so that it continues to work with new releases of supported defect tracking systems. Yes Yes Yes Yes Yes Yes No No
33 The integration is supported. Yes Yes Yes Yes Yes Yes No Yes
34 Perforce support are able to answer questions about integrating Perforce with defect tracking systems. Yes Yes Yes Yes Yes Yes No Yes
35 The integration is supportable at reasonable cost. Yes Yes Yes Yes Yes Yes No Yes
36 The integration project delivers the ability to drive Perforce from the defect tracking system early. Maybe Maybe Yes Yes No No No Yes
37 The integration allows routine defect resolution tasks from the defect tracking system alone without the converse requirement. Maybe Yes Yes Yes No No No Yes
38 Developers are able to carry out routine defect resolution tasks using only the defect tracker's interface. Yes Yes Yes Yes No No No Yes
39 Developers are able to get copies of, check out, etc. files associated with a task using the defect tracker's interface. Yes Yes Yes Yes No No No Yes
40 The defect tracker displays the list of files associated with a task and allow actions on (groups of) those files. Maybe Yes Yes Yes No No No Yes
41 Developers are able to carry out routine defect resolution tasks using only the Perforce interface. Yes Yes Yes Yes Yes Yes Yes No
42 Developers are able to see the tasks assigned to them through the Perforce interface. Yes Yes Yes Yes Yes Yes Yes No
43 Developers are able to check out the files associated with a task easily using the Perforce interface. Maybe Maybe Yes Yes Yes Yes Yes No
44 Developers are able to associate the changes they made in order to complete a task with that task. Yes Yes Yes Yes Yes Yes Yes Yes
45 The integration allows association of changes with tasks at submit time. Yes Yes Yes Yes Yes Yes Yes Yes
46 The integration allows association of changes with tasks after submit time. Yes Yes Yes Yes Yes Yes Yes Yes
47 The integration allows changes to be associated with more than one task. Yes Yes Yes Yes Yes Yes Yes Yes
48 The integration allows tasks to be associated with more than one change. Yes Yes Yes Yes Yes Yes Yes Yes
49 The integration allows recording of the impact of the change on the task. Maybe Maybe Yes Yes Yes Yes Yes Yes
50 Developers are able to identify the tasks that they are working on. Maybe Yes Yes Yes Yes Yes Yes Yes
51 Developers are able to find out why (in terms of tasks) they have files checked out. Maybe Yes Yes Yes Yes Yes Yes No
52 Developers are able to find out where the partially completed work on a task is. Maybe Maybe Yes Yes Yes Yes Yes No
53 Developers are able to branch and merge the files associated with a task. Maybe Maybe Maybe Yes Yes Yes Yes Yes
54 The analyst is able to associate (groups of) files with the task. Yes Yes Yes Yes Yes Yes Yes Yes
55 The analyst is able to record the nature of the association. Maybe Maybe Maybe Yes Yes Yes Yes Yes
56 The manager is able to prevent unapproved changes to (groups of) files. Maybe Maybe Yes Yes Yes Yes No No
57 The manager is able to limit users' ability to change files to those associated with issues assigned to them. Maybe Maybe Yes Yes Yes Yes No No
58 The manager is able to limit users' ability to merge changes into a codeline until the change is approved. Maybe Maybe Yes Yes Yes Yes No No
59 The manager is able to limit users' ability to access (at all) files to those associated with issues assigned to them. Maybe Maybe Maybe Yes Yes Yes No No
60 The SQA group are able to produce a report showing that the defect tracker's idea of the status of issues affecting a release matches Perforce's idea of the changes that have been made for that release. Yes Yes Yes Yes Yes Yes Yes No
61 Experience required of administrators of the integrated system. < 1 year of Perl or Python hacking and < 2 years of Perforce administration experience. < 1 year of Perforce administration experience. < 6 months of Perforce administration experience. < 1 week of Perforce administration experience. Yes Yes No Yes
62 The integration is installable by the administrator. Yes Yes Yes Yes Yes Yes No Yes
63 Time taken for the administrator to install the integration for a supported defect tracking system, in person-hours. < 40 < 16 < 8 < 3 Yes Yes No Yes
64 Time taken for the administrator to remove the integration from a supported defect tracking system, in person-hours. < 40 < 16 < 8 < 3 Yes Yes No Yes
65 The integration only makes additions to the Perforce state and the defect tracker state. Maybe Yes Yes Yes        
66 The changes that are made by installing the integration are clearly documented. Yes Yes Yes Yes Yes Yes No Yes
67 The integration logs changes that it makes so that they can be undone in an emergency. Yes Yes Yes Yes        
68 Administrators are able to develop new queries of the defect tracker's database and the relationship between issues, tasks, changelists, revisions, and files. Maybe Yes Yes Yes Yes Yes Yes No
69 Competence required of users of the defect tracker's interface to Perforce. < reasonable Perforce competence. < knowledge of Perforce basics. < basic familiarity with SCM concepts (e.g. a VSS user). < no previous SCM experience. Yes Yes Yes Yes
70 Competence required of users of the Perforce interface to the defect tracker. < reasonable competence with jobs. < reasonable competence with Perforce. < knowldge of Perforce basics. < no previous SCM experience. Yes Yes Yes No
71 Users are able to find out which tasks have affected a file's or group of files' development using either the defect tracker's interface or the Perforce interface. Yes Yes Yes Yes Yes Yes Yes No
72 Users are able to find out why a changelist was made in terms of tasks from either the defect tracker's interface or the Perforce interface. Yes Yes Yes Yes Yes Yes Yes No
73 Users are able to find out which changelists were made for a task from either the defect tracker's interface or the Perforce interface. Yes Yes Yes Yes Yes Yes Yes No
74 Users are able to find out which files are associated with an issue from either the defect tracker's interface or the Perforce interface. Yes Yes Yes Yes Yes Yes Yes No
75 Extent to which users like the integration. > a slap round the face with a wet fish. > the defect tracking system. > Perforce. > a cup of really good coffee. Yes Yes No Yes
76 Effort required of Christopher Seiwald, in weeks. < 6 < 4 < 2 < 1 No No No Yes
77 Proportion of time required of Gareth Rees. < 90% < 75% < 50% < 0% No No No Yes
78 Proportion of time required of Richard Brooksby. < 60% < 30% < 25% < 10% No No No Yes
79 Cost of the project. < some amount to be decided by Perforce. < $300k < $150k < $100k Yes Yes No Yes
80 The integration project uses Perforce as its SCM tool. Maybe Yes Yes Yes Yes Yes Yes Yes
81 The integration project uses open languages and tools. Maybe Yes Yes Yes Yes Yes No No
82 The integration project uses Python as its implementation language. Maybe Maybe Yes Yes Yes No No No
83 The integration project uses Perl as its implementation language. Maybe Maybe Maybe Yes Yes No No No
84 The project increases goodwill toward Perforce from their customers. Yes Yes Yes Yes Yes Yes No Yes
85 The project progress and status are open to Perforce customers. Maybe Yes Yes Yes Yes Yes Yes Yes
86 The project is responsive to customer queries. Maybe Yes Yes Yes Yes Yes Yes Yes
87 The project supports customers trying to integrate with supported defect trackers. Yes Yes Yes Yes Yes Yes No Yes
88 The project supports customers trying to integrate with other defect trackers using the kit. Maybe Yes Yes Yes Yes Yes No Yes
89 The project develops goodwill from defect tracking vendors toward Perforce. Maybe Yes Yes Yes Yes Yes No No
90 The project develops goodwill from supported defect tracking system vendors. Maybe Yes Yes Yes Yes Yes No No
91 The project develops goodwill from other defect tracking vendors. Maybe Maybe Yes Yes Yes Yes No No
92 Testers are able to find out what exactly has changed for a release or because of an issue for white box testing. Yes Yes Yes Yes Yes Yes Yes No
93 Testers are able to manage test suite stuff and changes to that. Yes Yes Yes Yes Yes Yes Yes Yes
94 Testers are able to associate regression test sources and stuff with issues. Yes Yes Yes Yes Yes Yes Yes Yes
95 Customers are able to migrate from their existing defect tracking systems to a Perforce integrated defect tracker. Maybe Maybe Yes Yes Yes Yes Yes Yes
96 The integration copes with multiple Perforce servers. Maybe Yes Yes Yes Yes Yes No Yes
97 The integration copes with multiple defect tracking systems. Maybe Maybe Yes Yes Yes No No Yes
98 The integration copes with organizations that have a many-to-many relationship between their Perforce servers and defect tracking systems. Maybe Maybe Maybe Yes Yes No No Yes
99 Set of Perforce server platforms supported by the integration. >= { Windows NT } >= { Windows NT, Solaris } >= { Windows NT, Solaris, Linux } >= all server platforms supported by Perforce.        
100 Set of defect tracking system platforms supported by the integration. >= { Windows NT } >= { Windows NT, Solaris } >= { Windows NT, Solaris, Linux } >= { Windows NT, Solaris, Linux, Windows 2000, AIX, HP-UX }        
101 Set of defect tracking database platforms supported by the integration. >= { Windows NT } >= { Windows NT, Solaris } >= { Windows NT, Solaris, Linux } >= { Windows NT, Solaris, Linux }        
102 Set of defect tracking databases supported by the integration. >= { Microsoft SQL Server, Oracle } >= { Microsoft SQL Server, Oracle } >= { Microsoft SQL Server, Oracle, MySQL } >= { Microsoft SQL Server, Oracle, MySQL, Microsoft Access, Sybase, Informix, DB2, dBase IV }        
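
The Yes/Maybe/No assessments in the summary table lend themselves to a simple tally per architecture. The Python sketch below shows one way to do it; the scoring convention (Yes = 1, Maybe = 0.5, No = 0), the assumed column order (replication, single-database, union, tracker-client), and the four sample rows are illustrative assumptions, not part of the impact estimation method of [Gilb 1988].

```python
# Tally how each architecture scores across a few summary-table rows.
# The Yes/Maybe/No -> number mapping is an illustrative convention only.

SCORE = {"Yes": 1.0, "Maybe": 0.5, "No": 0.0}
ARCHITECTURES = ("replication", "single-database", "union", "tracker-client")

# A small subset of summary-table rows: requirement number -> one
# assessment per architecture, in the order given above.
ROWS = {
    41: ("Yes", "Yes", "Yes", "No"),
    82: ("Yes", "No", "No", "No"),
    96: ("Yes", "Yes", "No", "Yes"),
    97: ("Yes", "No", "No", "Yes"),
}

def tally(rows):
    """Sum the scores for each architecture over the given rows."""
    totals = dict.fromkeys(ARCHITECTURES, 0.0)
    for assessments in rows.values():
        for arch, value in zip(ARCHITECTURES, assessments):
            totals[arch] += SCORE[value]
    return totals

print(tally(ROWS))
```

A real comparison would of course weight requirements by importance rather than count them equally.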

4. General conclusions

This section collects the key issues for and against the architectures.

4.1. Replication architecture

Does not enforce consistency between the defect tracker and Perforce. This means that organizations using the integration in combination with undisciplined development will suffer from inconsistencies which need human attention to resolve (2.1.1).

Is decoupled from both the defect tracking system and the Perforce server (in the sense that it uses public interfaces to both). This means that it is cheaper to maintain and cheaper to adapt than either the single-database or union architecture (2.21.1, 2.29.1, 2.35.1).

Can use open, interpreted and portable languages for easy customization (2.24.1).

Does not need very much of Chris Seiwald's time (2.76.1).

The Perforce server is changed: a relation between jobs and filespecs is added, a field is added to the fixes relation indicating the meaning of the fix, the Perforce API is extended to support these additions, and post-change triggers are added to all operations that change the Perforce database (2.76.1).
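
The replication of fix records described above can be sketched as follows. Every interface in this sketch (Fix, FakePerforce, FakeTracker) is a hypothetical stand-in invented for illustration, not a real Perforce or defect tracker API; it only shows the shape of the work a post-change trigger would do.

```python
# Sketch of a post-change trigger under the replication architecture:
# after a changelist is submitted, copy any new fix records into the
# defect tracker's database. All classes here are illustrative stubs.

from dataclasses import dataclass

@dataclass
class Fix:
    job: str      # Perforce job name, doubling as the tracker's issue id
    effect: str   # the proposed "meaning of the fix" field

class FakePerforce:
    """Stand-in for the Perforce server's fixes relation."""
    def __init__(self, fixes_by_change):
        self._fixes = fixes_by_change        # changelist -> [Fix, ...]
    def get_fixes(self, changelist):
        return self._fixes.get(changelist, [])

class FakeTracker:
    """Stand-in for the defect tracker's database interface."""
    def __init__(self):
        self.fixes = []
    def record_fix(self, issue, changelist, effect):
        self.fixes.append((issue, changelist, effect))

def on_change_submitted(changelist, perforce, tracker):
    """The trigger body: replicate each fix record to the tracker."""
    for fix in perforce.get_fixes(changelist):
        tracker.record_fix(fix.job, changelist, fix.effect)

perforce = FakePerforce({501: [Fix("job000012", "closes")]})
tracker = FakeTracker()
on_change_submitted(501, perforce, tracker)
print(tracker.fixes)
```

The one-way direction of the sketch reflects the point of the replication architecture: the trigger only appends records, and conflict resolution happens out of band.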

Co-operation is needed from defect tracking vendors to make sure that their tracker-client operations (as replicated back to the defect tracker's database) don't conflict with their own changes to the database (2.1.1).

4.2. Single-database architecture

Requires major changes to the Perforce server and hence too much of Chris Seiwald's time (2.76.2). Consequently it is costly to maintain and support (2.35.2).

Installation and configuration will be tricky because the modifications required to the Perforce server differ between platforms (2.63.2).

It's risky for a company that is already using Perforce jobs to migrate to using the integration, because the jobs need to be copied to the defect tracker's database and it may be hard to get them back if the integration is ever uninstalled (2.64.2).

Hard to implement using open and easily configurable technology - for performance, and because it interfaces directly with the Perforce server, it is in C or C++ (2.82.2). Therefore it is hard for others to adapt (2.21.2).

Can't cope with organizations that use multiple defect tracking systems (2.97.2).

4.3. Union architecture

Expensive and risky to develop (2.76.3). Requires substantial database expertise to solve the problems of distributed transactions (2.1.3). The distributed transactions will impose a considerable performance cost on the whole system (2.1.3).
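
The source of that performance cost can be seen in a minimal two-phase commit sketch, which is the kind of protocol the unifier would need for every update spanning the Perforce database and the defect tracker's database. The Participant class is a generic illustration, not any real database API.

```python
# Minimal two-phase commit: every cross-database update pays an extra
# prepare round trip before it can commit, and any "no" vote forces a
# rollback everywhere. Participant is an illustrative stub.

class Participant:
    def __init__(self, name, will_prepare=True):
        self.name = name
        self.will_prepare = will_prepare
        self.state = "idle"
    def prepare(self):                  # phase 1: vote on the transaction
        self.state = "prepared" if self.will_prepare else "aborted"
        return self.will_prepare
    def commit(self):                   # phase 2: make the change durable
        self.state = "committed"
    def rollback(self):
        self.state = "aborted"

def two_phase_commit(participants):
    """Commit only if every participant votes yes; otherwise roll back."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False

perforce_db = Participant("perforce")
tracker_db = Participant("tracker", will_prepare=False)
print(two_phase_commit([perforce_db, tracker_db]))  # False: one vote fails
```

Even this toy version doubles the number of round trips per update, which is where the whole-system performance cost comes from.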

Expensive and risky to port to each new database (2.21.3).

Because of the complexity of the software, the integration will be costly to maintain and support (2.35.3).

Hard to install and administer, because it interferes with the normal operation of company database applications, which connect to the unifier rather than directly to the database, and because it interferes with the normal operation of the Perforce server, for which a new version or configuration is installed (2.63.3, 2.64.3).

Hard to implement using open and easily configurable technology - for performance it is in C or C++ (2.82.3). Therefore it is hard for others to adapt (2.21.3).

Can't cope with organizations using multiple Perforce servers (2.96.3) or multiple defect tracking systems (2.97.3).

4.4. Tracker-client architecture

The tracker-client architecture does not support defect resolution using Perforce's interface (2.41.4), so it is combined with one of the other architectures for the project to succeed.

The Perforce client API is well documented so that defect tracking vendors and others are able to implement interfaces to Perforce (2.21.4).

Defect tracking vendors need to develop an interface to Perforce changelists in order for developers to be able to record the reason for their changes (2.39.4, 2.44.4).

Defect tracking vendors need to develop an interface that allows developers to branch and merge files associated with a task (2.53.4).

Defect tracking vendors need to develop an interface allowing an analyst to record the nature of the association between a file and a task (2.55.4).

A. References

[Bannister] "CVS and the Microsoft Source Code Control interface"; Preston L Bannister; <URL: http://members.home.net/preston/cvsscc.html>.
[Brooks 1995] "The mythical man-month: essays on software engineering"; Frederick P Brooks, Jr; Addison-Wesley; 1995; ISBN 0-201-83595-9.
[RB 2000-02-02] "Steps to defect tracking"; Richard Brooksby; Ravenbrook Limited; 2000-02-02.
[RB 2000-05-05] "Requirements for defect tracking integration"; Richard Brooksby; Ravenbrook Limited; 2000-05-05.
[RB 2000-06-08] "Platform analysis of proposed architectures"; Richard Brooksby; Ravenbrook Limited; 2000-06-08.
[RB 2000-06-22] "Architecture analysis" (written notes); Richard Brooksby; Ravenbrook Limited; 2000-06-22.
[Gilb 1988] "Principles of software engineering management"; Tom Gilb; Addison-Wesley; 1988; ISBN 0-201-19246-2.
[Perforce TN 34] "Windows development tools and Perforce's SCC API"; Perforce Technical Note 34; <URL: http://www.perforce.com/perforce/technotes/note034.html>.
[GDR 2000-05-08] "Architecture proposals for defect tracking integration"; Gareth Rees; Ravenbrook Limited; 2000-05-08.
[GDR 2000-05-11] "Defect tracking project meeting, 2000-05-11"; Gareth Rees; Ravenbrook Limited; 2000-05-11.
[GDR 2000-05-24] "Project requirements"; Gareth Rees; Ravenbrook Limited; 2000-05-24.
[GDR 2000-06-30] "Defect tracking API description and analysis"; Gareth Rees; Ravenbrook Limited; 2000-06-30.
[Remedy 4.5 FAQ] "Action Request System v4.5 frequently asked questions"; Remedy Corp; <URL: http://www.remedy.com/solutions/core/arsystem_faq45.htm>.
[Shaw 2000-06-05] "RE: Perforce defect tracking integration: need TeamTrack C++ API" (e-mail message); Kelly Shaw; TeamShare; 2000-06-06 17:21:14 GMT.

B. Document History

2000-05-30 GDR Created based on [GDR 2000-05-24].
2000-06-01 GDR Added tracker-client architecture. Referenced [RB 2000-05-05]. Analysed requirement 21. Analysed requirement 28. Clarified requirements 39 and 43 for consistency with [GDR 2000-05-24]. Analysed requirements 5, 8, 10, 11, 14, 17, 18, 39, 42, 43, 85 and 86. Analysed requirement 1.
2000-06-02 GDR Added information on Remedy supporting the Microsoft SCC API.
2000-06-05 GDR Analysed requirements 37, 76, 77, 78. Added [RB 2000-02-02] to references.
2000-06-06 GDR More analysis for requirements 1, 6, 21. Added [Brooks 1995] to references.
2000-06-07 GDR More analysis for requirements 21, 76, 77 and 78. Added important requirements to introduction.
2000-06-08 GDR Fixed analysis of learning effort for requirement 21 and propagated change to requirements 77 and 78. Added analysis of union architecture based on platform analysis [RB 2000-06-08].
2000-06-09 GDR Converted document from table to text with headings. Added summary table.
2000-06-30 GDR Made references link directly to target, rather than the references section.
2000-07-04 GDR Reformulated summary table so that it has measure and target values for each requirement, following [GDR 2000-05-24]. Added requirements 96-102. Added comments written by RB on 2000-06-22.
2000-07-05 GDR Added conclusions for union and single-database architectures based on [RB 2000-06-22]. Wrote some conclusions for replication and tracker-client architectures. Updated platform requirements to correspond to [GDR 2000-05-24].
2000-07-06 GDR Added purpose to introduction. Transferred some analysis from the summary table (section 3) to the main text (section 2), leaving "Yes" or "No" in the summary table as appropriate. Added some additional analysis. Covered requirements 1, 3-26, 28-29, 36-59, 61-65, 76-80.
2000-07-07 GDR As 2000-07-06, covered requirements 66-68, 71-73, 81-83.
2000-07-09 GDR As 2000-07-06, covered requirements 27, 30-35, 74-75, 84-86, 92-95. Reformatted analysis into a table to reduce space devoted to repetitive headings.
2000-07-10 GDR Tidied XHTML source. Cells in the summary table are now links to the analysis they summarize. Removed or replaced broken internal links. Improved phrasing of analysis to avoid modal verbs where possible. Covered requirements 2, 60, 69-70, 87-91, 96-102. Added paragraph to introduction describing impact estimation method. Added spacer rows in main table.

Copyright © 2000 Ravenbrook Limited. This document is provided "as is", without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this document. You may make and distribute verbatim copies of this document provided that you do not charge a fee for this document or for its distribution.

$Id: //info.ravenbrook.com/project/p4dti/doc/2000-05-30/arch-analysis/index.html#33 $
