Advanced OpenSource Design Management for 4.4

Shiv Sikand, Silicon Graphics, Inc. (International Cadence User Group Conference, September 10-13, 2000, San Jose, California)

Abstract

This paper describes an ultra high performance OpenSource Software Configuration Management (SCM) system for Cadence DFII 4.4 based on the Perforce Fast SCM System.

Introduction

SCM techniques are well defined for software development efforts. Hardware design has a different set of requirements mainly due to the complex mix of both ASCII and binary data that eventually defines the physical mask layout of an integrated circuit.

SCM is not to be confused with revision control. Revision control is an important part of SCM, but only one piece of the overall puzzle. An intuitive configuration and release scheme is missing from many of the available tools, particularly those in the DFII space. The range of skills required to build a chip is very wide and many different tools are required to complete the design. There is also a wide range of users, in each skill set, ranging from highly experienced to novice. In order for an SCM system to be successful in such an environment, it needs to be robust, high performance and scalable, and to have low maintenance requirements. This paper describes such a system developed at SGI and now available as OpenSource, which overcomes the feature and performance limitations of all currently available DFII integrations.

Historical Background

SGI has been wrestling with introducing SCM to the hardware design activity to match the processes used in the IRIX Operating System and Advanced Graphics software divisions. These processes have resulted in an efficient, highly productive and streamlined product design and delivery platform for large software systems.

In the chip design space, we started investigating the use of TDM-LD for 4.3 with Spectrum Services after finding the native 4.3 system to be quite limiting. We then tested a pure TDM environment for 4.4. The biggest obstacles to successful TDM deployment were its poor speed and its complexity. In addition, our goal was to have a common system for both the software and hardware aspects of the chip process. The RTL, CAD and system software teams were extremely reluctant to use it since it did not meet their targets in terms of functionality and performance. We needed a solution that would allow us to manage all aspects of the hardware design flow, not just the layout and schematic data.

Tool Requirements

Our first goal was to find a tool that would meet our requirements. A plethora of tools are available which make a large number of claims as to their capabilities, but in practice many of these tools are fundamentally flawed both in architecture and execution.

The specifics of the Cadence database, namely co-managed sets of files, were an interesting issue, since we wanted a solution that would not hamper performance. We observed that all existing design management (DM) implementations for DFII use an additional file for each co-managed set to express the version dependencies. This almost always incurs a large performance hit and prevents easy manipulation of the CDB data.

The global requirements were as follows:

  1. A common tool for all files in the hardware design process. This was a particularly important requirement so that we could have a consistent methodology across Software, RTL, CAD, Verification and Physical Design.
  2. High performance and scalability, since the repository sizes were expected to approach many hundreds of gigabytes with over a hundred users.
  3. A powerful and flexible release and configuration management architecture.
  4. Support for geographically distributed design teams.
  5. A strong procedural interface and/or API for tool integration.
  6. Low cost (capital and recurring) and low maintenance.

Requirement 1 was the hardest to fulfill due to the political considerations involved, since it touched such a large group of people. However, we felt this was very important (and the approach has since been vindicated) and a lot of effort was put into coordinating the individual team requirements.

Within SGI, a number of tools were already in use.

  • Ptools, an in-house RCS derivative
  • ClearCase
  • CVS/RCS
  • Perforce
  • TDM

Each of these tools had its own champions and we needed to find common ground. Ptools was easy to reject since it had no support for binary data. ClearCase was attractive from a feature-set point of view but had proved to be a major problem at SGI: the performance was very poor, and requiring custom kernels for every OS release was a big issue for machine administration. We experienced corruption with the VOBs on numerous occasions, had a full-time administrator solely for ClearCase, and were anxious to find a better solution. CVS was extremely popular and well liked, but it fundamentally lacked SCM features for release and configuration management, which were added on by each group according to its needs. Most of these solutions were symbolic name or “tag” based and were unwieldy and inconsistent.

We were already unhappy with TDM for DFII use and thus did not consider expanding its use outside of DFII.

Perforce was originally brought into SGI because of its ability to run on both NT and IRIX, as well as its remote depot capability to support distributed teams. It had a loyal following and the number of licenses in use was growing on a monthly basis.

In addition, we considered

  • Spectrum’s CRCS
  • Synchronicity
  • A custom integration by Spectrum Services

CRCS was rejected because it was just RCS, as the name suggests. We were impressed by the marketing claims of Synchronicity but were not able to find many people willing to recommend it to us, mainly due to its extreme slowness, poor scalability and lack of a cohesive release and configuration scheme. A custom integration was instantly rejected based on a very large six-figure quote.

The field was now down to Perforce or a choice of some 30 other available solutions including Continuus, Razor, PVCS, StarTeam and SourceIntegrity.

We chose Perforce over these solutions for a number of reasons:

  • Unmatched performance and scalability.
  • Atomic transactions. This feature guarantees that a group of files submitted together stays together, and that this information is persistent, unlike ClearCase groups.
  • InterFile Branching. A very powerful feature for handling configurations.
  • Change-based client/server architecture. Every transaction in the system is assigned a unique change number, making it very easy to track changes throughout the entire tree. Since state is stored centrally in the server and not in the workspaces, data recovery, mining and reporting tasks are fast and easy to perform.
  • Remote depots for geographically distributed teams.
  • Appears to run on every single hardware platform known to man.
  • Low cost, including a free API, and demonstrated low maintenance.
  • A very positive and growing user base, both internal and external to SGI.

The Perforce Advantage

InterFile Branching (IFB) is the cornerstone of this implementation and the reader is directed to [1] for a more detailed technical explanation.

But first, why branch at all? [3][4]

The answer is quite simple. In order to create workspaces that are controlled by the owner of the workspace and not by the tyranny of the tree, one must branch to create a specific configuration that represents some given state. I use the term tyranny since in a single codeline model where there is just a bunch of versions of a large collection of files, the workspace is a slave to its state of synchronization with that codeline. It can either be in sync, or not. It can easily go into a sync state but will often struggle to return to some previous known state. Most SCCS derivatives (RCS, CVS, RCE) are archivers with such a primitive branching mechanism that they are difficult to use. Instead, symbolic names or “tags” are used to mark collections of files in order to represent state.

Tags are generally very awkward for a number of reasons:

  • They are not incremental, so data must be continually re-tagged.
  • It is difficult to visualize, or get reporting information on, the difference between two or more sets of tags.
  • They proliferate and make tree management more complex than it needs to be.
  • Tagging is generally a slow operation in most databases, both for the creation and recovery of file sets.
  • It is an explicit operation, i.e. if you didn’t tag something, it is next to impossible to return to that state.
  • In DFII (or any other database), traversing the hierarchy using Skill or ITK (or any other internal tool) to build a tag set is an inefficient way to map data sets and requires the database to be active in order to make a configuration.

InterFile Branching, coupled with Atomic Transactions and a Change-Based architecture, eliminates the need for the majority of tagging operations. Specific state is always recoverable, since every change has a transaction associated with it, and the Perforce relational database allows you to recover sets of files to any point in time. This is the essence of well-implemented configuration management from a tool perspective.
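For illustration, point-in-time recovery is a one-line operation at the Perforce command line; the depot path here is hypothetical, and no tags are involved:

    # Sync a workspace to the exact depot state as of change 1234
    p4 sync //depot/chip/...@1234

    # Or recover the state as it existed on a given date
    p4 sync //depot/chip/...@2000/09/01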

Prior to the introduction of commercial tools to the data management problem, most companies used some combination of tar, copy or move, which we call TCM. This usually requires the notion of a ‘freeze’ followed by a copying or archiving activity. Custom-built TCM systems are typically highly effective, but are expensive to maintain and run.

Perforce begins with this natural model of copying files and renaming them, and finishes it with a collection of techniques that make the model usable. Apart from the fundamental difference in the way files are renamed (which is part of the key to the branching technology), there are some key features:

  • Virtual copying

In practice, if branching a file means copying it in the repository then storage space can be a concern. A simple antidote is to perform a virtual copy, where the newly branched file makes use of the contents of the original file. This requires a level of indirection between the repository name space and the actual underlying object store. When the branch is extended by adding a new revision, the branched file can acquire its own separate entity in the object store.

Supporting variants with virtual copies reduces the space requirement of a variant to merely a record for the newly created variant that points to the original file. This is, more or less, no greater than the cost of a traditional variant.

This virtual copy can also be used when one variant is explicitly synchronized with another. If the user wishes to fold a branch back into a trunk and make the two identical, both can reference the same content at that point.
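At the command line, branching an entire tree is therefore cheap; a sketch, with hypothetical depot paths:

    # Branch the mainline into a user branch; the branched files are
    # virtual copies and share storage with the originals until edited
    p4 integrate //depot/main/... //depot/users/alice/...
    p4 submit    # one atomic change records the entire branch point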

  • Integration history

The change from virtual copy to separate entity is tracked internally, providing a complete audit trail and comprehensive reporting. Through transitive closure, it is possible to compute, for any revision of any file, whatever deltas it incorporates from other files through first, second, third, etc. generation merges or replaces.
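A hedged example of querying that audit trail (the file path is hypothetical):

    # Follow a file's revision history across branch and merge boundaries
    p4 filelog -i //depot/main/chiplib/alu/layout/layout.cdb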

  • Atomic transactions

They allow sets of files to be tracked together. The key to a good SCM system is its ability to track change packages and the ease by which these change packages can be propagated. Even though each file in a codeline has its revision history, each revision in its history is only useful in the context of a set of related files. The question “What other source files were changed along with this particular change to foo.c?” can’t be answered unless you track change packages, or sets of files related by a logical change. Change packages, not individual file changes, are the visible manifestation of software (and by extension, hardware) development.
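For instance, the question above can be answered in two steps (file name and change number hypothetical):

    # Find the change that carried this revision of foo.c...
    p4 filelog foo.c

    # ...then list every file submitted in that same atomic change
    p4 describe -s 1234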

Perforce and CAD

Since much of the CAD data is binary, we chose a branching model that differs from pure software development, namely branch and replace instead of branch and merge. However, it is important to understand that while branch and replace is the underlying configuration model for chip development, it does not exclude or limit branch and merge in any way. In fact, our software and RTL teams use branch and merge within their own branches, but a branch and replace model is used to manage the main codeline or mainline.
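In Perforce terms, branch and replace is simply an integration resolved with “accept theirs”; a sketch, assuming a branch spec named mybranch:

    # Propagate mainline changes into a branch, replacing binary CDB
    # data wholesale rather than attempting a merge
    p4 integrate -b mybranch
    p4 resolve -at    # accept theirs: replace, do not merge
    p4 submit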

In previous design flows, we observed that the copying of frozen data to create snapshots was a major requirement, but that these snapshots became stale very quickly, particularly in the early stages of a project. New data was introduced at a rapid rate, which invalidated the current snapshot. As we moved closer and closer to tapeout, snapshots tended to be taken less frequently and change was continually feared, since small perturbations could very easily result in design corruption. The chip managers were reluctant to allow the introduction of new data while the engineers felt it was imperative. In the end, a balance was usually struck, but this was a time-consuming process since it involved continuous face-to-face discussions and the manual tracking of a large number of file dependencies.

Our vision for the snapshot data was to use IFB as the underlying method for replicating managed data in the system.

The integration history automatically allows one to see the pending changes to a snapshot, i.e. only the files that are different in a branch need be integrated back to the mainline, and vice versa.

This allows a very powerful, incremental approach to managing configuration data.
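The preview mode of integrate exposes exactly this incremental set; a sketch, again assuming a branch spec named mybranch:

    # Preview (-n) pending changes: only files that actually differ are listed
    p4 integrate -n -b mybranch       # forward: mainline into the branch
    p4 integrate -n -r -b mybranch    # reverse: branch back into the mainline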

The transaction-based nature of branch updates allows transactions in either direction to be easily undone. The importance of this cannot be overstated. In a traditional mainline model, the users have some part of the design tree in their own workspace. Once the mainline is updated and the workspace synchronized, the user typically has no way of going back to the state that existed prior to the synchronization. Some form of release management can be used to mitigate this to some extent, but the granularity of such a system is usually very coarse. It does not allow the fine control of a workspace that users typically need to remain productive in the face of mass perturbations introduced from the mainline.

By using IFB to create the workspace, all transactions that update the branch are part of the branch history. This allows total control of the workspace by the user. The synchronization with the mainline is now controlled and updates can be undone at any time to preserve the integrity of the branch.

The user can make edits to files and perform intermediate checkins as a checkpoint mechanism. These edits are essentially invisible to the mainline, but allow a detailed version history to be maintained in the branch. This is typically impossible in a mainline model, since the checkin shows up in the mainline as soon as it is complete. When the user is happy with a certain set of edits, they can run some verification procedures and then integrate the set of changes in their branch back into the mainline as an atomic transaction. The mainline changes then appear as sets of complete data, rather than a random set of changes to arbitrary files, which is a very elegant alternative to continual file tagging.
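A sketch of that round trip, with a hypothetical branch spec name:

    # Checkpoint freely in the private branch...
    p4 submit    # intermediate checkin, invisible to the mainline

    # ...then, after verification, publish the net result to the
    # mainline as a single atomic change package
    p4 integrate -r -b mybranch
    p4 resolve -at
    p4 submit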

An integration policy prevents stale data from being reverse integrated into the mainline. Simply stated, all pending changes into a branch must be forward integrated before any changes can be reverse integrated. However, the policy is site specific and not a tool requirement. Again, since the integration record tracks dependencies, the list of files in either direction is generated automatically, allowing the user to analyze and review the impact of all changes before committing them.
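Such a policy can be enforced with a thin wrapper around the reverse integration; a minimal sketch, assuming a branch spec named mybranch whose default direction is mainline-to-branch, and that an up-to-date preview prints its "already integrated" message to stderr:

    # Refuse reverse integration while forward integrations are pending
    pending=$(p4 integrate -n -b mybranch 2>/dev/null)
    if [ -n "$pending" ]; then
        echo "branch is out of date: forward integrate first" >&2
        exit 1
    fi
    p4 integrate -r -b mybranch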

Design Framework II

Integration

In 4.4.1, prior to the release of GDM, the customer integration method was through the use of Skill triggers. The Skill trigger model is an excellent fit with the Perforce edit/submit model [2]. We evaluated GDM with the 4.4.2 release and decided to continue to use the Skill triggers to perform our integration, since GDM lacked an interactive interface as well as an error recovery mechanism. The only catch with the Skill integration was that tools that do not run Skill would be excluded (e.g. pipo, nino). However, since the majority of our use was Virtuoso and Composer, we decided to press ahead and later build a lightweight GDM implementation for the non-Skill tools.

The interface treats all Cadence checkins and checkouts as atomic transactions. This guarantees the correctness of the co-managed sets at all times and gives us the powerful feature set described [5].

An additional issue was the Library Manager. We wanted to have access to the versioned system in a similar way to 4.3 and there was very little customization possible with the existing tool, so we decided to build our own browser. These two decisions were key to decoupling us from requiring new releases related to GDM issues and allowing us to build a very high performance interface.

We used the Perforce API to build a custom client that could be run as an IPC process. The new (to 4.4) ipcProcess interface was significantly faster than the hiProcess interface in 4.3 and using the ‘fisrvDoesOp’ feature of GDM 2.0, we were able to avoid any unnecessary forks to execute Perforce commands.

The Skill triggers that we used are as follows:

PostCreateLib, PreDeleteObj, PostDeleteObj, PostCreateObj, PreCheckout, PostCheckin and ddRegUserTrigger.

The file information server that is required by GDM is essentially a specially crafted NULL daemon and thus has virtually no impact on the overall performance of the system.

Our only major gripe with the DFII DM architecture is that both ddCheckin and ddCheckout use filesystem read/write permissions to set their actions. This is also true in a pure GDM system, and we would prefer two additional triggers or some other mechanism that could be used to query the file status rather than relying on a permission bit. Additional code had to be added to the user trigger to prevent loss of data when permission bits were accidentally toggled.

Integration Features

  • A new library browser (using Skill list boxes) allowing a central point for all DM functions.
  • A file synchronization interface that allows free movement in revision space of a library file, category or cellview, as many times backward or forward as the user may require.
  • A library synchronization interface that allows the entire, or selective, contents of a library to be restored to any point in time or change number space, as many times backward or forward as the user may require.
  • The ability to create unlimited, unmanaged versions of the same cell for simultaneous viewing purposes.
  • An Integration Wizard that allows change package propagation to be handled automatically, both in the forward and reverse direction, with full user control.
  • Full support for all auto-checkin and auto-checkout functions as well as the Skill ddCheckin/ddCheckout interface.
  • A library/branch-specific change number browser.
  • Show checkouts, show versions, checkin, checkout and cancel checkout functionality bound directly to the browser, similar to the 4.3 user interface.
  • Interactive operation of all features, allowing for error trapping, reporting and recovery in both hiGraphicMode and nograph mode.
  • Ultra-high performance: no waiting for operations to complete, unlike the commercially available solutions. Most operations complete in centiseconds, including version recovery and branch synchronization.
  • Fast, lightweight utilities for CDB files, e.g. showcheckouts, buildbranch, etc. These can be run from a simple low bandwidth terminal. The average reporting time for our showcheckouts Perl script in our environment, with typically 400-500 checkouts distributed among 50+ users, is 6 seconds (see the sketch after this list).
  • CDB data appears as any other data in the system and can thus be manipulated just like any other file using native Perforce commands.
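Because CDB data is ordinary Perforce data, such utilities need no special framework; a stand-in for the core of the showcheckouts script is essentially one command (the depot path is hypothetical):

    # List every checked-out file across all client workspaces
    p4 opened -a //depot/chiplib/...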

GUI screenshots

This is an image of the main Library Browser interface.

Multiple objects can be selected to have the same operation performed on them.

The text field at the top either shows the name of the library client or reports “Not Revisioned”.

The browser is split into four sections, one each for available libraries, categories, cells and cellviews.

[Screenshot 1: main Library Browser interface]

This image shows the Library Commands Pulldown.

The most interesting ones for daily use are Integration Wizard, Show Changes and Show Checkouts

[Screenshot 2: Library Commands pulldown]

The Category Commands pulldown is straightforward and allows simple category management.

[Screenshot 3: Category Commands pulldown]

The Cell Commands pulldown is also fairly intuitive. The His button brings up a history of opened cells and allows the user to open them in edit or read mode.

The Find feature allows you to search using a regular expression. It highlights all matches and also reports them in the CIW.

The Move function allows the user to move cells into an existing or new category, creating them as required.

The Options allow you to view the list sorted alphanumerically or by creation date, which is a useful feature.

The lbbUserFunc allows a user specified routine to be run on the selected objects.

[Screenshot 4: Cell Commands pulldown]

The Cellview commands allow you to Checkout or Cancel Checkout.

The default double click action can be user specified to be Edit or Read, or the appropriate action can be chosen from the pulldown.

The Show Versions form is activated from this list of commands.

[Screenshot 5: Cellview Commands pulldown]

[Screenshot 6]

The Show Versions form displays the available versions and also reports their integration history. In this example, the cell was branched from the mainline at Version 1 and then went through 3 subsequent checkins in the branch.

If the cell had been released as part of a change package back into the mainline, an entry below the appropriate version would indicate this event.

The Revert Version function allows you to select any available version and make that the current edit. This can be done as many times as one requires.

The Build Version Copy function allows you to ‘clone’ a version; it typically creates an unmanaged copy of the cell with a suffix to identify it.

[Screenshot 7: Show Versions form]

The Show Checkouts form displays all the checkouts and the user name. The All Clients radio button allows the viewing of checkouts in multiple branches.

The Preserve Edit mode re-checks out any cells that may be open in the current session so that the user's environment is not disturbed. The Preserve Read mode preserves the window environment but leaves the checked-in cells in Read mode.

[Screenshot 8: Show Checkouts form]

The Show Changes form is a change number browser for either the selected library or all libraries in the branch. The Re-synchronize feature allows the user to sync the state of the entire library to the time of the selected change number.

The Describe change radio brings up the Perforce change description in a text window.

[Screenshot 9: Show Changes form]

The Perforce change description for a selected change. This change actually shows a large set of files that make up a reverse integration set.

[Screenshot 10: Perforce change description]

The Update Changes form is also known as the Integration Wizard. The user can select the direction of integration and either view the pending changes or commit them to their branch, on a global or individual library basis.

This is a very powerful mechanism for analyzing and tracking the potential impact to your workspace.

This example shows a branch that is considerably out of date, but can nonetheless be updated to the mainline state simply and expeditiously.

[Screenshot 11: Update Changes form (Integration Wizard)]

In “Update the MAINLINE” mode, the Integration Wizard shows the pending differences between the user's branch and the mainline. Branch/sync indicates newly created objects in the branch that don’t yet exist in the parent.

Integrate indicates a new revision of an object. Looking at the example of ‘TR slat layout’, version 1 was the initial branch, versions 2 and 3 were intermediate checkins, and version 4 is the potential replace target in the mainline.

If the integrate option were selected, all these objects would form an atomic transaction and generate a single change number.

Performance

The infrastructure consists of a few key components; Table 1 shows the relatively modest line counts required to achieve the integrated SCM goal.

Table 1: Low complexity [5]

[Table 1 image]

While many new features continue to evolve, the overall time spent to provide core functionality was less than 6 man-months. The system now represents over 2 man-years of development and is fairly stable and virtually bug free.

Table 2: Data sizes [5]

[Table 2 image]

Table 3: Typical Branch data

[Table 3 image]

Table 4: Performance on big queries. All times are in seconds. [5]

[Table 4 image]

The first entry is a request to Perforce to display the entire list of files in the depot. The second is a simple syntax that asks for the names of all files ending in .v and pipes the result to a line counter. The third entry is a request for the entire change number list, piped to a file, since screen update time on a terminal masks the true operation speed of the command.
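In command form, the three queries correspond roughly to the following (a sketch; the output file name is assumed):

    # 1. Display the entire list of files in the depot
    p4 files //...

    # 2. Names of all files ending in .v, piped to a line counter
    p4 files //....v | wc -l

    # 3. The entire change number list, piped to a file so that terminal
    #    redraw does not mask the true speed of the command
    p4 changes > changes.out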

Conclusion

The underlying architecture and design of Perforce is clearly suited to large databases, and its rich feature set makes powerful SCM techniques easily available for hardware design. The overall performance of the DM system is orders of magnitude higher than that of other commercially available tools. The Skill portion of the interface is provided in source format under the BSD license. The GDM components are Cadence proprietary and are supplied as binary objects, but since the implementation is decoupled from GDM, the entire functionality is contained in Skill. The integration is available online at http://sourceforge.net/project/?group_id=3799.

References

[1] Christopher Seiwald, InterFile Branching, Sixth International Conference on Software Configuration Management, Berlin, 1996

[2] Shiv Sikand, Cadence-Perforce Integration, Perforce ’98 User Conference, Oakland, CA

[3] Stephen Vance, Advanced SCM Branching Strategies, Perforce ’98 User Conference, Oakland, CA

[4] Laura Wingerd and Christopher Seiwald, High-level Best Practices in Software Configuration Management, Eighth International Conference on Software Configuration Management, Brussels, July 1998

[5] Shiv Sikand, Integrated SCM for Hardware Design, Perforce ’99 User Conference, Berkeley, 1999
