Altera: IP-based Design & Verification Techniques

Yaron Kretchmer, Altera (edited DAC panel transcript)

I’m going to continue the thread about how you win the hardware design race, and talk about my four top technologies, how we implemented them, and to what extent.

  • The one issue we struggled with previously is that we had multiple data sources, multiple repositories, using multiple systems from different vendors. This caused difficulty in staging one consistent set of deliverables. Imagine that you have your full custom deliverables in one system, your SoC deliverables in another, and you’re interested in staging one complete set of collateral; that becomes difficult when you have multiple data sources. In addition we had a combination of legacy systems and systems that were introduced more recently.
  • What we transitioned into is one repository; we made a concerted effort to highlight the similarity of use models between IC Manage and Perforce. What this let us do is to version everything in one consistent way, and get to a point where we are able to stage one set of collateral that contains both our IP and our software. And for us at Altera, this enabled us to stage more quickly.
  • The other challenge we had is what you do with testbenches. Testbenches are an integral part of your IP development, but do you keep them in your data management system? Some people do, some don’t. Historically, we used to treat some testbenches separately from the way we would treat our IP, which made it difficult to share those testbenches along with our IP deliverables.

So what we moved into with IC Manage and Perforce is an ability to integrate our testbenches with our IP, which lets us leverage all the power of branching that comes with Perforce through IC Manage. This lets us stage multiple versions of the IP, multiple versions of testbenches and branches as we do our development.
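One way to keep IP and its testbenches as a single branching unit is a Perforce branch specification whose view maps both side by side. The sketch below is illustrative only; the depot paths and branch name are hypothetical, not Altera’s actual layout:

```
Branch: adc_ip_dev_to_rel
Description:
    Branch the ADC IP and its testbenches together, dev -> rel.
View:
    //depot/ip/adc/dev/...      //depot/ip/adc/rel/...
    //depot/ip/adc/tb/dev/...   //depot/ip/adc/tb/rel/...
```

With a spec like this, a single `p4 integrate -b adc_ip_dev_to_rel` carries the IP and its testbenches into the release branch in one operation, so the two can never drift apart.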

  • One obstacle we had internally is a fear of checking in large files. Our people had a bad experience with a legacy data management system where checking in a large file used to take a lot of time. One anecdote that I want to share is a Pixar presentation we saw at the Perforce conference. Pixar just checks everything into Perforce, with the biggest check-in being more than a terabyte.

We haven’t been that adventurous, but we have checked in fairly large databases coming from place and route, we’ve checked in a multitude of SPEF files, and IC Manage and Perforce just deal with it pretty much at wire-speed.

  • A third direction – I’m going to continue the theme that Nigel talked about – is a bi-directional link between the bug tracking system and data management system.

We know that a variety of engineers are forced to tape out designs which include bugs that they are unaware of. But I would also say that in some cases, when schedule trumps the ability to fix all your bugs, you need to release a device to production which has bugs that you are aware of. It’s not always the things you don’t know, it’s the things that you know. You can make a conscious decision to tape out even though there are existing bugs, if you know you can work around them. Having a bi-directional relationship between your data management and bug tracking lets you see exactly where your bugs are, so that later on, when you have a chance to fix them, you know exactly where in the data management system you need to go to do that.

Now one lesson that we learned is that it’s a good thing to choose your bug tracking system carefully. Some systems integrate out of the box with Perforce and IC Manage, some don’t, but with that little caveat, the integration is crucial and very well-supported.
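The bi-directional link described above can be sketched as a small two-way index between bug IDs and changelist numbers. This is a minimal illustrative model, not IC Manage’s or any bug tracker’s actual API; the class, bug IDs, and changelist numbers are all hypothetical:

```python
# Minimal sketch of a bi-directional link between bug tracking and data
# management: each fix changelist records the bugs it addresses, and each
# bug records the changelists that touch it. All names are hypothetical.

class BugDmLink:
    def __init__(self):
        self._bug_to_changes = {}   # bug id -> list of changelist numbers
        self._change_to_bugs = {}   # changelist number -> list of bug ids

    def record_fix(self, bug_id, changelist):
        """Register that `changelist` addresses `bug_id`, in both directions."""
        self._bug_to_changes.setdefault(bug_id, []).append(changelist)
        self._change_to_bugs.setdefault(changelist, []).append(bug_id)

    def changes_for_bug(self, bug_id):
        """Where in the data management history was this bug touched?"""
        return self._bug_to_changes.get(bug_id, [])

    def bugs_in_change(self, changelist):
        """Which known bugs does this changelist relate to?"""
        return self._change_to_bugs.get(changelist, [])


link = BugDmLink()
link.record_fix("BUG-101", 5123)   # known bug, consciously waived at tapeout
link.record_fix("BUG-101", 5240)   # later fix, in a new changelist
fix_history = link.changes_for_bug("BUG-101")   # -> [5123, 5240]
```

The point is simply that when you later get a chance to fix a waived bug, the link tells you exactly which revisions to start from.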

  • The last point I want to talk about is verification checklists. The system we put together at Altera is a checklist-based system: quality checks that are integrated into the data management system. What this lets you do is strike the balance that fits your company between being too loose, in the sense of releasing anything regardless of quality, and too strict, in terms of not being able to make any progress until everything is perfect.

We struck a balance where we have some basic checks which are mandatory, but designers can still execute a release if they get failures. And there is a human release manager whose role is to either let a release through or not.
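That balance can be sketched as a simple release gate: mandatory checks always run, but a failure escalates to a human release manager rather than blocking outright. The check names and deliverable fields below are hypothetical, purely to illustrate the shape of the policy:

```python
# Sketch of a checklist-based release gate. Mandatory checks run on every
# release attempt; failures do not block the release outright but are
# escalated to a human release manager. All check names are hypothetical.

def run_checks(deliverable, checks):
    """Run each named check against the deliverable; return names of failures."""
    return [name for name, check in checks.items() if not check(deliverable)]

def attempt_release(deliverable, checks, manager_approves):
    """Release cleanly if all checks pass; otherwise ask the release manager."""
    failures = run_checks(deliverable, checks)
    if not failures:
        return True, failures            # clean release, no escalation needed
    return manager_approves(failures), failures

# Hypothetical mandatory checks for an IP deliverable.
checks = {
    "lint_clean": lambda d: d.get("lint_errors", 0) == 0,
    "docs_present": lambda d: d.get("has_docs", False),
}

# A deliverable with lint failures: the manager decides whether it goes out.
ok, failed = attempt_release(
    {"lint_errors": 2, "has_docs": True},
    checks,
    manager_approves=lambda f: f == ["lint_clean"],  # waive lint-only failures
)
```

The design choice is that the checks produce information, while the go/no-go authority stays with a person.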

Since we had to integrate the system fairly quickly, we set goals in terms of reaching a common platform “top-down”. We ensured that the infrastructure was adequate and we invested a lot in the hardware needed to enable the different sites that we have to work successfully.

It took us about a month to release everything to a small group, and another month to deploy broadly. And we accelerated the adoption by keeping the deployment as close as possible to out-of-the-box usage. What this let us do is leverage the existing training, keep to the existing and well-documented data management command structure, and leverage the really great performance that you get from the system. So we kept the number of triggers and layers that we put on top of the system to a minimum so that we could keep it simple.

How do you measure success? In my view, success of a data management system equals “boredom”. So your goal is to fade into the background, to basically get out of the way so the people that do the real design can work unhindered.

So we staged our first production design. We got it to a point where it is used by 600 people in 3 major sites. So far, I just checked this morning, and it’s actually up to 13,000 workspaces – we do regression based on the data management system. So once we got to that point, where people were just using the system, that was our measure of success.

And that’s all I have, thank you.


Audience Question: Do you have any experience using behavioral versions of the design in conjunction with the more traditional versions?

We use behavioral IP at a variety of levels, starting at C and going to SystemC, SystemVerilog and plain Verilog. We really don’t see a major difference between behavioral IP and any other level of abstraction. You have a very robust development methodology.

In my mind at least, behavioral IP is pretty much software. So as far as data management systems go, there are very robust methodologies on how to deal with a data management system when you do software development. There are a lot of co-development opportunities, branching and merging. All of those, since IC Manage sits on top of Perforce, are readily available through IC Manage. We use them quite successfully.

The team that is doing the behavioral IP development and the testbench development is definitely the most advanced in our design community. So we have not run into a lot of issues with that. I would really be interested in understanding what issues we should have run into and didn’t.

Audience Question: How do you manage the CAD environment – scripting, tool versions, contours – around the versions of the design such that you don’t have to have a copy in each workspace, i.e. have a data explosion?

So what we’ve done is we have most EDA tools out there, and most revisions of those EDA tools, just installed. We created a set of configurations that are controlled by Perforce, and those are stageable on a per-workspace basis. So to answer your question, we capture the meta-data, essentially a signature of the set of tools that apply to a certain revision, and we capture that in the data management system. So we can go back and see what set of tools was used for that specific workspace.

It seems that the models are all applicable. The difference is whether you stage the ‘data’ of the tool, or if you stage the ‘meta-data’. So you can have the tool itself installed and reinstalled on a workspace basis, but this doesn’t scale. Or you can have a central area if you will, where all the tools are installed and you just capture the meta-data. The meta-data is now small enough that you can now revision it together with your workspace.
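The meta-data approach can be sketched as a small, deterministic signature of the tool configuration that is cheap enough to version with every workspace. The function and tool names below are hypothetical, just to illustrate the idea:

```python
# Sketch of capturing tool meta-data per workspace: instead of copying the
# tool installations into each workspace, record a small, versionable
# signature of which tool versions apply. Names/versions are hypothetical.
import hashlib
import json

def tool_signature(tools):
    """Deterministic short signature of a {tool: version} configuration."""
    canonical = json.dumps(sorted(tools.items()))       # order-independent
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Hypothetical tool set for one workspace; the central install area holds
# the actual binaries, while only this small record is revisioned.
workspace_tools = {"synthesis": "10.1", "place_route": "9.2sp1"}
sig = tool_signature(workspace_tools)
```

Because the signature is deterministic, any later revision of the workspace can be mapped back to exactly the tool set that produced it, without ever storing the multi-gigabyte installations per workspace.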

Audience question: Are metal fill libraries stored in IC Manage?

We store it in IC Manage. The system gives you the ability to control the number of versions that you keep on a per-file or per-file-type basis, so as Simon was saying, we just keep it to a minimum. There is really no point in saving more than a single version. We do revision it, but a single version is all we keep.
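In Perforce terms, a per-file-type revision limit can be expressed in the typemap: the `+S1` filetype modifier stores only the most recent revision of matching files. The depot path below is hypothetical:

```
TypeMap:
    binary+S1 //depot/libs/metal_fill/...
```

With this entry, metal fill data under that path is still fully revisioned in the history, but only one stored copy is retained, keeping the repository size under control.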

Audience question:  How do you deal with tool or methodology changes for the IP and the impact of the changes on the IP?

“Very carefully” is the short answer. One of the advantages of the system that we have, where all the tool versions are fully revisioned, is that it gives you a mix-and-match capability that is fully trackable. Once you have that, and are careful about not using the wrong tool at the wrong time, then you can always go back, find out where the error came from and fix it. So my answer is you integrate the tool installation itself, or the meta-data – which tool and flow you are using – into your data management system. Then once an issue is tracked down, you can always go back and find out where in the tool chain the problem was.


Yaron Kretchmer manages the engineering infrastructure team at Altera.