CSR: IP-based design & verification best practices, IC Manage GDP

Nigel Foley, CSR (edited DAC panel transcript)

The first question we were asked was which critical dependency management technologies we deploy for design.

  • The first thing that comes to mind is that we no longer design IP by chips, but rather by high-level block function. So we design Bluetooth, Wi-Fi, other radios, and various analog components. They are self-contained, granular, and perfectly set up for reuse.
  • How do we use our IP blocks? Well, we minimize modification. We want to keep our tree as small as possible; in IC Manage terms, think “reference”. We want to point to existing IP: it doesn’t make a copy, it doesn’t make a branch. If referencing won’t do, then let’s integrate it. Remember, you lose nothing by starting with a reference or a pointer; you can then, at any point, change and branch it. You can’t do the same with a local copy, as local is always new IP. So keeping our IP data small, while avoiding starting with a blank sheet, is critical.
  • The last point on design is to stabilize and provide initial IP for downstream teams, such as chip integration digital teams. Use early releases to combat shifting sands; there is nothing more frustrating than “it worked yesterday, and it doesn’t work today”. So release early, release often. Get your chip integration and digital teams working alongside each other. Incremental data deliveries will help avoid surprises and pipe-clean the flow.
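The reference-first policy above — start with a pointer, promote to a branch only when you must — can be sketched as follows. This is a hypothetical illustration of the workflow; the class and method names are made up and are not IC Manage's API.

```python
from dataclasses import dataclass, field

# Illustrative model of the reference-first policy: names are invented,
# not IC Manage GDP commands.

@dataclass
class IPBlock:
    name: str
    release: str

@dataclass
class ChipProject:
    name: str
    references: list = field(default_factory=list)  # pointers to existing IP (no copy)
    branches: list = field(default_factory=list)    # IP we have modified and now own

    def reference(self, ip: IPBlock):
        # Start with a pointer: no copy, no branch, the tree stays small.
        self.references.append(ip)

    def branch(self, ip: IPBlock):
        # A reference can be promoted to a branch at any point later.
        if ip in self.references:
            self.references.remove(ip)
        self.branches.append(ip)

bt = IPBlock("bluetooth_radio", "2.1")
chip = ChipProject("soc_a")
chip.reference(bt)   # free: just a pointer to the existing block
chip.branch(bt)      # later decision: now a divergent copy we maintain
print(len(chip.references), len(chip.branches))  # -> 0 1
```

The asymmetry is the point: a reference costs nothing and can always become a branch, but starting "local" commits you to new IP from day one.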

What about verification and bug tracking?

  • It’s critical to link your design management and your bug tracking systems together. One of the biggest advantages of doing this is that you can record planned work in the bug tracking system — we need to provide a test for that, we need to fix this. And it’s a much bigger relational database than just what you can attach to the actual IP itself. You can capture why the IP has changed, and you can capture relationships between the data.
  • Make verification and specification part of the IP as well. Try to have everything self-contained, encapsulated. Have self-checking testbenches, models – Verilog-A, Verilog-AMS – and all your assertions wrapped up in one piece of IP, so it’s there when it needs to be reused. Verification and any waivers on the IP need to travel with it. You won’t always be able to get back to the designer and ask “why did you do this?”
  • Run regression tests as well if possible. When you are using that IP, integrate it into your design early on and regression-test it – on release branches and on active development branches – early, and every night if you can.
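One way to enforce the design-management-to-bug-tracker link described above is a pre-check-in hook that requires every change description to name a ticket. The sketch below is an assumption about how such a hook might look; the "BUG-1234" ticket pattern is illustrative, not a real IC Manage or tracker convention.

```python
import re

# Hypothetical pre-check-in hook: every check-in description must reference
# a bug-tracker ticket (pattern like "BUG-1042" is assumed for illustration).
TICKET_RE = re.compile(r"\b([A-Z]+-\d+)\b")

def validate_checkin(description: str) -> str:
    """Return the ticket ID a check-in references, or raise if none."""
    match = TICKET_RE.search(description)
    if not match:
        raise ValueError("check-in must reference a bug-tracker ticket")
    return match.group(1)

# The extracted ID is what lets the tracker capture *why* the IP changed,
# and relate the change to tests, fixes, and planned future work.
print(validate_checkin("BUG-1042: add self-checking testbench for PLL"))
# -> BUG-1042
```

Rejecting ticket-less check-ins at the door is what makes the "why did it change?" question answerable years later, without finding the original designer.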

What methods or incentives do we find most effective in improving efficiencies and gaining conformance to these dependency management processes?

  • Have a group of people that designers can go to, to ask about the IP. Have these experts available to work when projects are starting, to advise people on the best IP to use. Can we get away with a reference, or does it need to be integrated? Is this brand new IP? Set it up properly. Start design from the best possible place.
  • At the same time, drive home that IP reuse is part of the design. It’s not just putting down transistors anymore. You’ve got to think about these things. And then enforce it – make IP reuse a part of the design process and the documentation.
  • Have compulsory alpha and beta deliveries to your chip integration team – all your downstream teams. Have a standardized data handoff. Going from an analog world, which is a little ad hoc, into a digital world where everything is heavily scripted, it’s really important that you have standardized IP in the same format, suitable for use in a scriptable environment – automate that.
  • Checklists and documentation for design review. Capture all your IP, what you’re using, when you’re using it, what release, and what branch. Everything should be captured.
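The design-review capture described above — every IP, what release, what branch — could be implemented as a machine-checked manifest. The sketch below is illustrative; the field names and the JSON format are assumptions, not a CSR or IC Manage format.

```python
import json

# Hypothetical design-review manifest: for every IP in the chip, record
# what you're using, which release, and which branch. Field names invented.

def build_manifest(chip: str, ip_usage: list) -> str:
    # Refuse to produce a manifest with gaps: everything should be captured.
    missing = [ip["name"] for ip in ip_usage
               if not ip.get("release") or not ip.get("branch")]
    if missing:
        raise ValueError(f"manifest incomplete for: {missing}")
    return json.dumps({"chip": chip, "ip": ip_usage}, indent=2)

manifest = build_manifest("soc_a", [
    {"name": "bluetooth_radio", "release": "2.1", "branch": "rel_2.1"},
    {"name": "adc_12b",         "release": "1.0", "branch": "main"},
])
```

Failing the build when a release or branch is missing turns the checklist from documentation into enforcement, which is the conformance point made above.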

What gains have we seen from implementing these kinds of approaches?

  • Let’s start at the end: Chip integration. Work can start sooner, because you are now giving the chip integration teams – the guys who put the ICs together – skeletal alpha/beta deliveries very early on in the process.
  • Because it’s standardized IP, you’ve got an automated process for encapsulating all the files and putting them in all the right places. It’s more uniform and consistent.
  • And because the chip integration teams are brought into the design as early as possible, you end up with better design choices. Our top level guys can talk with our design teams to make sure that we have optimum pin placements and that we get good designs out.
  • The design teams reduce both design time and verification time. Because our IPs are bigger building blocks, we can divide and conquer: different teams work on different IP functions, and we put them all together. One very important item is the reduced-risk aspect.
  • Silicon-proven IP is the best type of IP. If you use silicon-proven IP as a building block, you can drastically reduce the problems on your chip. And everybody knows that mask prices are escalating, so this is a really important one. It allows you to reuse all the work you’ve done before and take shortcuts on the verification side, because it’s already validated.

Design teams transition from “I’m working on this cool chip”, to “I’m working on this cool IP that’s going to go into seven different chips.” This is a big transition that design teams will go through.

These are some of the benefits that we’ve had from reusing the IP and keeping it consistent in our organization. Thanks.


Audience Question: What happens if you are using multiple different DM systems with IC Manage, and you want to work with all of them simultaneously?

We’re lucky in that we just have Perforce and IC Manage. But we have dabbled with the foreign depot and successfully brought in IP from the Perforce side. I don’t see any major problem with that methodology, as long as IC Manage can support it.

Audience Question: How do you manage the CAD environment – scripting, tool versions, contours – around the versions of the design such that you don’t have to have a copy in each workspace, i.e. have a data explosion?

We develop our entire CAD environment inside IC Manage. We treat it like a project. And we have a system in place that allows us to populate it just once, in one area per project, and then independently control which version of the tools and which CAD flow we use on a per-project basis. So we definitely don’t put a copy in every workspace.

And when I say per project, it is one CAD flow populated in one workspace, shared by as many project users as needed, so that they all point to the same thing. It’s not one per workspace.
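The "populate once, point everyone at it" scheme above could be sketched as a per-project lookup: one shared CAD area pins tool versions per project, and every workspace resolves to the same answer. Project names, tool names, and version numbers below are made up for illustration.

```python
# Hypothetical shared CAD area: tool versions pinned per project, not per
# workspace. All values here are invented for illustration.
CAD_AREA = {
    "soc_a": {"simulator": "19.1", "layout": "6.1.7"},
    "soc_b": {"simulator": "21.1", "layout": "6.1.8"},
}

def tool_version(project: str, tool: str) -> str:
    # Every workspace on a project resolves against the same shared entry,
    # so there is no per-workspace copy and no data explosion, and
    # upgrading tools stays a per-project decision (one project may be
    # 3 days from tape-out, another 5 months).
    return CAD_AREA[project][tool]

print(tool_version("soc_a", "simulator"))  # -> 19.1
```

The design choice is that the workspace holds no tool state of its own: it only carries the project name, and everything else is an indirection into the single populated area.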

We’ve got some automation checks in place. We do regressions, we do checks, to see if anything is broken by the latest check-ins for example. We can do it on release branches, we can do it on development branches.

There is always room for improvement, I guess – you can always do more regressions. But it makes a huge difference being able to find things very early, very quickly: something is checked in that night, something falls over, and you can immediately trace it back to what you’ve just done. So it’s very powerful. We don’t do it across all our CAD flows at once, because the projects fix their own CAD flows and decide when to upgrade – one project could be 3 days from tape-out and another 5 months from tape-out. So we do it on a per-project basis.


Nigel Foley is the Analog CAD Director at CSR