In the prior DAC panel, we had a short discussion about how one particular company used big data analytics to actually locate yield defects and do something about it.
Verification suite coverage, all kinds of applications… I’m not going to go through every one of these, but you get an idea of the number of places where we could apply big data analytics to get better designs, more quickly.
How do you prepare for Big Data?
We are just figuring out how to use these new analytical tools and these new big data methods to analyze EDA data. What do you do now?
Big data analytics don’t work if you don’t have any data. So you want to start saving some of that data off now, at a reasonable cost and with a reasonable methodology.
We’re working on big data analytics technology at IC Manage. It’s early, and we don’t know exactly how it’s going to play out, but we think there is going to be tremendous value in it.
There are two things that I can share with you today:
First, you want to save your design and verification results data. Okay, your simulation data, your synthesis run data, whatever you’ve run or created, you really want to save all of those result files – the files that have traditionally gone into the slash temp directory and that you get rid of pretty quickly because they’re big and kind of painful to keep around.
But you don’t want to just keep them and throw them into a big data dumpster – as someone coined the term earlier today. You actually want to save them with context.
Because when you start to do these big data analytics, you need to know which change number the design was at, which portion of the design the simulation was run on, and which part of the hierarchy the simulation was attempting to attack when you saved that data.
So that six months, a year, two years from now, when you actually go and run the big data analytics and see what your trends are, you can optimize your engineering effort. You can say, “This happened then, so we need to change it in this new design and do it this way.” You can actually understand and draw those correlations.
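As a concrete illustration, saving a result file together with its context might look something like the minimal sketch below. The function name, the manifest fields, and the directory layout are all hypothetical choices for illustration, not any particular tool’s API; the idea is just that the change number, design block, and hierarchy path travel with the file.

```python
import json
import shutil
import time
from pathlib import Path

def archive_result(result_file, archive_root, context):
    """Copy a result file into an archive alongside a JSON
    'context' manifest, so analytics run months later can
    correlate it with the design state that produced it."""
    src = Path(result_file)
    # Lay files out by change number and design block (one possible scheme).
    dest_dir = Path(archive_root) / str(context["change_number"]) / context["block"]
    dest_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest_dir / src.name)
    # Write the context sidecar next to the archived file.
    manifest = dict(context, file=src.name, saved_at=time.time())
    with open(dest_dir / (src.name + ".context.json"), "w") as f:
        json.dump(manifest, f, indent=2)
```

A call might look like `archive_result("run.log", "/archive", {"change_number": 1042, "block": "alu", "hierarchy": "top.cpu.alu", "tool": "simulation"})` — again, hypothetical field names; the point is to record context at save time, because it cannot be reconstructed later.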
There are a number of companies, two actually, who are starting to save this data and already running analytics on what they’re doing, so that they can better optimize yield, better optimize how they’re allocating engineers, and figure out how to get their designs to tapeout more quickly.
And if you look at it, the big players, the winners, are going to be doing this type of stuff in the long run.
The data comes before the analytics.
If you don’t have data, then you aren’t going to be able to run any analytics. So the point is: start capturing the data in context today, and get it into a storage system that’s accessible and extractable.
And then, relatively soon down the road, we’re going to have a way to run analytics on it and extract really useful information, so that you can be better, faster, and stronger in getting chips out the door.