Dean Drako on Big Data at DAC 2014

Multi-Site Design Panel Opening Remarks (DAC 2014 edited transcript)

One of the things we are working on at IC Manage is the exploitation of the data in the EDA and SoC/IC design realm. I’m going to talk a little bit about that and then we’ll jump into the panel discussions.

You’ve all probably heard a bit about big data in the news and the newspapers and how it’s affecting different areas.


Some interesting big data statistics.

90% of all the data that exists in the world today was created in the past two years. In those two years, there has been a tremendous shift: we are starting to keep, record, catalog, and organize a lot more of the data flying across the internet.

That data explosion has driven the development of what is called big data. Gartner predicts that by 2015, over 4.4 million jobs will be created just to handle and manage big data.

Companies doing big data analytics are two times more likely to have top-quartile performance and five times more likely to make decisions faster than their competitors. These are two interesting facts that I think are going to come to bear in the chip design and semiconductor/SoC design world.

What is big data analytics?

If you take a look at the academic or familiar definitions, it really boils down to four things:

  1. Capture the massive amounts of data being generated. In the EDA world, that means capturing every simulation run, every tool run, everything that you do.
  2. Put it into a very accessible storage system. That’s not tape. That’s basically a disk drive system that allows you to get access into the data, index it, and do things with it.
  3. Extract things from the data, organize it, and index it.
  4. Run analytics and visualization on it, so that you can learn lessons from it.
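As a rough illustration, those four steps can be sketched as a tiny pipeline. Everything here is hypothetical for illustration – the record fields and file names are not an IC Manage API:

```python
import json
from collections import defaultdict

# Step 1: capture -- one record per tool run (simulation, synthesis, etc.).
# These example records are invented for illustration.
runs = [
    {"tool": "sim",   "block": "alu", "passed": True,  "runtime_s": 420},
    {"tool": "sim",   "block": "alu", "passed": False, "runtime_s": 455},
    {"tool": "synth", "block": "fpu", "passed": True,  "runtime_s": 1800},
]

# Step 2: accessible storage -- here just JSON lines on disk, not tape.
with open("runs.jsonl", "w") as f:
    for r in runs:
        f.write(json.dumps(r) + "\n")

# Step 3: extract, organize, index -- group the records by design block.
index = defaultdict(list)
with open("runs.jsonl") as f:
    for line in f:
        r = json.loads(line)
        index[r["block"]].append(r)

# Step 4: analytics -- for example, pass rate per block.
pass_rate = {
    block: sum(r["passed"] for r in rs) / len(rs)
    for block, rs in index.items()
}
print(pass_rate)  # prints {'alu': 0.5, 'fpu': 1.0}
```

Real systems would of course use a scalable store and richer schemas, but the capture–store–index–analyze shape stays the same.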

As you all know, these applications are changing a lot of things in the world. In the biomedical and disease world, we’re getting new insights as we can get the data and analyze it.

Every internet site of significant size is doing a tremendous amount of work to understand the traffic patterns of the people visiting those sites. The shopping sites are using it to predict, operate better, and sell more.

Vinod Khosla, one of the most famous VCs in Silicon Valley, came out with a position that basically says big data analytics is going to be better than doctors.

There’s truth in that. If you can actually look at the statistics over the six billion people on the planet and start to analyze them, you are probably going to get better insight than an individual doctor who has seen a few hundred patients can give you. One of my favorite big data application examples is a startup that is tackling the weather problem.

Conventional weather modeling usually allows you to predict around ten days out using sophisticated computer-based techniques.

There’s a new startup that took sixty years of weather data, which is a tremendous amount of data, and basically started crunching on it. And now, they’re actually able to predict the weather out forty days in advance with over 70% accuracy.

That’s the kind of thing that you can start to do with big data. I think it’s going to change almost every industry on Earth – it’s already started to. And I don’t think that EDA or semiconductor/SoC design is going to be any different.

Where can we get the most benefit from big data in SoC and semiconductor design?

In today’s chip development, it takes a long time and a lot of money to design a chip, because of the manpower, time, and energy that go into it. There are a lot of things we could optimize across the semiconductor design flow – everything from allocating resources, to assigning and monitoring individual engineering tasks, to identifying yield defects and optimizing yield.


In the prior DAC panel, we had a short discussion about how one particular company used big data analytics to actually locate yield defects and do something about it.

Verification suite coverage, all kinds of applications… I’m not going to go through every one of these, but you get an idea of the number of places where we could apply big data analytics to get better designs, more quickly.

How do you prepare for Big Data?

We are just figuring out how to use these new analytical tools and big data methods to analyze the data in EDA. What do you do now?

Big data analytics don’t work if you don’t have any data. So you want to start saving some of that data now, at a reasonable cost and with a reasonable methodology.

We’re working on big data analytics technology at IC Manage. It’s early, and we don’t know exactly how it’s going to play out, but we think there is going to be tremendous value in it.

There are two things that I can share with you today:

First, you want to save your design and verification results data. Your simulation data, your synthesis run data, whatever you’ve run or created – you really want to save all of those result files, the files that have traditionally gone into the /tmp directory and that you get rid of pretty quickly because they’re big and kind of painful to keep around.

But you don’t want to just keep them and throw them into a big data dumpster – as someone coined the term earlier today.  You actually want to save them with context.

Because when you start to do these big data analytics, you need to know what change number this design was at, what portion of the design this simulation was run on, and what part of the hierarchy the simulation was attempting to attack when you saved that data.

So that six months, a year, two years from now, when you actually go and run the big data analytics and see what your trends are so you can optimize your engineering effort, you can say, “This happened then, so we need to change it in this new design and do it this way.” You can actually understand and draw those correlations.
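One simple way to picture “saving with context” is a small sidecar metadata record written next to each result file. This is only a sketch – the function and field names are hypothetical, not a description of any shipping tool:

```python
import json

def save_result_with_context(result_path, change_number, block, hierarchy_path):
    """Write a sidecar metadata file next to a result file, so that
    analytics run months later can correlate the result back to the
    exact design state it came from."""
    context = {
        "result_file": result_path,
        "change_number": change_number,   # which change/revision of the design
        "block": block,                   # which portion of the design was run
        "hierarchy": hierarchy_path,      # which part of the hierarchy was targeted
    }
    meta_path = result_path + ".context.json"
    with open(meta_path, "w") as f:
        json.dump(context, f)
    return meta_path

# Example: tag a simulation log with the design state it was produced from.
meta = save_result_with_context("alu_sim.log", 1042, "alu", "top/core0/alu")
```

The specific storage format matters far less than the principle: every saved result carries enough context to be joined back to the design revision that produced it.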

There are a number of companies – two, actually – who are starting to save this data and already running analytics on it, so that they can better optimize yield, better allocate engineers, and figure out how to get designs to tapeout more quickly.

And if you look at it, the big players, the winners, are going to be doing this type of stuff in the long run. 

The data comes before the analytics.

If you don’t have data, then you aren’t going to be able to run any analytics. So the point is: start capturing the data in context today, and get it into a storage system that’s accessible and extractable.

And then, relatively soon down the road, we’re going to have a way to run analytics on it and actually extract really useful information, so that you can be better, faster, and stronger at getting chips out the door.


Dean Drako is President and CEO of IC Manage.