2. Challenge: Existing verification progress metrics analytics do not scale with the data explosion
Existing methods for tracking, summarizing, and analyzing verification simulations have several limitations.
2.1 Current Analytics do not scale with increasing simulations
Verification teams can run hundreds of thousands of simulations per week. Recognizing sub-events within those simulations can mean managing as many as 10 billion events in a year, a number that can be expected to double over the next couple of years for some companies.
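A back-of-envelope calculation shows how quickly these figures compound. The simulation rate comes from the text; the average number of sub-events per simulation is an illustrative assumption chosen to match the stated yearly total.

```python
# Back-of-envelope estimate of yearly event volume.
sims_per_week = 200_000    # "hundreds of thousands of simulations per week"
weeks_per_year = 52
events_per_sim = 1_000     # assumed average sub-events recognized per simulation

events_per_year = sims_per_week * weeks_per_year * events_per_sim
print(f"{events_per_year:,} events/year")  # 10,400,000,000 -> on the order of 10 billion
```

At this scale, even a modest change in simulation rate or sub-event granularity moves the total by billions of records per year.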
Some companies have developed in-house systems with the goal of deriving verification progress metrics from tools supplied by multiple vendors. However, their traditional relational database structures are not scaling with the increasing simulation volume.
2.2 Slow analytics query speed, Limited analysis
Poor overall system performance results in low query speed. When companies experience substantial system degradation due to continuous queries issued for each user request and for each data-record insert, the system may not even be able to render data to the user beyond a few thousand simulations.
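The degradation pattern described above can be sketched in miniature: re-running an aggregate query on every record insert makes total work grow quadratically with the number of records, while batching the inserts and aggregating once stays cheap. The schema and row counts below are hypothetical, purely for illustration.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sim_runs (id INTEGER PRIMARY KEY, status TEXT)")

n = 2_000

# Anti-pattern: one aggregate query per insert -> O(n^2) total row scans.
t0 = time.perf_counter()
for _ in range(n):
    conn.execute("INSERT INTO sim_runs (status) VALUES ('pass')")
    conn.execute("SELECT status, COUNT(*) FROM sim_runs GROUP BY status").fetchall()
per_insert_time = time.perf_counter() - t0

# Batch alternative: insert everything, then aggregate once.
conn.execute("DELETE FROM sim_runs")
t0 = time.perf_counter()
conn.executemany("INSERT INTO sim_runs (status) VALUES (?)", [("pass",)] * n)
summary = conn.execute("SELECT status, COUNT(*) FROM sim_runs GROUP BY status").fetchall()
batch_time = time.perf_counter() - t0

print(summary)  # [('pass', 2000)]
# per_insert_time is markedly larger than batch_time even at this toy scale
```

With hundreds of thousands of simulations per week, the per-insert pattern is what pushes rendering past the few-thousand-simulation ceiling described above.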
Companies can often execute only basic analysis of regressions, debug, and coverage. Key additional analytics insights can be missing, for example recognizing sub-events within a single simulation, or identifying simulation runs grouped into a variety of user- and organization-defined containers.
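The container-based grouping mentioned above can be sketched as an inverted index from container names to run names. The run records and container labels here are made up for illustration; a real system would draw them from the simulation database.

```python
from collections import defaultdict

# Hypothetical simulation runs, each tagged with one or more
# user- or organization-defined containers.
runs = [
    {"name": "smoke_01", "containers": ["nightly", "block_a"]},
    {"name": "smoke_02", "containers": ["nightly"]},
    {"name": "stress_07", "containers": ["weekly", "block_a"]},
]

# Invert the tagging: container -> list of run names.
by_container = defaultdict(list)
for run in runs:
    for container in run["containers"]:
        by_container[container].append(run["name"])

print(dict(by_container))
# {'nightly': ['smoke_01', 'smoke_02'], 'block_a': ['smoke_01', 'stress_07'],
#  'weekly': ['stress_07']}
```

The analytics gap is not the grouping logic itself, which is simple, but sustaining it across billions of events when the underlying store cannot answer such queries quickly.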
2.3 High overhead, Multi-vendor support
For companies trying to support two or more different vendor simulators, verification environments, and tests, the ongoing scripting, update, and debug effort is high. Traditional architectures can also create duplicate records and prevent record updates, so reducing the overall data footprint and improving performance can require scheduled data deletions.