After years of innovation in the verification of increasingly complex designs, should we now turn our attention to the design process itself? Since starting in verification in the early 1990s I have witnessed the introduction of code and functional coverage, constrained random, assertions, numerous metrics, formal verification, various languages and methodologies and, more recently, Requirements Driven Verification. However, we have never considered in detail how to reduce the number of bugs that get introduced into designs in the first place, or how to make those bugs easier to find. In this blog we consider a variety of “Design for Verification” (DfV) techniques aimed at producing designs that contain fewer bugs and are easier to verify.
Assertion-based verification is well established, and assertions are potentially our highest return on investment (ROI) in verification: they add extra checks that help detect failures and then reduce the debug time needed to root-cause those failures. However, we can increase that ROI even further through assertion-driven design. As the industry has moved to specialised verification techniques owned by specialist verification engineers, a “design bring-up gap” has opened in which designers write RTL but have no simulation environment. Many designers close the gap with their own test bench that eventually gets thrown away. If, instead, designers wrote assertions during RTL development, formal tools could generate the simulations they need. Also, during RTL development designers make many assumptions that are at best captured in comments and at worst forgotten. By writing properties they can capture those assumptions as executable specifications that eventually become assumptions or assertions. This creates a training requirement, but too many training courses focus on the assertion language rather than on the assertions that add most value.
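In RTL the assertions would be written in a language such as SVA, but the underlying idea of capturing a designer's assumptions as executable checks rather than comments can be sketched in a few lines of Python. Everything below is hypothetical, a toy FIFO model invented for illustration:

```python
class Fifo:
    """Toy FIFO model; the class and its interface are invented for
    illustration and do not come from any real design."""

    def __init__(self, depth):
        self.depth = depth
        self.data = []

    def push(self, item):
        # Designer assumption, captured as an executable check rather
        # than a comment: the environment never pushes when full.
        assert len(self.data) < self.depth, "push on full FIFO"
        self.data.append(item)

    def pop(self):
        # Assumption: the environment never pops when empty.
        assert self.data, "pop on empty FIFO"
        return self.data.pop(0)

fifo = Fifo(depth=2)
fifo.push(1)
fifo.push(2)
assert fifo.pop() == 1  # FIFO ordering holds
```

In a formal flow the same properties could be flipped between assumption (constrain the environment) and assertion (check the design), which is exactly the dual role described above.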
Figure 1: assertion-driven design
The problem with corner cases is that they are, by definition, hard to hit. Queues and FIFOs have to be full, counters have to roll over, buses have to be busy, external events have to occur simultaneously, and so on. So can we make those conditions easier to hit? The external conditions (e.g. busy buses, simultaneous events) are controlled by the test bench, but internal conditions can be made easier to hit by reducing the size of a queue, FIFO or counter. This needs to be considered up front, for example through parameters, as changing design sizes later might break other aspects of the design and cause false bugs. Some argue that such “design mutations” are not valid for coverage closure but would be happy to use them for soak testing.
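A rough Python sketch can show why shrinking a parameter helps. Here random push/pop traffic drives the occupancy of a FIFO, and we count how many cycles it takes to first hit the “full” corner case; in RTL the depth would be a module parameter, and the numbers below are purely illustrative:

```python
import random

def cycles_to_full(depth, seed=0):
    """Simulate unbiased random push/pop traffic on a FIFO of the given
    depth and return the first cycle at which it becomes full, or None
    if the budget runs out. Invented model, for illustration only."""
    rng = random.Random(seed)
    occupancy = 0
    for cycle in range(1, 1_000_000):
        if rng.random() < 0.5 and occupancy < depth:
            occupancy += 1          # push
        elif occupancy > 0:
            occupancy -= 1          # pop
        if occupancy == depth:
            return cycle            # corner case hit
    return None
```

With unbiased traffic the occupancy performs a random walk, so the expected time to first fill grows roughly with the square of the depth; a depth of 2 fills almost immediately while a deep FIFO may never fill within a simulation budget. That is the motivation for parameterising sizes up front.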
Some other DfV techniques are more obvious but still not widely followed. Research consistently shows that the cheapest way to find bugs is code review, yet take-up remains stubbornly low. This may be due to the image of engineers sitting in a room poring over code printouts. However, automatic lint checkers can easily check compliance with coding rules and also generate code quality measures such as complexity metrics. This leaves reviewers free to focus on higher-level considerations such as algorithmic or architectural implementation decisions.
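To make the division of labour concrete, here is a minimal sketch of the kind of mechanical check a lint tool automates, freeing reviewers for the architectural questions. The naming rule, the complexity proxy and the RTL snippet are all invented for illustration:

```python
import re

# Hypothetical rule: clock signals in sensitivity lists must be named clk*.
RULE_CLK_NAME = re.compile(r"\bposedge\s+(\w+)")

def check_clock_naming(rtl_text):
    """Return (line number, signal) pairs that violate the naming rule."""
    violations = []
    for lineno, line in enumerate(rtl_text.splitlines(), 1):
        for match in RULE_CLK_NAME.finditer(line):
            if not match.group(1).startswith("clk"):
                violations.append((lineno, match.group(1)))
    return violations

def branch_count(rtl_text):
    """Very rough complexity proxy: count branching keywords."""
    return len(re.findall(r"\b(if|case|casez|casex)\b", rtl_text))

rtl = """always @(posedge clock_a or posedge clk_b)
  if (reset) q <= 0;
  else case (sel) default: q <= d; endcase
"""
```

Running `check_clock_naming(rtl)` flags `clock_a` on line 1, and `branch_count(rtl)` counts two branches; a real lint tool applies hundreds of such rules, but the principle is the same.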
As our SoCs have moved to multiple cores with complex shared memory systems, the scope for architectural design errors has increased. However, there is a lack of languages for capturing architectures in a way that can be analysed for issues such as quality of service, starvation, livelock and deadlock. Trying to find such issues through RTL simulation can be virtually impossible. For example, a livelock condition may occur only when processes follow a particular repeated pattern of resource requests, which might be hard to hit in a constrained random scenario. Formal verification might be able to detect it, provided the design is not too large and a full set of constraints can be written.
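The analysis such an architectural language would enable can be hinted at with a tiny abstract model. Below, two processes each grab a resource, see contention, and politely back off, forever: the classic livelock. Because the abstract state space is tiny, simply detecting a revisited global state exposes the repeating cycle that RTL simulation would struggle to hit. The model and all names are invented for illustration:

```python
def next_state(state):
    """Lock-step round for two symmetric processes (abstract model)."""
    a, b = state
    if a == "idle" and b == "idle":
        return ("hold1", "hold1")   # both grab their first resource
    if a == "hold1" and b == "hold1":
        return ("idle", "idle")     # both see contention and back off
    return ("done", "done")         # any asymmetry lets one finish

def find_livelock(start, max_steps=100):
    """Return the repeating cycle of states if the model never reaches
    'done' within the budget, else None."""
    seen, trace, state = {}, [], start
    for i in range(max_steps):
        if state == ("done", "done"):
            return None
        if state in seen:
            return trace[seen[state]:]   # the livelock cycle
        seen[state] = i
        trace.append(state)
        state = next_state(state)
    return None
```

Here `find_livelock(("idle", "idle"))` returns the two-state cycle immediately, whereas a constrained random RTL test would have to keep both processes in perfect lock-step for many cycles to see the same behaviour.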
So whilst the separation of design and verification teams has improved design quality, has it also introduced some inefficiency? And as design complexity increases, will verification teams be able to cope without better design collaboration? Techniques such as assertion-driven design and design mutation described above are good examples of such collaboration, and improved code review processes will catch bugs earlier, where they are cheaper to fix. There is also a need for ways to capture architectural designs for improved analysis. Bryan Dickman, Director Design Assurance at ARM, discussed Design for Verification in his talk “Can we train our designers to avoid bugs?” at Verification Futures in the UK on February 5th 2015. Slides are available here.