
It is often said that a design is only as good as the quality of its verification plan. After a few years in the industry, I could not agree more: a robust verification infrastructure can identify critical issues and obscure corner cases efficiently, allowing designers to address them as part of the scheduled design cycle. By contrast, a haphazard approach to verification won’t identify outstanding bugs until late in the project, if ever. This can delay the shipping of the product and, in the worst-case scenario, cause millions of dollars in losses.

A couple of days ago, while leafing through the pages of OpenSPARC Internals, I stumbled upon a very interesting flow chart that summarizes how a good verification strategy should be carried out:

Source: OpenSPARC Internals (ISBN 0557019745), by David Weaver

The flow chart offers a simple, and yet meaningful, overview of the verification cycle. There are three points that, for me, stand out:

  1. The verification strategy derives from the design specification, and the specification precedes the implementation. Starting to implement before, or in parallel with, the specification of the system is out of the question. In fact, the chart has loops everywhere, iterating over every step except the specification. Indeed, only with a complete and detailed specification is it possible to develop a robust verification plan.
  2. There are multiple verification methodologies: formal, simulation, emulation… Each has different properties, strengths and limitations, and they all must coexist in the verification flow in order to have full confidence in the implementation. It is up to the verification engineer to understand the pros and cons of each methodology and to know where and how to apply it to best effect. Trying to verify a full multi-threaded processor through formal methods alone is impossible, and setting up a full UVM framework to test a FIFO is total overkill. Now, the other way round…
  3. The quality of the verification plan is measured in terms of coverage. And just like verification methodologies, there are different types of coverage, all of which require our attention: line, condition, toggle and functional coverage are different metrics for determining how thorough and effective our verification effort has been so far. Interestingly, it is sometimes possible to find bugs in our RTL just by looking at the line coverage values from the last nightly regression run and spotting a coverage hole where there should not be one, even when the tests themselves didn’t flag an error.
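To make the third point concrete, here is a minimal sketch in Python of what a functional coverage model does: define bins of interest up front, sample observed values during the tests, and report any bin that was never hit as a coverage hole. This is purely illustrative; real flows use SystemVerilog covergroups and a coverage database, and the FIFO occupancy bins below are an invented example.

```python
from collections import Counter

class CoverageModel:
    """Toy functional coverage model: counts hits per bin.

    A bin with zero hits after a regression is a coverage hole,
    i.e. a scenario the tests never exercised.
    """
    def __init__(self, bins):
        self.hits = Counter({b: 0 for b in bins})

    def sample(self, bin_name):
        # Record one hit of a bin observed during a test.
        if bin_name in self.hits:
            self.hits[bin_name] += 1

    def holes(self):
        # Bins that were declared but never exercised.
        return [b for b, n in self.hits.items() if n == 0]

    def percent(self):
        covered = sum(1 for n in self.hits.values() if n > 0)
        return 100.0 * covered / len(self.hits)

# Hypothetical coverage of FIFO occupancy for a depth-8 FIFO:
# we care about the empty, partially-full and full states.
cov = CoverageModel(["empty", "mid", "full"])
for depth in [0, 3, 5, 2]:  # occupancies observed across the tests
    cov.sample({0: "empty", 8: "full"}.get(depth, "mid"))

print(cov.holes())            # -> ['full']
print(f"{cov.percent():.1f}%")  # -> 66.7%
```

Even though every individual test passed, the `full` hole tells us the FIFO was never filled, so the backpressure logic was never actually stressed. That is exactly the kind of signal a coverage report gives you that a green test suite alone cannot.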

Verification is a complex, exhausting and often frustrating part of the design cycle of any digital system. But it is also of vital importance to guarantee that the system is correct. No matter how many years of experience one has in RTL design, bugs are inevitable. Therefore, a robust verification plan to identify those errors and help the designer fix them is key to a successful project. So never underestimate your verification plan because, as this presentation by Bob Bentley (Intel) puts it, “RTL code complete” simply means:

“All bugs coded for the first time!”

Good luck with your verification.

The Verification Strategy Flow Chart
