Julian Lonsdale
FirstEDA
As with so many EDA products that are now in mainstream use in FPGA design & verification, their early roots lie in the ASIC flow, where there was no sensible alternative (if you valued your job!) to getting it right first time.
Design Rule Checking (DRC) has been a slow burner in this regard, but the momentum is now building as a combination of factors underlines the common sense of applying coding guidelines automatically, as early as possible in the flow.
The continual innovation in FPGA technology, with increasing gate counts, rich IP availability and inherent flexibility, has provided a platform for a major shift in hardware R&D practices. One figure that stuck in my mind, from an Avionics customer, was a rise from 10% to 70%, in just 10 years, in the proportion of programmable logic used to implement the core hardware functions (this was in a cockpit display unit, but I believe it to be typical of this market). Today, FPGAs proliferate in all market segments, ranging from automotive to medical, space to nuclear, in addition to avionics.
A common theme across a lot of embedded applications is safety criticality. This is pretty clear cut in avionics, but increasingly so in automotive (read the news!), medical and power generation too. In these industries, certification is an ever-present pressure and ultimately a barrier to product delivery. Whilst this is an overarching requirement, as more of the functionality is destined for the FPGAs, the focus turns increasingly to this part of the flow. This is compounded by the inherent (re)programmability, which often turns out to be more of a curse than a blessing when it comes to certification. The question of whether the FPGA methodology should be considered as hardware or software (firmware) is a technical paper topic in itself, so we will just make passing mention of it here. What is clear, though, is that if you have adopted a Hardware Description Language (HDL) approach, be it VHDL or Verilog/SystemVerilog, then you need to ensure that your designs are free from ‘known’ issues. This is where DRC comes into its own. Before we look in detail at the solution, it’s worth reminding ourselves of where the rules themselves originate.
Anyone who’s employed an HDL (or RTL) flow to target hardware will be aware of some of the fundamental pitfalls & gotchas that can give rise to a wide range of issues such as device variability, performance problems or functional bugs. In addition, there are systemic problems that can occur due to shortcomings in the tool flow itself. In the first instance, there is no substitute for proper training and an apprenticeship to hone your skills through experience (this is why at FirstEDA we set such store by offering our customers the best in language & methodology workshops). Beyond that, the nature of HDLs is that you still need automation, wherever practical, to scale your workload and deliver increased productivity (and quality – another topic for another day!).
At a basic level, HDLs are structured and developed as per your standard software flow. Where they diverge is in the implementation phase, where HDLs are run through synthesis and place & route (with associated point tools in the back-end suite) to deliver a physical device. Where the parallels exist, it follows that good software practices will benefit both disciplines and provide a welcome correlation in platforms and processes. Again, for safety-critical applications, the software engineering aspects are well managed, with recognised standards offering a comprehensive set of design rules and coding guidelines (e.g. DO-178C, IEC 61508). A well-constructed set of design rules will benefit both the designer and the verification task, pre-empting issues before they cause costly downstream failures. Such rules will typically be assigned a criticality level, ranging from warnings (noted but acceptable) to critical (must be remedied). In the hardware (HDL) world we’re again lagging (more ammunition, I’m afraid, for those competitive software engineers!).
The challenge is that while most companies have in-house HDL design rules for their hardware engineers, often developed organically from experience, there is not the same industry consensus as with software. Unlike with Ada or C, companies are left to their own devices (no pun intended!) to provide a cohesive policy for the hardware team. Don’t get me wrong, there are plenty of valuable resources available to guide this process (e.g. “Common Mistakes in VHDL” by R. Manion; https://class.ece.iastate.edu/cpre583/ref/VHDL/Common_VHDL_mistakes.pdf) but the fact remains, you’re more or less charting your own course.
A qualified fallback is to take a leaf out of the Japanese STARC (Semiconductor Technology Academic Research Center) guide for ASIC and SoC design. The ruleset they created represents the collective knowledge of 11 major Japanese corporations and, while some of these rules don’t apply to FPGAs, the vast majority are equally valid. We can break these relevant rules into 3 main categories:
This class of rules covers seemingly innocuous issues such as naming conventions and capitalisation; such conventions help the readability and maintainability of all software languages. This doesn’t directly affect the dependability or functionality of the design, but it could do so indirectly if the code is difficult to alter. These types of checks can be readily automated using DRC software; however, rules which require signals and variables to have meaningful names are harder to check automatically and will often require the attention of a human reviewer.
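To give a flavour of how readily this class of rule can be automated, here is a minimal Python sketch (purely illustrative, and no relation to any commercial tool; the lower_snake_case rule and the sample lines are invented for the example) that flags VHDL signal declarations breaking a naming convention:

```python
import re

# Hypothetical in-house rule: signal names must be lower_snake_case.
SIGNAL_RE = re.compile(r"^\s*signal\s+(\w+)\s*:", re.IGNORECASE)
NAME_RULE = re.compile(r"^[a-z][a-z0-9_]*$")

def check_signal_names(vhdl_lines):
    """Return (line_number, name) pairs for signals that break the rule."""
    violations = []
    for lineno, line in enumerate(vhdl_lines, start=1):
        m = SIGNAL_RE.match(line)
        if m and not NAME_RULE.match(m.group(1)):
            violations.append((lineno, m.group(1)))
    return violations

sample = [
    "signal clk_i   : std_logic;",
    "signal DataBus : std_logic_vector(7 downto 0);",  # breaks lower_snake_case
]
print(check_signal_names(sample))  # [(2, 'DataBus')]
```

A production DRC tool works on a full parse of the language rather than regular expressions, but the principle (match a construct, test it against the rule, report the location) is the same.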
As with any code, adequate comments should be incorporated. In our HDL files these will typically consist of a standardised file header, giving details of the design, as well as comments for every process/initial block or significant algorithm. While the presence of such comments can be checked, confirming their accuracy and relevance will again require human intervention.
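The presence check itself is trivial to automate. As an illustration (the header fields below are an invented in-house template, not any standard), a few lines of Python suffice to report which mandated fields are missing:

```python
# Hypothetical header fields that an (invented) in-house template mandates
# at the top of every HDL source file.
REQUIRED_FIELDS = ["-- File:", "-- Author:", "-- Description:"]

def check_file_header(vhdl_lines, header_span=10):
    """Report mandated header fields missing from the first few lines."""
    head = "\n".join(vhdl_lines[:header_span])
    return [field for field in REQUIRED_FIELDS if field not in head]

sample = [
    "-- File: uart_tx.vhd",
    "-- Author: J. Bloggs",
    "library ieee;",
]
print(check_file_header(sample))  # ['-- Description:']
```

As noted above, a tool can confirm the header and comments exist; only a human reviewer can confirm they are accurate and relevant.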
These rules relate more to the specific HDL being used: in VHDL, for example, ensuring that deprecated libraries are not used; in Verilog, avoiding non-blocking assignments in functional statements. While your design may be solely in either VHDL or Verilog, it is also good practice not to use the keywords of the other language. This will mean fewer issues if your design needs to incorporate third-party Intellectual Property (IP) described in the other HDL.
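As an illustration of the keyword rule, the sketch below (again Python, with a deliberately abbreviated keyword list; a real rule deck carries the full reserved-word sets of both languages) flags VHDL object names that collide with Verilog keywords:

```python
import re

# Illustrative subset of Verilog/SystemVerilog keywords -- not exhaustive.
VERILOG_KEYWORDS = {"always", "assign", "module", "reg", "wire", "logic", "initial"}

OBJECT_RE = re.compile(r"^\s*(?:signal|variable|constant)\s+(\w+)", re.IGNORECASE)

def check_cross_language_names(vhdl_lines):
    """Flag VHDL object names that collide with Verilog keywords."""
    violations = []
    for lineno, line in enumerate(vhdl_lines, start=1):
        m = OBJECT_RE.match(line)
        if m and m.group(1).lower() in VERILOG_KEYWORDS:
            violations.append((lineno, m.group(1)))
    return violations

sample = [
    "signal reg   : std_logic;",  # 'reg' is a Verilog keyword
    "signal count : unsigned(3 downto 0);",
]
print(check_cross_language_names(sample))  # [(1, 'reg')]
```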
These rules cover architectural design considerations such as Finite State Machine (FSM) encoding techniques, module output registration and constraints on the amount of computation (e.g. in a for loop). For example, FSMs should always have illegal-state traps, with routes from these back to a known good state. Code within a for loop statement is usually replicated for the number of specified iterations and, as a result, the circuit area increases. It is therefore wise to avoid any logical, arithmetic or relational operations within a for loop statement and to place common statements outside the loop. This approach should improve the performance of the synthesised design.
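The illegal-state-trap rule can be approximated textually by insisting that every VHDL case statement carries a ‘when others’ arm. The following Python sketch is a crude, purely illustrative check (a real DRC tool analyses the elaborated FSM rather than the raw text, and would not be confused by nested case statements as this one can be):

```python
import re

def check_case_has_others(vhdl_text):
    """Return the 1-based indices of case statements lacking a 'when others' arm.
    Crude textual approximation: split on 'end case' so each chunk holds
    at most one (non-nested) case statement."""
    findings = []
    chunks = re.split(r"end\s+case", vhdl_text, flags=re.IGNORECASE)
    for i, chunk in enumerate(chunks):
        if re.search(r"\bcase\b.*\bis\b", chunk, flags=re.IGNORECASE | re.DOTALL):
            if not re.search(r"when\s+others", chunk, flags=re.IGNORECASE):
                findings.append(i + 1)
    return findings

bad  = "case state is when S0 => null; when S1 => null; end case;"
good = "case state is when S0 => null; when others => state <= S0; end case;"
print(check_case_has_others(bad))   # [1]
print(check_case_has_others(good))  # []
```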
Hopefully this sets the scene for the wealth of insights that can be applied to the management of the HDL-based hardware development both to ensure design integrity and, more importantly, secure compliance.
So, we have our coding guidelines nicely packaged in a Microsoft Office document (!). How do we now put these to work?
As we have discussed, a typical set of HDL design rules will contain a large number of mundane (but still important) checks e.g. naming conventions, keyword restrictions, module output integrity. This class of rule is an ideal candidate for automation.
ALINT is the DRC solution family from Aldec. It’s architected around rule decks (e.g. STARC, RMM) and provides the flexibility to group and parametrise the required rules into a ‘policy’. In conjunction with detailed descriptions on the scope of each rule, integrated tightly into the tool, it provides a high integrity way to analyse a design and generate fully documented results (i.e. everything is available for independent or offline review). The software can be run as an automated batch job or interactively, where the GUI provides the engineer with additional features for digging into the design.
Sophisticated DRC tools, like ALINT, efficiently identify coding style, functional, and structural problems in Verilog, VHDL, and mixed-language designs. Because they can be deployed early in the design process, they prevent issues from spreading into the downstream stages of your flow. Using sophisticated static analysis techniques, DRC tools uncover a variety of hidden problems at the most efficient time to be remedied, thus reducing the risk of redundant design iterations and costly re-spins.
Another major advantage of this class of DRC solution is that it not only analyses your files in isolation (so-called compile-time checks) but can also analyse your design as a whole, termed elaboration-time checking. A typical elaboration-time check would be confirming that the top level of the design hierarchy contains only the following types of blocks: clock generation module, reset generation module, RAM, ROM, I/O cells and an RTL description of the top hierarchy. These types of checks can only be performed by a tool that understands the complete design hierarchy.
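To make the idea concrete, here is a toy Python sketch of such a check. The elaborated design is represented as a simple mapping from top-level instance names to block classes; both the allowed-class set (taken from the rule above) and the sample design are invented for illustration:

```python
# Block classes permitted at the top of the hierarchy, per the rule above.
ALLOWED_TOP_LEVEL = {"clock_gen", "reset_gen", "ram", "rom", "io_cell", "rtl_top"}

def check_top_level(instances):
    """Return names of top-level instances whose block class is not permitted."""
    return sorted(name for name, kind in instances.items()
                  if kind not in ALLOWED_TOP_LEVEL)

# Hypothetical elaborated view of a design, as a DRC tool might derive it
# after building the complete hierarchy.
design_top = {
    "u_clks": "clock_gen",
    "u_rst":  "reset_gen",
    "u_core": "rtl_top",
    "u_fifo": "fifo",  # raw sub-block at the top level -- rule violation
}
print(check_top_level(design_top))  # ['u_fifo']
```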
The value here to the individual engineer should be clear: using an efficient DRC tool as an integral part of their daily workflow will improve productivity and result in higher quality, portable HDL. This in itself should warrant further investigation but, if we come back full circle to the market-specific compliance aspect, there’s currently a much bigger driver for adoption.
When should design rules be applied? The correct answer is ‘as the code is written’. So perhaps the question should be: when should actual compliance with the mandated design rules be checked? The accepted engineering practice is to verify conformance at the design review stage. The following is an extract from DO-254 (Design Assurance Guidance for Airborne Electronic Hardware):
Section 6.3.3.2, Design Review
A design review is a method to determine that the design data and implementation satisfy the requirements. Design reviews should be performed as defined in the plan at multiple times during the hardware design life cycle. Examples are conceptual design, detail design and implementation reviews.
Design review is usually a major project milestone but leaving the checking of code against design rules to this late stage is undesirable, and can adversely impact project timescales.
Here’s another nugget I picked up, again from an Avionics customer: “on average, our design engineers will spend 3 months actually designing and then 6 months reviewing their design”. Just think about that; aside from the mind-numbing reality, how do you keep everyone focussed on the job at hand? (As well as ensuring you retain their services for a second design!)
Deploying an automated DRC solution for both design and verification engineers, one which can be accessed early in the design flow and then leveraged during reviews (peer and formal), has become a necessity. This has been borne out by a number of our customers who now enjoy the benefits, and for whom some measure of sanity has returned.
We’ve discussed the primary drivers for the adoption of an automated DRC solution. With a similar ASIC pedigree to so much EDA technology, it is now coming into its own in the FPGA/Embedded space due to increasing design complexity, the need to scale development efficiently and the importance of identifying issues early. In doing so, the downstream process improves, the quality of the code increases and engineers get to focus on the core code functionality, especially at review time. In addition, don’t overlook the benefit of automated documentation: comprehensive reports of the rules run, what they check and the actual results.
This isn’t just a boon for safety-critical applications, all R&D activity can benefit from applying a common sense approach to managing your design assets and resources.
In closing, I have a third titbit to share: one particular customer (yes, Avionics again!) has the challenge of ramping a firmware (read ‘FPGA’) team up from 1 to double figures as soon as their (existing) client commits to a delivery date for the next flight trial. In reality this means a lot of ‘green’ contractors descending on the project in a very short space of time; the challenge is how to get them productive as quickly as possible. DRC, in concert with good design practices, provides an automated framework to quickly guide coding development and at least give you a fighting chance of hitting the (always) unrealistic deadline.