Raik Brinkmann
ONESPIN President and CEO
I’m fascinated with cars, and I recently visited a very famous car maker’s museum, where they gave a thorough rundown of the history of each of their cars: how they were designed, the materials they used, and the evolution of their energy efficiency. It wasn’t until the last exhibit that they talked about the future of automobiles and how AI, including machine learning, is the driving force behind the next several generations of cars and transportation in general. The exhibit went on to explain how each phase of autonomy will be exponentially more difficult than the one before. Each phase requires significantly more “horsepower” in terms of data processing, how that data gets connected to everything else, and the customisations “under the hood” to the processors and SoCs.
Looking around, I noticed that most people didn’t seem to fully grasp this significance. And I understand why. We live in a world where we often take for granted that our electronic devices work as we expect. Moreover, we often don’t think about whether those devices are safe or secure. We don’t care about the complexities involved in making the chips that power them. We simply want our devices to run, and we trust that they won’t have issues. For better or worse, we as consumers are justified in that attitude.
Companies designing our electronics have an obligation to meet functionality, safety, and security requirements, or they face swift ramifications. We’ve all seen the headlines when something catastrophic happens, whether it be a plane crash, a car crash, or a malicious hacker attack. Once that occurs, consumer trust is eroded and difficult to win back. Issues don’t have to be headline-making to be devastating to a company’s bottom line. Turning out a product with unknown performance or power issues, for example, can spell doom when those issues are detected in the field.
Meeting these requirements becomes tougher and tougher as innovation progresses, however. The SoCs that power 5G, IoT, and AI, which are at the crux of today’s innovation, have to be increasingly complex, often with dramatically greater capacity and power, and the ability to be more flexible and customised. This step up means that companies have to be hyper-diligent not only about the functional correctness of their designs but also about their safety, trust, and security. The design must operate as intended in even the most adverse environmental conditions and be immune to any unwarranted interference. Put concisely, companies must be fully invested in the entire integrity of their IC: functional correctness, safety, trust, and security.
Take, for example, XL chips (multi-billion-gate chips) and heterogeneous computing platforms, whose adoption has become a significant trend for implementing today’s applications. XL chips contain millions of connections, over 60 million module instances, and over 30 thousand modules. Further, they contain multiple CPUs, programmable logic accelerators, and third-party IP, and are software programmable. Add to all this the customisation needed for these environments to compete, and you can see that the complexity is astounding and verification of these systems can be daunting.
Heterogeneous environments are used more and more in industries such as automotive, aerospace, and industrial, where functional safety and security are critical requirements on top of making sure the design functions correctly. Understanding the different types of faults and how they impact the design is paramount. Many of these designs must also comply with stringent safety standards such as ISO 26262 or DO-254, and designers must contend with those requirements as well.
Complete verification that takes IC integrity into account starts with thorough verification planning: documenting the design and verification requirements and writing specific assertions against them. Getting to complete coverage is a challenge in any environment. You typically use multiple technologies to perform verification (e.g. simulation, emulation, and formal), but without the convergence of these technologies into a single view, understanding where the holes are in your coverage is extremely difficult.
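As a rough illustration of that single view, here is a minimal sketch in Python with hypothetical requirement names and a deliberately simplified data model: each plan item records which engine has closed it, so the remaining holes are explicit.

```python
# A minimal sketch (hypothetical requirements, not a specific tool) of a converged
# coverage view: every plan item records which engine has closed it.
plan = {
    "REQ-001: reset clears all status registers": {"simulation": True,  "formal": True},
    "REQ-002: bus response within 16 cycles":     {"simulation": True,  "formal": False},
    "REQ-003: no write to a locked region":       {"simulation": False, "formal": True},
    "REQ-004: ECC corrects single-bit errors":    {"simulation": False, "formal": False},
}

for req, engines in plan.items():
    closed_by = [name for name, done in engines.items() if done]
    status = "covered" if closed_by else "HOLE"
    print(f"{status:8} {req}  (closed by: {', '.join(closed_by) or 'none'})")

holes = [req for req, engines in plan.items() if not any(engines.values())]
print(f"\n{len(holes)} requirement(s) not yet covered by any engine")
```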
The ability to verify that the millions of connections are working properly is, of course, critical. Simulation can only get you so far and can be inefficient even in less complex scenarios. What’s needed is an exhaustive verification technology that can handle the huge number of connections. At OneSpin, we worked closely with our customer Xilinx to address this challenge when others couldn’t. You can read more about it in a joint conference paper presented at DVCon.
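To show the basic idea, and only the basic idea, here is a minimal sketch with hypothetical signal names and a simplified spec format that diffs an extracted connectivity map against a connectivity specification. An exhaustive connectivity check also has to reason through enables, multiplexing, and registers, which this toy comparison does not.

```python
# A minimal sketch (hypothetical signal names and spec format): check that every
# connection required by the specification is present in the extracted design
# connectivity, and flag anything extra.
spec = {
    ("soc.cpu0.irq_out", "soc.gic.irq_in[0]"),
    ("soc.cpu1.irq_out", "soc.gic.irq_in[1]"),
    ("soc.dma.done",     "soc.gic.irq_in[2]"),
}

extracted = {
    ("soc.cpu0.irq_out", "soc.gic.irq_in[0]"),
    ("soc.cpu1.irq_out", "soc.gic.irq_in[2]"),   # wired to the wrong interrupt line
    ("soc.dma.done",     "soc.gic.irq_in[2]"),
}

for src, dst in sorted(spec - extracted):
    print(f"MISSING:    {src} -> {dst}")
for src, dst in sorted(extracted - spec):
    print(f"UNEXPECTED: {src} -> {dst}")
```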
Equivalence checking is another issue. In a heterogeneous environment, you must ensure that the RTL remains equivalent through ASIC/FPGA synthesis all the way to the final netlist. Simulation and emulation can’t perform this function. This is a very specific task that can only be achieved with dedicated formal solutions.
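As a toy illustration of the principle, not of any particular tool, the sketch below compares a hypothetical “RTL” model of a 4-bit saturating adder against a restructured “post-synthesis” model over every input combination; at real design sizes, only formal equivalence checking can deliver that exhaustiveness.

```python
# A toy equivalence check: both models are hypothetical stand-ins for the RTL and the
# synthesised netlist, and the comparison runs over *every* input combination.
from itertools import product

def rtl_model(a: int, b: int) -> int:
    s = a + b
    return 15 if s > 15 else s           # saturate at the 4-bit maximum

def netlist_model(a: int, b: int) -> int:
    s = (a + b) & 0x1F                   # 5-bit sum with carry
    return 15 if s & 0x10 else s & 0xF   # saturation re-implemented after synthesis

mismatches = [(a, b) for a, b in product(range(16), repeat=2)
              if rtl_model(a, b) != netlist_model(a, b)]
print("equivalent" if not mismatches else f"{len(mismatches)} mismatching input pairs")
```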
If we take a closer look at the artificial intelligence (AI) chips that power machine learning (ML) and deep learning (DL) applications, most include floating-point units (FPUs). Algorithms used in neural networks often use multiplication and addition of floating-point values. In particular, convolutional neural networks (CNNs), popular for computer vision applications, may involve deeply nested loops of floating-point operations, which subsequently need to be scaled to different sizes in order to meet precision and area requirements. There is significant value in the use of FPUs, but it means increased importance must be placed on thoroughly verifying floating-point hardware, including ensuring that the complex IEEE 754 floating-point standard is met. Once again, simulation falls short here. At OneSpin, we’ve developed a formal technology that can achieve 100% coverage and compliance with the standard. You can download our case study on this topic.
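Two quick back-of-the-envelope sketches, with an assumed simulation throughput and invented data, illustrate both points: the operand space of a single binary32 operation is far too large to enumerate in simulation, and reduced-precision multiply-accumulate loops of the kind found in CNNs drift measurably from a high-precision reference.

```python
import numpy as np

# Why simulation cannot exhaustively cover even one IEEE 754 binary32 operation.
# The throughput figure is an assumption for illustration.
pairs = 2 ** 64                      # all operand pairs of a binary32 multiplier
ops_per_second = 1e9                 # optimistic simulation throughput (assumed)
years = pairs / ops_per_second / (3600 * 24 * 365)
print(f"{pairs:.2e} operand pairs -> roughly {years:,.0f} years at 1e9 ops/s")

# Why precision choices matter in CNN-style multiply-accumulate loops: the same
# dot product accumulated in float16 vs. a float64 reference (synthetic data).
rng = np.random.default_rng(0)
a = rng.standard_normal(4096)
w = rng.standard_normal(4096)
acc16 = np.float16(0.0)
for x, y in zip(a.astype(np.float16), w.astype(np.float16)):
    acc16 = np.float16(acc16 + x * y)        # every multiply and add rounds to float16
ref = float(np.dot(a, w))                    # float64 reference
print(f"float16 accumulation: {float(acc16):+.3f}   float64 reference: {ref:+.3f}")
```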
When it comes to safety, you need to start the verification analysis with an FMEDA (failure modes, effects and diagnostic analysis) to map out the different types of faults that would have a negative impact on the function of the design. You need a plan for how to cover these faults, and you need to accurately measure how well you’ve done. With regard to complying with stringent safety standards, using verification solutions that are already safety certified, and working with a verification partner who can guide you through the certification process, can shave off a significant amount of time and effort.
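To make the bookkeeping concrete, here is a minimal sketch with entirely invented fault counts: classify the outcome of a fault campaign and derive diagnostic coverage and a simplified single-point fault metric (SPFM). A real FMEDA works with failure rates per fault class and a much finer classification than this.

```python
from dataclasses import dataclass

# A minimal sketch with invented numbers. A real FMEDA uses failure rates (lambda)
# per fault class rather than raw fault counts.
@dataclass
class FaultClass:
    name: str
    count: int      # injected faults falling into this class
    detected: int   # of those, how many a safety mechanism detects

campaign = [
    FaultClass("safe (no effect on the function)",        count=4200, detected=0),
    FaultClass("dangerous, detected by safety mechanism", count=1500, detected=1500),
    FaultClass("dangerous, undetected (residual)",        count=300,  detected=0),
]

dangerous = [c for c in campaign if c.name.startswith("dangerous")]
total_dangerous = sum(c.count for c in dangerous)
detected = sum(c.detected for c in dangerous)
residual = total_dangerous - detected
total_faults = sum(c.count for c in campaign)

print(f"Diagnostic coverage: {detected / total_dangerous:.1%}")   # 83.3%
print(f"SPFM (simplified):   {1 - residual / total_faults:.1%}")  # 95.0%
```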
Security and trust can be approached with advanced formal techniques in much the same way as functional correctness. But to verify that a design is secure and trusted, you must prove the absence of additional design logic and the absence of modifications to the design flow. You must determine that the design is fully compliant with its outlined specification and certification.
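As a toy model of one such proof obligation, the sketch below checks that a block’s output is independent of anything outside its specified interface; a hypothetical “extra” input stands in for undocumented logic, and exhaustively varying it while the specified inputs are held fixed must never change the result.

```python
# A toy model of one trust check: the output must be a function of the specified
# inputs (a, b) only. The "extra" input is hypothetical and models undocumented logic.
from itertools import product

def block(a: int, b: int, extra: int) -> int:
    return (a ^ b) & 0xF   # by specification, depends on a and b only

independent = all(
    block(a, b, 0) == block(a, b, extra)
    for a, b, extra in product(range(16), range(16), range(4))
)
print("no dependence on unspecified signals" if independent else "possible hidden logic")
```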
At OneSpin, we’re committed to helping our customers deliver worry-free electronics: devices that work as intended and are free of safety and security concerns. To do that, we are committed to assuring that IC integrity standards are met when developing functionally correct, safe, trusted and secure designs. We work closely with our customers to make sure that we’re providing solutions that fully verify the complex dimensions of IC integrity. Being able to meet the worry-free expectation that the world demands is crucial for continued innovation.