Monday, January 7, 2013

Focus on good power-aware verification strategy for SoCs: Dr. Wally Rhines

It is always a pleasure to chat with Dr. Wally (Walden C.) Rhines, chairman and CEO of Mentor Graphics. I spoke with him to understand gigascale design, verification trends, strategy for power-aware verification, SERDES design challenges, the migration to 3D FinFET transistors, and whether Moore's Law is getting to be "Moore Stress"!

Chip design: gigascale, gigahertz, gigacomplex
First, I asked him to elaborate on how the implementation of chip design will evolve with respect to gigascale, gigahertz and gigacomplex geometries.

He said: "Thanks to close cooperation among members of the foundry ecosystem, as well as cooperation between IDMs and their suppliers, serious development of design methods and software tools is running two to three generations ahead of volume manufacturing capability. For most applications, “gigascale” power dissipation is a bigger challenge than managing the complexity, but “system-level” power optimization tools will continue to allow rapid progress. Thermal analysis is becoming part of the designer’s toolkit."

Functional verification is continually challenged by complexity, but there have been, and continue to be, improvements of many orders of magnitude in performance just from the adoption of emulation, intelligent test benches and formal methods, so this will not be a major limitation.

The complexity of new physical design problems will, however, be very challenging. Design problems ranging from basic ESD analysis, made more complex due to multiple power domains, to EMI, electromigration and intra-die variability are now being addressed with new design approaches. Fortunately, programmable electrical rule checking is being widely adopted and will help to minimize the impact of these physical effects.

Is verification keeping up?
How is innovation in verification keeping up with these trends?

Dr. Rhines added that over the past decade, microprocessor clock speeds have leveled out at 3 to 4 GHz and server performance improvement has come mostly from multi-core architectures. Although some innovative approaches have allowed simulators to gain some advantage from multi-core architectures, the speed of simulators hasn’t kept up with the growing complexity of leading edge chips.

Emulators have more than made up the difference: they offer more than four orders of magnitude faster performance than simulators, at about 0.005X the cost per cycle of simulation. In a large simulation farm today, the cost of power per year is more than one third the cost of the hardware, while emulation offers a 12X savings in power per verification clock cycle. For those who design really complex chips, a combination of emulation and simulation, along with formal methods and intelligent test benches, has become standard.
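As a rough back-of-the-envelope check on those figures (normalizing simulation's per-cycle cost, time and energy to 1, and treating the quoted ratios as assumptions):

```latex
c_{\text{emu}} \approx 0.005\, c_{\text{sim}}, \qquad
t_{\text{emu}} \approx 10^{-4}\, t_{\text{sim}}, \qquad
e_{\text{emu}} \approx \tfrac{1}{12}\, e_{\text{sim}}
```

So a regression consuming a given number of verification cycles completes in roughly one ten-thousandth of the wall-clock time, at one two-hundredth of the cost per cycle and one twelfth of the energy per cycle.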

At the block and subsystem level, high level synthesis is enabling the next move up in design and verification abstraction. Since verification complexity grows at about the square of component count, we have plenty of room to handle larger chips by taking advantage of the four orders of magnitude improvement through emulation plus another three or four orders of magnitude through formal verification techniques, two to three orders of magnitude from intelligent test benches and three orders of magnitude from higher levels of abstraction.
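To see why that headroom matters, take the square-law claim at face value. If verification effort V grows as the square of component count N, then a gain G in effective verification cycles supports roughly a square-root-of-G growth in component count:

```latex
V \propto N^{2} \;\Rightarrow\; V(kN) = k^{2}\,V(N) \;\Rightarrow\; k \le \sqrt{G}
```

Multiplying the gains listed above gives G of roughly $10^{12}$ to $10^{14}$, which by this argument accommodates designs with about $10^{6}$ to $10^{7}$ times more components.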

With multiple engines and multiple abstraction levels applied to the challenge of verifying chips, the pressure is on to integrate the flow. Easily transitioning and reusing verification effort at every level—including tests and coverage models, from high level models to RTL and from simulation to emulation—is being enabled through more powerful and adaptable verification IP and high level, graph-based test specification capabilities. These are the keys to driving verification reuse to match the level of design reuse.
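To make the graph-based idea concrete, here is a minimal, hypothetical sketch in C++ (not any vendor's actual specification language): test intent is captured as a graph of actions, and legal stimulus sequences fall out as walks through it. In principle, a single intent graph like this can drive stimulus at multiple abstraction levels, which is the reuse argument above.

```cpp
// Hypothetical sketch of graph-based stimulus generation (illustration only).
#include <cstdio>
#include <map>
#include <random>
#include <string>
#include <vector>

int main() {
    // Test intent as a graph: each node is an action, edges are legal follow-ups.
    std::map<std::string, std::vector<std::string>> graph = {
        {"reset",      {"config"}},
        {"config",     {"write", "read"}},
        {"write",      {"write", "read", "power_down"}},
        {"read",       {"write", "read", "power_down"}},
        {"power_down", {"reset"}},
    };

    std::mt19937 rng(42);  // fixed seed for a reproducible walk
    std::string node = "reset";
    for (int step = 0; step < 10; ++step) {
        std::printf("%s -> ", node.c_str());
        const auto& next = graph.at(node);
        node = next[rng() % next.size()];  // pick a random legal successor
    }
    std::printf("%s\n", node.c_str());
}
```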

Powerful verification management solutions enable the collection of coverage information from all engines and abstraction levels, tracking progress against functional specifications and verification plans. Combining verification cycle productivity growth from emulation, formal, simulation and intelligent testing with higher verification abstraction, re-use and process management provides a path forward to economically verifying even the largest, most complex chips on time and within budget.

Good power-aware verification strategy for SoCs
What should be a good power-aware verification strategy for SoCs?

According to him, the most important guideline is to start power-aware design at the highest possible level of system description. The opportunity to reduce system power is typically an order of magnitude greater at the system level than at the RTL level. For most chips today, that means at least the transaction level when the design is still described in C++ or SystemC.
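As an illustration of what system-level power exploration can look like while the design is still in C++ or SystemC, here is a minimal, hypothetical sketch that annotates transaction types with assumed energy costs and tallies them over a trace (all names and numbers are made up):

```cpp
// Hypothetical transaction-level energy bookkeeping (illustration only).
#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main() {
    // Assumed per-transaction energy costs in picojoules (invented figures).
    std::map<std::string, double> energy_pj = {
        {"dram_read", 150.0}, {"dram_write", 170.0}, {"noc_hop", 5.0}};

    // A trace of transactions as it might come from a high-level model.
    std::vector<std::string> trace = {"dram_read", "noc_hop", "noc_hop",
                                      "dram_write", "noc_hop"};

    double total = 0.0;
    for (const auto& t : trace) total += energy_pj.at(t);
    std::printf("estimated energy: %.1f pJ over %zu transactions\n",
                total, trace.size());
}
```

At this level, swapping an architecture (say, adding a cache to cut DRAM traffic) changes the trace, and the energy consequences show up immediately, which is where the order-of-magnitude savings come from.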

Significant experience and effort should then be invested at the RTL level using synthesis and UPF-enabled simulation. Verification solutions typically automate the generation of correctness checks for power-control sequences and power-state coverage metrics. As SoC power is typically managed by software, the value of a hardware/software co-verification and co-debug solution in simulation and emulation becomes apparent in power-management verification at this level.
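The kind of check such solutions automate can be sketched in a few lines. Below is a hypothetical, hand-written example of a power-control sequence check with simple power-state coverage; in a real flow the legal transitions would be derived from the UPF description rather than hard-coded:

```cpp
// Hypothetical power-state sequence checker with coverage tally (illustration only).
#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Legal power-state transitions for one power domain (assumed policy).
    std::set<std::pair<std::string, std::string>> legal = {
        {"ON", "RETENTION"}, {"RETENTION", "ON"},
        {"ON", "OFF"},       {"OFF", "ON"}};

    // A sequence of power states observed during simulation.
    std::vector<std::string> observed = {"ON", "RETENTION", "ON", "OFF",
                                         "RETENTION"};  // last hop is illegal

    std::map<std::string, int> coverage;  // how often each state was visited
    for (size_t i = 0; i + 1 < observed.size(); ++i) {
        ++coverage[observed[i]];
        if (!legal.count({observed[i], observed[i + 1]}))
            std::printf("ERROR: illegal transition %s -> %s\n",
                        observed[i].c_str(), observed[i + 1].c_str());
    }
    ++coverage[observed.back()];
    for (const auto& kv : coverage)
        std::printf("state %s visited %d time(s)\n",
                    kv.first.c_str(), kv.second);
}
```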

As designers proceed to the gate and transistor levels, the accuracy of power estimation improves. That is why gate level analysis and verification of the fully implemented power management architecture is important. Finally, at the physical layout level, designers traditionally were stuck with whatever power budget was passed down to them. Now, they increasingly have power goals that can be achieved using dozens of physical design techniques that are built into the place and route tools.

Key SERDES design challenges
What would be the key SERDES design challenges and what are their implications on PCBs?

Dr. Rhines noted: "Accurate modeling and analysis of transceivers and the channel between them is critical to SERDES design. The days of conservative rule-of-thumb estimations are behind us. Simulation speed has also become critical to effectively model interconnect in 3D, tune complex transceivers, and quickly converge on bit error rate validations that could otherwise consume a lifetime."

Accurate modeling for SERDES requires a complete model that includes the IC package, PCB and connectors, and that comprehends second-order effects such as variations in the interconnect. Achieving bit error rates of one error for every trillion bits also requires well-characterized design of transceivers. SERDES design is a challenge that brings together the best in modeling, system design and designer expertise.
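Some rough arithmetic shows why brute-force BER validation "could consume a lifetime." To observe errors at a BER of $10^{-12}$ with any statistical confidence, on the order of $10^{13}$ bits must be pushed through the link. At an assumed bit-level circuit simulation throughput of $10^{4}$ bits per second (an illustrative figure), that is:

```latex
\frac{10^{13}\ \text{bits}}{10^{4}\ \text{bits/s}} = 10^{9}\ \text{s} \approx 30\ \text{years}
```

On silicon at 10 Gb/s the same traffic takes well under an hour, which is why fast channel simulation and statistical methods matter so much here.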

In that case, what's more important: innovation in designing with IP vs. innovation in prototyping, and why?

He added that designers historically achieved value by selling their designs as components, either as fabless or IDM semiconductor companies. Increasingly today, the market for designers is through the sale of IP blocks, with the leader being ARM, but thousands of independent design innovators participate as well.

"With time, I expect this to evolve in much the same way as the automotive industry evolved, i.e. independent automotive component manufacturers (of spark plugs, car seats, etc.) were acquired by the big automotive system integrators (referred to as OEMs) to achieve vertical integration and economies of scale," Dr. Rhines said.

Eventually, the vertical integration model became less efficient since tier 1 suppliers could serve a broad market of many automotive OEMs and achieve their own economies of scale. Similarly, we can expect to see more and more 'ARM-like' IP creators successfully innovating new designs and marketing their IP products to “system integrators” who sell chips.

Migration to 3D FinFET transistors
How is the semiconductor industry preparing to migrate to 3D FinFET transistors while continuing to push the envelope with scaling to smaller process geometries?

According to Dr. Rhines, a very reasonable approach has emerged. Each of the foundries is using elements of its 20nm design rules and simply adding a FinFET layer without any shrink of the basic design rules. For most, the FinFET version follows the initial 20nm process by about a year. Interestingly, the FinFET version of the 20nm design rules is being referred to as “16nm” or “14nm” depending upon the foundry. In general, the IDM approach to FinFET differs from that of the foundries.

And how will FinFET capability help EDA in the near future?

Every new technology offers new opportunities for EDA competitors to provide more value. FinFET is no exception. Taking full advantage of the FinFET layer requires modeling, verification and design aids that are different from traditional design.

Moore's Law to Moore Stress!
Finally, is Moore's Law getting to be "Moore Stress"?

Dr. Rhines concluded: "As predicted for decades, Moore’s Law is becoming increasingly less relevant to achieving the ongoing reduction in the cost per transistor. Moore’s Law is a special case of the learning curve where most of the cost reduction comes from shrinking feature sizes and growing wafer diameters; it has dominated the learning curve for forty years.

"As we go forward, the burden of reduction in cost per transistor will be shared by other innovations like 3D stacking, package cost reduction and system design innovations."
