IEEE/DATC Electronic Design Processes 2000 Workshop - Summary


April 26-28, 2000
http://www.eda.org/edps/edp00.html
Monterey Beach Hotel, Monterey, CA
Program and presentations

Overall impressions of the workshop

Pluses
  • Good participation/level of interaction from attendees
  • Duration of talks (45 minutes, extended from 30 minutes last year)
  • Quality/content of presentations
  • Location: nice setting, close enough to Silicon Valley to get many attendees

Minuses
  • Small number of attendees (26 registrants, about 12-18 present at any one time)
  • Speakers staying for only a single morning or afternoon
  • No participation from academia
  • Focus was a little narrow (emphasis on chip implementation)
  • Location: perhaps a little too close to Silicon Valley; several attendees did not stay for the entire workshop

Suggestions for next year:


Interoperability Report from EDP 2000

To be presented by Bob Stear at the DAC interoperability workshop, June 4.

Position statement

Interacting design constraints and new technology and design features require integrated incremental design tools. Leading-edge design shops need the flexibility to build solutions from multiple vendors and internal tools. But providing interoperability of tightly integrated tools is much harder than providing file-based integration of point tools, and requires architectures for tool interaction and a focus on the types of methodology supported. CAD vendors are developing and enhancing closed suites of incremental design tools, and have little incentive to provide interoperability. There is a limited window of opportunity to establish an open environment for incremental, integrated DA tools before the costs of catching up to and migrating from closed vendor environments become prohibitive.


Workshop notes

The rest of these notes are mostly from Naresh Sehgal - thanks Naresh.

Keynote - Ward Vercruysse, Sun Microsystems

Ward talked about managing complexity. UltraSparc-III characteristics include a 600 MHz clock (750 MHz now), 70 W of power, and 6 layers of metal. The chip has 23 M transistors (4 M in logic), with 800 library cells, 150 megacells, and 1.8 M lines of Verilog code; diagnostics account for another 106 M lines of code. The number of specialized tools has grown to 20 in the front end and 45 in the back end, with 200 in-house tools (40 flows), 2 different parasitic extractors, and 2 different timing models (static and dynamic). Tool run time has increased by an order of magnitude on average, and there is a need for higher tool reliability (data is correct and the flow completes), since both the number of tools and the number of iterations are increasing.

The database has grown to 300 GB in 130 K files, with total disk storage of 12 TB. The layout contains 0.5 B polygons and 17,000 unique cells from 124 different libraries. Library size is decreasing, having peaked in 1998. The compute infrastructure has grown from 1400 CPUs (in 1995) to 1500 CPUs (in 1998); logic simulation cycles in the same period grew from 2 B total to 2 B/week. The skew budget target is 100 ps, using a clock grid and shielded wires everywhere; the clock alone takes 25 W of the total 70 W. TI fabricated the chip. The same path could fail maxtime in the slow process corner and fail mintime in the fast process corner.

Sun is trying to shorten the design cycle and make it more predictable: success and failure are judged by the design cycle. They use an in-house datapath placement tool, CCT for routing, and Simplex for extraction. They use late binding, with a search path for design data and tools, and with version control kept separate from data and tools. All views of the design must look the same to all tools, e.g., by using the same search path. Solaris/NFS does the job; the flow is batch oriented with a script (a poor man's repeatability and incrementality).

Looking to the future, the needs are new tools that are multi-process/multi-threaded and 64-bit, layout-aware circuit analysis tools, and productivity enhancements for circuit and mask designers. Hierarchy is used, but with context, so data volumes remain large. Tools are needed that can deal with imperfect data (e.g., extraction before layout is all done). Too many things are treated as special purpose (e.g., datapath vs. random logic synthesis) instead of being merged into a generic case.
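
The late-binding scheme described here, in which tools resolve design views and tool versions through a search path rather than binding them up front, might look something like the minimal sketch below. The directory, cell, and view names are invented for illustration; this is not Sun's actual flow.

    import os

    # Hypothetical search path: private workspace first, then shared release areas.
    SEARCH_PATH = [
        "/proj/ultra/work/designer",    # designer's private edits
        "/proj/ultra/release/latest",   # frozen team release
        "/proj/ultra/release/golden",   # last fully verified drop
    ]

    def resolve_view(cell, view):
        """Return the first matching view file along the search path (late binding)."""
        for root in SEARCH_PATH:
            candidate = os.path.join(root, cell, cell + "." + view)
            if os.path.exists(candidate):
                return candidate
        raise FileNotFoundError("no %s view found for cell %s" % (view, cell))

    # Every tool in the batch flow resolves views through the same search path,
    # so all tools see a consistent picture of the design without copying data.
    if __name__ == "__main__":
        try:
            print(resolve_view("alu64", "v"))   # e.g. the Verilog view of a cell
        except FileNotFoundError as err:
            print(err)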

Impact of the Internet on e-Cad: A Field Survey - Naresh Sehgal, Intel

Naresh started by describing the various internet business models, including C2C, B2C, and B2B. Trends in internet business include more pay-per-use for applications and the advent of ASPs (application service providers) to provide applications, storage, and processing. He feels that CAD will inevitably follow this trend, with tools and designs shared through the internet. Tools in the future will have to be web-enabled and designed for distributed operation (multi-tasking/multi-processing). Naresh described views of T2T (tool to tool), D2T (designer to tool), and D2D (designer to designer) interactions. He cited work on SAGA in the UMLe environment (presented last year) by Jose Lima, which uses genetic algorithms (GA) to optimize the parameters controlling a simulated annealing (SA) optimizer for the two-way partitioning problem.
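
As a rough illustration of the SAGA idea (a GA searching for good SA control parameters for two-way partitioning), here is a toy sketch on a random graph. The cost function, parameter ranges, and GA operators are invented for illustration and are not the UMLe/SAGA implementation.

    import math
    import random

    # Toy two-way partitioning instance: a random graph on N nodes.
    random.seed(0)
    N = 40
    EDGES = [(i, j) for i in range(N) for j in range(i + 1, N) if random.random() < 0.1]

    def cost(part):
        """Cut size plus a penalty for unbalanced partitions."""
        cut = sum(1 for i, j in EDGES if part[i] != part[j])
        return cut + 2 * abs(sum(part) - N // 2)

    def anneal(t0, alpha, moves=2000):
        """Plain simulated annealing; t0 and alpha are the parameters the GA tunes."""
        part = [random.randint(0, 1) for _ in range(N)]
        cur = best = cost(part)
        t = t0
        for _ in range(moves):
            i = random.randrange(N)
            part[i] ^= 1                      # move one node to the other side
            new = cost(part)
            if new <= cur or random.random() < math.exp(-(new - cur) / max(t, 1e-9)):
                cur = new
                best = min(best, cur)
            else:
                part[i] ^= 1                  # reject the move
            t *= alpha                        # geometric cooling
        return best

    def fitness(genome):
        t0, alpha = genome
        return -anneal(t0, alpha)             # higher fitness = smaller cut

    # Minimal GA over (initial temperature, cooling rate).
    population = [(random.uniform(1, 50), random.uniform(0.90, 0.999)) for _ in range(8)]
    for generation in range(5):
        population.sort(key=fitness, reverse=True)
        parents = population[:4]
        children = []
        for _ in range(4):
            a, b = random.sample(parents, 2)
            children.append(((a[0] + b[0]) / 2 * random.uniform(0.9, 1.1),     # crossover + mutation
                             min(0.999, (a[1] + b[1]) / 2 * random.uniform(0.99, 1.01))))
        population = parents + children

    print("best (t0, alpha) found:", max(population, key=fitness))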

A Client-Server based Architecture for Parallel Query - Bruce Winter and David Hathaway, IBM

Environment: placement, clock and timing optimization, wiring, and post-PD checking, with a team of 6 people doing 30 chips/yr. Sharing data is a problem with a million objects, a 500 MB netlist, 700 MB of timing extraction, and 1000 MB of wire files (it takes 30 minutes to load the model, requiring a large 64-bit box with more than 2 GB of memory). Designers need quick solutions to dynamic problems. By loading the model once and allowing interactive queries from multiple clients, the need for large machines is reduced and the load time seen by individual designers is removed.
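
The load-once, query-many architecture might look like this minimal sketch: the server holds the (here trivially faked) model in memory and answers lightweight queries from many clients. The query format, port, and model contents are invented for illustration; this is not the IBM implementation.

    import json
    import socketserver

    # Stand-in for the large timing/wiring model: loaded once when the server starts,
    # instead of every designer loading it on a large private machine.
    MODEL = {"net_%d" % i: {"slack_ps": (i % 97) - 40} for i in range(100000)}

    class QueryHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # One JSON query per line, e.g. {"net": "net_42"}; many clients can
            # connect concurrently and share the single in-memory model.
            for line in self.rfile:
                query = json.loads(line)
                result = MODEL.get(query.get("net"), {})
                self.wfile.write((json.dumps(result) + "\n").encode())

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("localhost", 9999), QueryHandler) as server:
            server.serve_forever()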

HDL Interoperability and IP-based system verification - Dennis Brophy, Mentor Graphics

Dennis, the chair of the open-source VHDL and Verilog efforts, said that mixed HDL is already in use by 16% of design teams, and this is expected to grow by 30% in the next year. ModelSim gives the ability to mix different languages during simulation and verification (with a modular design). Design drives verification, so the latter must embrace all the languages being used by the design teams. Testbench reuse drives multiple-language usage, and there are industry standards groups to address design language interoperability issues. For participation in the integration of OVI (Open Verilog International) and VI (VHDL International), send an email to dennisb@model.com.

A System Simulation Framework - Peter van den Hamer, et al., Philips

Peter described an environment for rapid exploration of complex heterogeneous design spaces. A (relatively) detailed model is used to obtain measurements of interest from a number of points in the design space. Interpolation in a non-linear model is then used to predict behavior of the system at other points. Allows real-time "what-if" exploration. Can also compute and display response surface of system. Models for sub-systems can be built and composed to allow investigation of behavior of entire system.
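
A tiny numeric sketch of the approach: sample a few design points with the detailed model, then interpolate to answer what-if queries elsewhere in the space. The design parameters, measured values, and the inverse-distance interpolation below are invented stand-ins for the non-linear model described in the talk.

    import numpy as np

    # Hypothetical design-space samples: (clock_MHz, buffer_KB) -> measured power (W),
    # standing in for points evaluated with the detailed model.
    samples = np.array([[100, 16], [100, 64], [200, 16], [200, 64], [300, 32]], dtype=float)
    power   = np.array([0.8, 1.1, 1.6, 2.0, 2.4])

    def predict(point, p=2.0):
        """Inverse-distance-weighted interpolation: a crude stand-in for the
        non-linear response-surface model used for real-time what-if queries."""
        d = np.linalg.norm(samples - point, axis=1)
        if np.any(d < 1e-9):                  # exactly at a sampled point
            return float(power[np.argmin(d)])
        w = 1.0 / d ** p
        return float(np.sum(w * power) / np.sum(w))

    # "What-if" query at an unsampled design point.
    print(predict(np.array([250.0, 48.0])))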

ECIX: Electronic Component Information Exchange - Donald Cottrell, Si2 (Silicon Integration Initiative)

ECIX began focused on PCB components (passives) and has evolved to include SoC components. It is an XML/DTD-based information exchange network for quick data interchange between a consumer and a supplier (a.k.a. ECIX QuickData). Don described its dictionary-driven extensibility, with support for queries based on property/value pairs; the value can be an expression or a range. There is a registry to identify participants, identify roles and domains of use, and bind participants to use models, dictionaries, etc. The goal is to get information to the users: what they need, when and where they need it. RosettaNet is a broader effort dealing with overall business processes; it focuses on one-to-one interaction, while ECIX is more oriented toward broadcast queries.
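
To make the property/value query idea concrete, here is a hedged sketch of building such a query as XML; the element and attribute names are invented for illustration and do not follow the actual ECIX QuickData DTD.

    import xml.etree.ElementTree as ET

    # Hypothetical QuickData-style property query: the tags and attributes here
    # are invented, but the shape (dictionary-defined properties queried by
    # value or range) follows the description above.
    query = ET.Element("PropertyQuery", {"dictionary": "resistor", "supplier": "anyco"})
    prop = ET.SubElement(query, "Property", {"name": "resistance"})
    ET.SubElement(prop, "Range", {"min": "9.9e3", "max": "10.1e3", "unit": "ohm"})
    ET.SubElement(query, "Property", {"name": "tolerance"}).text = "<= 1%"

    print(ET.tostring(query, encoding="unicode"))

    # A supplier-side responder would parse the same document and match parts
    # against its catalog using the dictionary-defined property names.
    for p in query.findall("Property"):
        print("queried property:", p.get("name"))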

Introduction to Interweave Technology - Enabling Global Design Flows - Krishna Uppuruli, InterWeave

Krishna talked about current issues with design flows in a distributed world (with computing and engineering resources scattered around). The current solutions of hardwired frameworks or black-box scripts for power users do not work, due to issues of reuse, knowledge management, and distributed design challenges. He outlined a need for process management covering tasks, documents, communication, knowledge, and collaboration, and proposed a tiered solution that is end-user customizable to enable reuse and project management.

Dinner panel: What is the Role of the Tall, Thin Designer in Future Processes? - Gary Smith, Dataquest; Howard Sachs, Fujitsu

Howard said that to a large extent, the tall thin designer was just a myth. It was promulgated by the universities as a way of teaching the whole range of design activities, but real businesses have large infrastructure with many specialized roles. While there are many more domain interactions today, there are also many more problems, making it even harder to really be a tall thin designer.

Gary said that tool interactions are driving tool integration, but that he doesn't expect a single start-to-finish design cockpit. Instead, there will be two design environments: a front-end core/communication-based design environment, and a back-end implementation environment in which synthesis, timing, PD, and verification take place, separated by an RTL design planning environment.

Design Tool Plug and Play: What's it all about? - Don Cottrell, Si2

Don stressed the importance of the ability to move data across projects and flows. The design data may be original, derived, or even metadata (descriptive information about the data). Technology data, including the process and the cell library along with its characteristics, also needs to be exchanged. Considerations for this exchange involve:

He mentioned three data exchange techniques: paper specs, XML/DTDs, and APIs, in order of increasingly rigorous specification, which minimizes misinterpretation.

Discussion: Integration and Interoperability Requirements for IC Design in the Year 2000 - John Darringer, IBM

John discussed the factors driving the need for tightly integrated design tools, and described the Interoperability Workshop he is chairing at DAC this year, and solicited views from the attendees. A statement from the workshop is given above.

Physical Design Methodology for High Performance System-on-a-Chip Solutions for Multimedia Applications - Ram Sunder, National Semiconductor / Datamedia

Ram talked about their 7 million transistor chip in 0.4 micron technology, running at 80+ MHz, built with an SoC methodology. The next technology, at 0.25 micron, will run at 250-300 MHz. The design challenges include:

These challenges were met with an extremely competitive die size (30% reduction each generation) and very fast time to market with an efficient team, and he has established a very good rapport with external CAD vendors. He differentiated between methodology and flows, saying that the former is design-centric and should drive the latter; the flow, on the other hand, focuses on how and who (e.g., which tool, etc.). He gave an example of power grid design in the context of a PQFP package. One of his issues is that most current power-grid and reliability tools are analysis tools, as opposed to helping with correct-by-construction design. Some of the components are pad ring simulation, power grid design and simulation, signal EM and antenna rules, etc. During the preparation stages, another area of focus is library characterization and verification (e.g., measurement of setup and hold values). For design planning, he talked about RTL and simultaneous floorplanning. He worked with vendors to add long ports, hierarchy, "top-down and bottom-up" obstruction model resolution, etc. He emphasized the need for noise- and crosstalk-correct design; other sources of problems are transition time violations and skew mismatches.

Software Architecture of the Nike CAD Design System - Nagbhushan Veerapaneni, Intel

Nag talked about the Nike design system and the design and software reasons driving the need for a new architecture. He outlined the drawbacks of past architectures: non-modularity, inconsistent look and feel, and difficulty of data exchange between domains. He showed the new layered architecture, starting with the data model, with engines (e.g., placers, routers, and compactors) above it and packages/tools on top, and went over the pros and cons of the common data model. The core model (UCM) has connectivity and hierarchy, from which specific domain models for logic, circuit, and layout are derived, followed by any tool-specific information. To allow different hierarchies across domains, there is an incremental mapper, which does graph isomorphism to map objects across hierarchies. Persistence is handled by the CIF (Content Indexable File) format, which divides a file into sections and entities and supports dynamic types to enable communication between an application and storage.
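
A minimal sketch of the layered idea: a core model holding only connectivity and hierarchy, domain models derived from it, and tool-specific data kept on top. The class and attribute names are invented for illustration and are not the actual UCM.

    # Core model: connectivity and hierarchy only.
    class CoreCell:
        def __init__(self, name):
            self.name = name
            self.instances = []      # child instances (hierarchy)
            self.nets = []           # connectivity

    # Domain views derive from the core model and add domain-specific data.
    class LogicCell(CoreCell):
        def __init__(self, name, function=None):
            super().__init__(name)
            self.function = function        # e.g. Boolean function for synthesis

    class LayoutCell(CoreCell):
        def __init__(self, name):
            super().__init__(name)
            self.shapes = []                # polygons, placement, routing

    # Tool-specific information is attached on top of a domain view rather than
    # being baked into the core model.
    class PlacerAnnotations:
        def __init__(self, layout_cell):
            self.cell = layout_cell
            self.locations = {}             # instance -> (x, y)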

Design API Coalition (DAPIC) - Don Cottrell, Si2

Don talked about open-sourcing infrastructure for a CAD data API and storage formats in the areas of cell libraries (LEF), design data (DEF), and process data (SIPPs). The purpose is to enable application development on this common API and data representation, thus facilitating an easy mix of internal and external tools.

IBM's Integrated Data Model (IDM) - David Hathaway, IBM

Dave talked about IDM, which is IBM's internal unified data model. It supports view-specific extensions, callbacks to enable incremental processing in applications (callbacks are used only for invalidation; lazy evaluation is used for recalculation), and legacy applications supported through a thin translation layer to the old PIs. The IDM architecture has applications on top, followed by APIs for logical, physical, and electrical information (folded and occurrence models), going down to the IDM data structures.
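
The invalidation-callback plus lazy-recalculation pattern can be sketched as follows; this is a generic illustration with invented names, not IBM's IDM code.

    class TimingView:
        """Caches a derived result; callbacks only mark it stale (invalidate),
        and the expensive recomputation happens lazily on the next query."""

        def __init__(self, netlist):
            self.netlist = netlist
            self._arrival_times = None          # cached derived data

        def on_netlist_changed(self, net):      # registered as a change callback
            self._arrival_times = None          # invalidate only; do no work here

        def arrival_time(self, pin):
            if self._arrival_times is None:     # lazy recalculation on demand
                self._arrival_times = self._recompute()
            return self._arrival_times.get(pin, 0.0)

        def _recompute(self):
            # Stand-in for incremental timing propagation over the netlist.
            return {pin: 1.0 for pin in self.netlist}

    # Usage: the data model calls on_netlist_changed() whenever a net is edited;
    # the first query after an edit pays the recompute cost once, then hits the cache.
    view = TimingView(netlist=["a", "b", "c"])
    view.on_netlist_changed("a")
    print(view.arrival_time("a"))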

Flow-Based Design Management - Peter van den Hamer, et al., Philips

Peter described design management as dealing with both design data and design methodology. Traditional design management deals with modification relationships (i.e., A1 is a modification of A0). Also need derivation relationships (i.e., x.o is derived from x.c).
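
A minimal sketch of recording both relationship types in a design-management store; the classes and field names are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class DesignObject:
        name: str                                  # e.g. "A1" or "x.o"
        modification_of: "DesignObject" = None     # version history (A1 <- A0)
        derived_from: list = field(default_factory=list)  # flow inputs (x.o <- x.c)
        tool: str = ""                             # tool/step that produced it

    a0 = DesignObject("A0")
    a1 = DesignObject("A1", modification_of=a0)              # a designer edited A0
    x_c = DesignObject("x.c")
    x_o = DesignObject("x.o", derived_from=[x_c], tool="cc") # produced by a flow step

    # With both relations recorded, the manager can answer version questions
    # ("what changed between A0 and A1?") and flow questions ("is x.o stale
    # because x.c was modified after it was derived?").
    print(x_o.derived_from[0].name, a1.modification_of.name)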

Cable Design Reuse Across Hierarchical Boundaries - Rick Cook, Rik Vigeland, Mentor Graphics Corporation

Rik and Rick described issues with cable design. This is basically a documentation (design capture) problem. In the past, much information was entered redundantly to provide different views. The goal is to eliminate redundant entry, provide additional generated views, and provide translation to/from many customer formats.

Design Convergence - Shantanu Ganguly, Intel

Shantanu talked about convergence issues: abstraction is getting difficult (due to deep sub-micron effects), and communication is getting poorer due to large and dispersed teams. Architects need to do performance vs. wirability trade-offs early in the design cycle. Some of the solution ideas include examining metrics and tracking the first-order ones (area, timing, power, etc.). Some second-order metrics are becoming first order (e.g., worst-case power supply drop), and new second-order metrics are appearing (e.g., locality effects due to process variations). Interaction between these metrics (no more decoupling between noise and timing analysis, Boolean equivalence being affected by temporal effects) is making the overall problem more complicated. There is a need for early data models to drive timing and placement, parasitic estimation, the effect of noise on delay, and the integration of logic synthesis, formal verification, and layout design.

Friday Panel on the Effects of Physical Design / Logic Synthesis Integration on Design Methodology

Dave Lackey, IBM ASICs

Dave talked about timing closure in today's methodology flow. He showed the various loops that currently impact turn-around time (TAT). He proposed an improved design closure flow that first does an initial synthesis (without the image-independent cell models) and then does a full placement-based synthesis. In the former part, mapping is done to I/Os, macros, and SCBs, and the physical hierarchy is constructed with a floorplan; in the latter part, the legal-size locations for cell image models are used. This eliminates the need for wire load models (WLMs) during the synthesis-floorplanning front-end inner loop. In the back-end inner loop, it improves correlation and allows for late synthesis-based correction methods for post-placement timing fixes. The outer loop is improved with fewer iterations and fewer changes needed to the floorplan. He showed a multi-vendor early synthesis flow with HDP (floorplanning), DFTS, and CPRO (both internal tools, for scan and clocks) and other unnamed vendor synthesis tools for RTL synthesis, flattening to the PD hierarchy, placement-based synthesis, and creating a placement. The complexity of this flow comes from multiple formats and a lack of interoperability between vendor tools. He asked for operation-level APIs and integrated equivalence checking, and stressed the ability to have tools from multiple vendors in a single flow.
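
The loop structure Dave described (an outer floorplan loop around a placement-based synthesis inner loop, with no wire-load models needed once legal cell locations are known) might be orchestrated roughly as in the sketch below. All step and function names are placeholders, not IBM's tools; each step is a stub standing in for a real tool invocation.

    def initial_synthesis(rtl, floorplan):
        """Map to I/Os, macros and SCBs; build physical hierarchy from the floorplan."""
        return {"rtl": rtl, "floorplan": floorplan, "placed": False}

    def placement_based_synthesis(design):
        """Resynthesize using legal cell locations, so no wire-load models are needed."""
        design["placed"] = True
        return design

    def timing_closed(design):
        return design["placed"]                   # stand-in for a real timing check

    def closure_flow(rtl, floorplan, max_outer=3, max_inner=5):
        for outer in range(max_outer):            # outer loop: floorplan revisions
            design = initial_synthesis(rtl, floorplan)
            for inner in range(max_inner):        # inner loop: post-placement fixes
                design = placement_based_synthesis(design)
                if timing_closed(design):
                    return design
            floorplan = floorplan + "+rev"        # revise the floorplan and retry
        return design

    print(closure_flow("core.v", "fp0"))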

Bob Stear, Intel iA-64 project

Bob outlined the progression of logic synthesis toward physical awareness; in time it will also need to take care of things such as noise, reliability, etc. He clarified that logic synthesis in a high-performance uP is limited to a small percentage of the design, because it is non-optimal for many cases such as domino, datapath, min-delay, and low-power logic. He gave some examples of where the effort is spent, and of how the netlist-level hand-off point to ASIC foundries will need to change in the future. A detailed problem statement covering cross-coupling capacitance, inductive effects, power density, DFT, and DFM will need to be taken into account, integrating not only logic synthesis and physical design but extending all the way from uArch to manufacturing. In conclusion, he talked about a future need for an early planning system, with reduced detail and accuracy but the ability to balance and plan for all the key design constraints. This will help with rapid design convergence, with heavy incremental capabilities in all areas. He asked for a unified data model to allow interoperability.

Patrick Groenveld, Magma design automation

Patrick talked about the effect of timing and parasitics on cell sizes and area. He focused on the need for timing closure by combining the logical and physical worlds, while highlighting the inability of current routers to meet given delay constraints, which makes early design extraction not very useful. Delay depends on gain: if a wire gets twice as long, the gate becomes half as fast; however, if the gate is made bigger for the same speed, its input capacitance increases. The Magma tool tries to keep the Cout/Cin gain constant during placement, which means that the cell size changes during placement while the delay stays (almost) constant. In summary, the old flow had the cell area fixed (with the delay a gamble); in the new scheme, the delay is fixed while the cell area is unknown. Their tool has the capacity to handle 600,000 placeable objects. The tool does not use a common database architecture, which would require each tool (placement, routing, timing, extraction, etc.) to have internal data models and talk to the common DM via translators, and would also prevent things from working concurrently. Instead, Magma has an in-core shared data model between the various tools, and external tools can access the DM through a Tcl interface. Persistence is obtained via "volcano" disk writes. He gave examples of CFI, EDIF, VSI, PDEF, etc. to argue that plug-n-play and mix-n-match of tools are doomed; instead, the primary tool vendor sets the framework and flow.
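
A small numeric sketch of the constant-gain reasoning: hold Cout/Cin fixed, so the gate is upsized as the wire load grows and the stage delay stays roughly constant, at the price of a larger input capacitance seen by the previous stage. The gain target and capacitance numbers are invented, and this is logical-effort-style arithmetic for illustration, not Magma's actual algorithm.

    # Constant-gain sizing sketch: stage delay is roughly proportional to its
    # gain g = C_out / C_in (plus a parasitic term), so to keep delay constant
    # as the wire load grows, the gate is upsized to hold g fixed.

    TARGET_GAIN = 4.0          # assumed gain target per stage (invented)
    CIN_UNIT = 2.0             # input capacitance of a unit-sized gate, fF (invented)

    def size_for_load(c_load_ff):
        """Return the gate size (in unit-gate multiples) that keeps gain constant."""
        c_in_needed = c_load_ff / TARGET_GAIN
        return c_in_needed / CIN_UNIT

    for wire_cap in (20.0, 40.0, 80.0):          # wire doubles, then doubles again
        size = size_for_load(wire_cap)
        print(f"load {wire_cap:5.1f} fF -> gate size {size:4.1f}x, "
              f"input cap {size * CIN_UNIT:5.1f} fF")
    # Delay stays roughly constant, but the larger gate presents more input
    # capacitance to the previous stage, which may in turn need to be upsized.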

Dwight Hill, Synopsys

Dwight talked about the need for timing-constraint jocks, DC shell-script jocks, and PrimeTime jocks on a chip design automation team. In addition, the floorplanning need can be filled either by one of these people learning new skills or by another expert who can help with the tasks of partitioning, hierarchy building, and I/O ring assignment. The lack of physical capability in synthesis has meant that users had to lock into a physical designer/vendor early; integrating physical library and die work into synthesis invites the possibility of trading one silicon vendor against another. Dwight emphasized the need for floorplanning, as sizing alone can't fix the delay problems. He gave several examples and reasons why an open database and open format, with a common DB between tools, may be needed.