Mentor Calibre nmDRC

What we have put in place with the Calibre nanometer platform is analysis that lets you solve those three problems. Contrast this with what the platform looked like in the deep submicron era. Life was fairly simple for us. You took in GDSII and a netlist, you did design rule checking, you did LVS (layout versus schematic) to compare the GDSII against the netlist to see if it actually implemented the netlist you intended, and you extracted some parasitics. You wrote this in our geometry language and ran on maybe up to 8 processors. The output was error markers, some derived layout, and a device and parasitic netlist. With a simple data query you got these results into a data editing environment with the ability to write that data out, and we were done.

The nanometer platform that we have put in place facilitates this entire handoff, analysis, and enhancement for manufacturability.

The key is that you cannot just use GDSII anymore. You need the ability to come out of, and go back into, the native design environment because you are making enhancements to the design. Everyone is going to want to make sure they are still making timing closure within their native design environment. There are additional analysis routines we put in place to look at yield and manufacturability, as well as a new programming front end and hyperscaling, which give us a computational platform that lets us do all this analysis in a reasonable amount of time. To handle the issue of multiple analysis possibilities, we can maintain the original layout and compare it against multiple optimization possibilities. There is also an incremental analysis and enhancement capability that lets us look at where those different options will take us in the overall yield space, as well as a reporting and visualization area where we can take that morass of error markers and turn it into real information about a design that a designer can use to model yield effects. That whole thing put together we are calling the Calibre nm platform.

What types of outputs does it generate?
First you have error markers. As an example, you might have a DFM rule that flags spacing less than 200 nm, even though your DRC minimum is 130 nm. You get lots and lots of error markers, which the designer has no idea what to do with. Then there is a Pareto analysis of all the violations of that particular rule, identifying which are the worst violators and would be the best place to go off and spend your energy. You can also see collected data over the entire chip for these rules, which lets you understand where your biggest yield effects are overall and how they are going to impact your chip yield.
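The kind of Pareto triage described above can be illustrated with a short sketch. This is a hypothetical toy, not Mentor's implementation; the marker format and the rule name are invented.

```python
from collections import defaultdict

def pareto_rank(violations, required_nm):
    """Rank spacing violations by severity, i.e. the shortfall from the
    required spacing. The worst violators come first, so a designer can
    spend effort where it matters most."""
    return sorted(violations,
                  key=lambda v: required_nm - v["spacing_nm"],
                  reverse=True)

def violations_by_rule(violations):
    """Aggregate marker counts per rule to see which DFM rules
    dominate the overall error population."""
    counts = defaultdict(int)
    for v in violations:
        counts[v["rule"]] += 1
    return dict(counts)

# Invented marker data: each dict is one error marker.
markers = [
    {"rule": "DFM.SP.200", "spacing_nm": 150},
    {"rule": "DFM.SP.200", "spacing_nm": 135},
    {"rule": "DFM.SP.200", "spacing_nm": 190},
]
worst = pareto_rank(markers, required_nm=200)
# worst[0] is the 135 nm marker -- the largest shortfall from 200 nm.
```

Ranking by shortfall rather than raw count is what turns thousands of undifferentiated markers into a prioritized work list.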

There are a number of other reports, such as different Pareto views and hot-spot mapping into the design database. There is also a by-cell analysis, which can make for an interesting conversation with your IP provider as you talk about what their deliverables should be, not only in terms of functionality and performance but also yield detractors.

For model-based verification streams we are doing several things, ranging from the critical area analysis tool, which includes a defect density model, to systematic edge effects around the litho space. Probably the most interesting is tying that variability into our silicon modeling, which determines not only the variability of transistors under lithography but also their detailed context within active regions. Essentially this tells us the deterministic variability of the design and enables people to have smaller guard banding around what they are putting in place for parametric performance.
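The defect-density side of critical area analysis is commonly expressed with a Poisson yield model. Here is a minimal sketch of that textbook formula, not the tool's actual model; the areas and densities are invented numbers.

```python
import math

def poisson_yield(critical_area_cm2, defect_density_per_cm2):
    """Poisson defect-limited yield: Y = exp(-A_crit * D0), where
    A_crit is the chip's critical area (cm^2) for a given defect size
    and D0 is the defect density (defects/cm^2) at that size."""
    return math.exp(-critical_area_cm2 * defect_density_per_cm2)

def combined_yield(mechanisms):
    """Multiply independent yield contributions (e.g. shorts on one
    layer, opens on another) for an overall defect-limited estimate."""
    y = 1.0
    for area, d0 in mechanisms:
        y *= poisson_yield(area, d0)
    return y

# Invented example: two failure mechanisms on a hypothetical chip.
y = combined_yield([(0.5, 0.2), (0.3, 0.1)])  # exp(-0.13), roughly 0.878
```

The value of the analysis is that shrinking the critical area for a dominant mechanism shows up directly as a yield gain in this model.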

We have described fairly radical changes in what customers need to ensure manufacturability, especially as we go down towards and look forward to 45 nm. We are not only running more and more DRC checks but also adding all those other manufacturability analyses. In a classical environment that could quite simply run for weeks or months. With the new capabilities built into the nanometer DRC tool, the performance characteristics it brings, and all of the incremental analysis and statistical gathering we are able to build into the tools, we actually enable people to deal with that new environment within their existing design cycle.

When is Calibre nmDRC available?
Customers can gain access to this today. First customer ship is in the middle of Q3, which is August.

What is the pricing of the offering?
It is highly variable depending on the number of CPUs and which analysis packages are included, but it is at the same price as the old Calibre DRC.

Are current users of the older Calibre DRC entitled to a free or reduced price upgrade?
If they are on maintenance, then they get this as an upgrade.

What is the number of beta sites?
Twenty-four as of yesterday.

How would you characterize their feedback?
It has been great. We go in, load the software, and within about an hour they have operating configurations. They run it, and it's done in two or three hours. The reaction in general has been "Wow." It's been really successful.

Whom do you see as competitors to the older and now the new Calibre?
For the existing tool we have market penetration above 60%. There are tools out there from Cadence, Synopsys, and Magma, and I expect all three to continue to compete. From the results we are seeing, we expect to be significantly faster than all of them, and with the nanometer platform to have a far more complete roadmap for manufacturability.

If someone makes a design change, there may be unintended ramifications and therefore a need to re-simulate the entire design. I understood you to say that if you make a change on the manufacturing side (line width or spacing), you can determine the maximum area of influence.
Yes. That's part of the incremental capability: if someone goes in and edits a particular region, our tool will find where the change was and verify only the necessary region around it.
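The region-of-influence idea can be sketched as a bounding-box computation. This is a toy illustration under the assumption that rule interactions are bounded by a fixed halo distance; real incremental DRC is considerably more involved.

```python
def halo_region(edit_bbox, halo):
    """Expand the edited area by a halo equal to the largest rule
    interaction distance; only geometry inside this window needs
    re-verification."""
    x0, y0, x1, y1 = edit_bbox
    return (x0 - halo, y0 - halo, x1 + halo, y1 + halo)

def intersects(a, b):
    """Axis-aligned bounding-box overlap test."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def shapes_to_recheck(shapes, edit_bbox, halo):
    """Keep only the shapes that fall inside the halo region."""
    region = halo_region(edit_bbox, halo)
    return [s for s in shapes if intersects(s, region)]

# Invented shapes (bounding boxes); only the middle one overlaps the
# halo region around an edit at (10,10)-(20,20) with a 5-unit halo.
shapes = [(0, 0, 4, 4), (12, 12, 18, 18), (100, 100, 110, 110)]
hot = shapes_to_recheck(shapes, edit_bbox=(10, 10, 20, 20), halo=5)
```

The win is that verification cost scales with the size of the edit plus its halo, not with the size of the full chip.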

So changing the shape of geometry but not the function?
It depends on what someone is doing. If someone is doing something as extensive as a timing check in the place-and-route tool and having to do a re-place, rip-up, and reroute, they will tend to do a full verification run rather than an incremental one. Incremental tends to apply when what you are doing is essentially custom layout editing of a particular region.

The target for Calibre nm is the more complex designs at smaller process nodes.
That's definitely the target, but we expect that our customers will upgrade as a default. So even at the older 180 nm and 130 nm nodes, they will take advantage of the same technology because we can still run the backwards-compatible rule files. They will essentially just get faster run times and be happy about it.

What are the ramifications of going forward to 45 nm and below? Will litho-friendly changes break down at some point? Do we need new breakthroughs to go to smaller process nodes?
I would not be surprised to see other new analyses that need to be done. I will tell you an interesting story of how this is playing out. You hear all the time about the difficulty of doing timing closure due to timing variability: a cell placed in one part of the design will behave significantly differently than the same cell in another part. We did an experiment with one of our customers, who had a new 65 nm library. We ran it through our litho-friendly design tool to verify timing and discovered they had done a really good job of designing the cells to be invariant to placement with respect to litho effects; the variability was less than 5%. If you grab another 65 nm standard cell library and do the same analysis, because that library was not designed to be invariant, there is a huge amount of variability. With the second cell library you end up with difficulties in doing timing closure; with the first, far fewer problems. These different analyses really can make an impact on the design, where rather than things getting harder, they are getting easier.
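A placement-to-placement variability figure like the "less than 5%" quoted above can be computed as the spread of a cell's delay across placements relative to its mean. This is a hypothetical sketch with invented delay numbers, not the tool's actual metric.

```python
def variability_pct(delays_ps):
    """Variability of one cell's delay across different placements,
    expressed as (max - min) / mean, in percent."""
    mean = sum(delays_ps) / len(delays_ps)
    return 100.0 * (max(delays_ps) - min(delays_ps)) / mean

# Invented delays (ps) for the same cell at several placements.
litho_aware = variability_pct([98.0, 100.0, 102.0])   # 4.0%, under the 5% bar
naive_lib   = variability_pct([85.0, 100.0, 120.0])   # roughly 34%
```

A library under the 5% bar lets timing tools use a much smaller guard band, which is the point of the story.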
