Nvidia Trains LLM on Chip Design
By Sally Ward-Foxton, EETimes (October 30, 2023)
Nvidia has trained its NeMo large language model (LLM) on internal data to help its chip designers with day-to-day tasks, including answering general questions about chip design, summarizing bug documentation, and writing scripts for EDA tools. Nvidia’s chief scientist, Bill Dally, presented the LLM, dubbed ChipNeMo, in his keynote presentation at the International Conference on Computer-Aided Design today.
“The goal here is to make our designers more productive,” Dally told EE Times in an interview prior to the event. “If we even got a couple percent improvement in productivity, this would be worth it. And our goals are actually to do quite a bit better than that.”