Microsoft Outlines Hardware Architecture for Deep Learning on Intel FPGAs
May 12, 2017 -- At Build, Microsoft’s annual developer conference taking place this week, Microsoft Azure CTO Mark Russinovich disclosed major advances in Microsoft’s hyperscale deployment of Intel® field programmable gate arrays (FPGAs). These advances have produced the industry’s fastest public cloud network, along with new technology for accelerating Deep Neural Networks (DNNs), which replicate “thinking” in a manner conceptually similar to that of the human brain.
The advances offer performance, flexibility and scale, using ultra-low-latency networking to leverage the world’s largest cloud investment in FPGAs. These networking speed gains will help businesses, governments, healthcare providers, and universities better process Big Data workloads. Azure’s FPGA-based Accelerated Networking reduces inter-virtual-machine latency by up to 10x while freeing the Intel® Xeon® processors for other tasks.
Russinovich also outlined a new cloud acceleration framework that Microsoft calls Hardware Microservices, with an infrastructure built on Intel® FPGAs. This new technology will enable accelerated computing services, such as Deep Neural Networks, to run in the cloud without any software required, resulting in large gains in speed and efficiency.
“From our early work accelerating Bing search using FPGAs added to the Intel Xeon processor-based servers, to this new Hardware Microservices model that underlies the Deep Neural Networks (DNNs) infrastructure that Mark discussed yesterday afternoon, Microsoft is continuing to invest in novel hardware acceleration infrastructure using Intel® FPGAs,” said Doug Burger, one of Microsoft’s Distinguished Engineers.
“Application and server acceleration requires more processing power today to handle large and diverse workloads, as well as a careful blending of low power and high performance—or performance per Watt, which FPGAs are known for,” said Dan McNamara, corporate vice president and general manager, Programmable Solutions Group, Intel. “Whether used to solve an important business problem, or decode a genomics sequence to help cure a disease, this kind of computing in the cloud, enabled by Microsoft with help from Intel FPGAs, provides a large benefit.”
See Microsoft Azure CTO Mark Russinovich’s presentation.
Learn more about Intel’s FPGAs for computing and storage.