CEO Interview: Charlie Janac of Arteris IP
by Daniel Nenni on 08-28-2020 at 6:00 am

Charlie Janac is president and CEO of Arteris IP, where he is responsible for establishing and growing a strong global presence for the company, which pioneered the concept of network-on-chip (NoC) technology. Charlie’s career spans more than 20 years and multiple industries, including electronic design automation, semiconductor capital equipment, nanotechnology, industrial polymers and venture capital.

In the first decade of his career, he held various marketing and sales positions at Cadence Design Systems (NYSE: CDN), where he helped build it into one of the ten largest software companies in the world. He joined HLD Systems as president, shifting the company’s focus from consulting services to IC floorplanning software and building the management, distribution and customer support organizations. He then formed Smart Machines, a manufacturer of semiconductor automation equipment, and sold it to Brooks Automation (NASDAQ: BRKS). After a year as Entrepreneur-in-Residence at Infinity Capital, a leading early-stage venture capital firm, where he consulted on information technology investment opportunities, he joined Nanomix as president and CEO, helping build this start-up nanotechnology company. Mr. Janac holds B.S. and M.S. degrees in Organic Chemistry from Tufts University and an M.B.A. from the Stanford Graduate School of Business.

Why is on-chip interconnect important for SoC innovation?
System-on-chip architectures are rapidly changing because we are moving from “data processing” chips to SoCs able to execute “decision making” models. The on-chip interconnect is the logical and physical means to create the SoC architecture, so the importance of the network-on-chip (NoC) interconnect has increased as the need for architectural innovation has grown. As machine learning capabilities are incorporated into a wider variety of SoCs, the new dataflow patterns driven by these heterogeneous processing systems are driving far-reaching innovation in interconnect IP. Cache coherency is becoming more common to simplify software development and reduce system latency in these multicore SoCs. And the size of these machine learning subsystems is sometimes forcing chip architects to split their designs over multiple dies or packages, which is causing innovation in chiplet connectivity IP that is tightly coupled with, or even within, the interconnect. The bottom line is that interconnect IP is becoming more complex and important as the number and complexity of SoC IP blocks grow and data flows become more sophisticated due to machine learning and cache coherent traffic.

What developments do you see that Arteris IP is able to address?
We’re at a very exciting time because an important ingredient for performant SoCs has clearly become the on-chip interconnect, and all the SoC architectural changes by our customers are influencing our technology development. Soon you’ll see on-chip cache coherency interconnect IP with features such as multilevel caching of metadata and the ability to handle multiple cache coherency protocols for processors of different characteristics, specifically both ARM CHI and ACE protocols simultaneously. Interconnect requirements for machine learning subsystems and SoCs have inspired specialized features such as broadcast and multicast, and the generation of large meshes to improve delivery of machine learning SoCs. In other types of SoCs, target/initiator “tree” topologies are more efficient in terms of area, power and latency, so flexibility is a key interconnect aspect.
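
To make that topology trade-off a bit more concrete, here is a toy hop-count comparison between a square mesh and a balanced target/initiator tree. The fan-out, sizes and the worst-case-hops metric are the editor’s illustrative assumptions, not figures or tooling from Arteris:

```python
# Toy comparison of hop counts (one proxy for latency) between a 2D mesh and
# a balanced target/initiator tree. Fan-out, sizes and the metric itself are
# illustrative assumptions; real topology choices also weigh bandwidth,
# area, power, wire length and congestion.
def mesh_hops(n):
    """Worst-case hops across an n x n mesh (corner to corner)."""
    return 2 * (n - 1)

def tree_hops(endpoints, fanout=4):
    """Worst-case hops between leaves of a balanced tree: up to the root and back."""
    depth, capacity = 0, 1
    while capacity < endpoints:        # add levels until every endpoint fits
        depth += 1
        capacity *= fanout
    return 2 * depth

for n in (4, 8, 16):
    e = n * n
    print(f"{e:4d} endpoints: mesh worst-case {mesh_hops(n):2d} hops, "
          f"tree worst-case {tree_hops(e)} hops")
```

In this toy model the tree wins on worst-case hops (and usually area), while the mesh offers many parallel paths, which is what the bandwidth-hungry machine learning dataflows mentioned above exploit.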

System-level and design methodology considerations also guide our technology. Tighter integration of NoC interconnects and industry-standard memory controllers creates the opportunity for end-to-end quality-of-service (QoS), i.e., system-level runtime bandwidth and latency regulation, and ECC data protection. And we’re tightening the links between the logical/RTL and physical/floorplan “views” of NoC interconnects, which reduces the number of place-and-route design cycles, shortening engineering schedules. In summary, Arteris IP is a critical enabler in the realization of these new SoC architectures through our unified cache coherent and non-coherent interconnect solutions based on network-on-chip technology.
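
As a generic illustration of what runtime bandwidth regulation means, here is a standard token-bucket regulator in a few lines. This is the textbook concept only, under made-up parameters, and is not a description of Arteris’s QoS implementation:

```python
# Generic token-bucket regulator, a common way to implement runtime
# bandwidth regulation. Purely illustrative; parameters are invented.
class TokenBucket:
    def __init__(self, rate_bytes_per_cycle, burst_bytes):
        self.rate = rate_bytes_per_cycle   # sustained bandwidth allowance
        self.burst = burst_bytes           # maximum short-term burst
        self.tokens = burst_bytes

    def tick(self):
        """Called once per cycle to replenish tokens up to the burst limit."""
        self.tokens = min(self.burst, self.tokens + self.rate)

    def try_send(self, packet_bytes):
        """Allow the packet only if enough bandwidth credit has accumulated."""
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# Example: an initiator limited to ~32 bytes/cycle with a 256-byte burst window.
bucket = TokenBucket(rate_bytes_per_cycle=32, burst_bytes=256)
sent = 0
for cycle in range(100):
    bucket.tick()
    if bucket.try_send(64):                # try to issue a 64-byte packet
        sent += 1
print(f"packets sent in 100 cycles: {sent}")
```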

How has the importance of NoC interconnect changed since 2016?
First, SoCs have become so much larger that the need for interconnect scalability has greatly increased. The number of IP blocks connected in our customers’ chips is now often in the hundreds. And dataflow complexity is increasing as many of these IP blocks are combined into hierarchical subsystems. Many NoC interconnect instances now exceed 10M gates, which used to be the size of an entire chip a few years ago. With larger die sizes comes higher power consumption, and our NoC interconnect technology has highly effective gating and power management functionality to minimize power consumption.

Second, on-chip dataflow requirements have changed and often conflict with each other. For example, on-chip bandwidth demands for some of our customers’ designs exceed 1 terabit/second, but these designs also often have portions with critical latency requirements, such as when processing elements are communicating with memories. The importance of latency optimization, not just raw on-chip bandwidth, has grown as demands for overall SoC performance have increased, and without the configurable flexibility inherent in our NoC technology these requirements would inexorably conflict. Being able to model and implement such different use cases, and then determine a NoC architecture that meets them both simultaneously, puts pressure on the NoC EDA tool sets to deliver maturity, usability and the required automation features. Physical awareness provides knowledge of the location of, and distance between, NoC elements relative to the other IP blocks on the chip floorplan, which supports latency optimization and automated pipeline insertion for rapid timing convergence.
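
For a rough sense of why knowing floorplan distances helps, here is a back-of-envelope estimate of how many pipeline stages a long link might need at a given clock. The wire delay and margin numbers are invented placeholders, not characterized data and not Arteris’s algorithm:

```python
# Toy estimate of pipeline stages needed on a long NoC link so that each wire
# segment fits within one (margined) clock period. All numbers are placeholders.
import math

def pipeline_stages(distance_mm, clock_ghz, wire_delay_ns_per_mm=0.15,
                    timing_margin=0.7):
    """Stages needed so each segment's wire delay fits in the usable clock period."""
    period_ns = 1.0 / clock_ghz
    usable_ns = period_ns * timing_margin            # leave slack for logic/setup
    total_delay_ns = distance_mm * wire_delay_ns_per_mm
    return max(0, math.ceil(total_delay_ns / usable_ns) - 1)

print(pipeline_stages(distance_mm=6.0, clock_ghz=2.0))   # a long cross-die link at 2 GHz
print(pipeline_stages(distance_mm=6.0, clock_ghz=0.8))   # the same link at a lower clock
```

The same physical link needs more pipeline stages at higher clock frequencies, which is exactly the kind of decision that benefits from knowing distances up front rather than discovering them after place and route.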

Third, increasing overall SoC performance requires running the interconnect and its associated on-chip memories at higher frequencies, sometimes at the same clock frequency as the fastest processing element IP. But these designs often need to run at much lower frequencies to save power when dataflow is quiescent. State-of-the-art interconnect must be able to support everything from low frequencies for low-power modes all the way up to 2 GHz+ for high-performance designs.

Fourth, automated verification uses the information from NoC generation to automatically output test benches in a fraction of the time required for manual verification.

The expense of delivering state-of-the-art interconnect solutions has also increased because NoC R&D investment must keep pace with overall SoC innovation. Of course, the value of NoC interconnect has increased so much that it is now one of the most important IPs in an SoC. NoC interconnect has become a key determinant of on-time SoC delivery and feature quality.

How are hyperscalers like Google, Amazon, Facebook, Alibaba, Baidu and Microsoft affecting chip markets and the value chain?
It’s no secret that hyperscalers (or system houses) are increasingly designing SoCs in-house, and many of these companies have become some of our most innovative users. This could be a pendulum that will swing back to commercial silicon, or it could be a permanent trend. Today, some very exciting SoCs are being designed by hyperscaler companies, and the desire to build one’s own chip is being driven by the need to tightly integrate hardware and software to perform tasks unique to the hyperscaler, especially around machine learning and autonomous vehicles. Many of these companies deliver their value in the form of advanced software, so they are building silicon to support this proprietary software more effectively. The software is driving the chip design, rather than the other way around.

Very few of the hyperscalers are doing the entire SoC design in-house, including layout, because this requires a large investment and large SoC volumes to be economical. Most are designing the SoC architecture and RTL and partnering with semiconductor companies or design houses for the back-end physical design implementation. Because many of these companies are newer to chip design, they are focusing on delivering their core semiconductor IP value and using commercial IP for the parts of the SoC where they are not targeting differentiation. This approach reduces the risk and cost of SoC delivery compared to trying to develop everything in-house.

What is the status of the automotive market? Arteris is the interconnect IP technology market leader for automotive applications, so what do you see, especially in automated driving?

There are several counter-trends in the automotive market. Shelter-in-place orders reduced car utilization, but fear of public transportation and the move from central cities to areas with more space will be positive for car sales. Overall, there may be a temporary decline, but the car market will return to its previous health fairly quickly.

Some automated driving projects are being delayed and simplified by some companies, though not by everyone. A few are investing heavily during this downturn while their competitors wait it out. And it’s not just semiconductor companies who are investing; it’s also Tier 1 automotive suppliers and OEMs making their own chips. It is now clear that you can achieve Level 4 driving on a highway, but Level 4 driving in cities is challenging from legal, regulatory and technological perspectives. Whoever gets automated driving right will have a tremendous competitive advantage, so the companies that can afford to invest in the downturn will gain substantial market share in the upturn.

Value in cars is steadily moving from the mechanics to the software and silicon. Electrification and ECU consolidation continue. Tesla is moving the entire automotive world to invest by innovating in batteries, electric motors, charging infrastructure and automated driving, which is pushing other automotive players to keep pace. All this change in technology and business models is leading to a struggle among automotive semiconductor players, Tier 1 suppliers and automotive OEMs over who will take the lead on transportation value delivery.

How is Arteris IP doing, and what challenges is it currently facing in this new world?
Arteris IP has emerged as the technology leader in the interconnect IP space with the delivery of the Ncore cache coherent interconnect, the FlexNoC AI Package, the Resilience Package and the CodaCache last-level cache. We are an emerging industry standard and the trusted “go-to” company for on-chip interconnect technology. As with IBM, nobody will get fired for licensing Arteris IP! This is being recognized by the market, to the point that Arteris IP has 145+ customers and billions of Arteris-connected SoCs in production.

We’ve worked very hard to get here, and we’ve invested more than any company in the world in interconnect technology. But it’s our investment in our people that is most important. Our focus on customer support by our sales, application engineering and engineering teams has become so critical to our success that we do not view ourselves as being in the interconnect IP business, but rather in the business of helping our customers deliver their SoCs. We strive every day to earn the privilege of being trusted partners to the SoC architecture, design, verification, functional safety and back-end teams who rely on us.

A new challenge that we are all facing is the COVID-19 pandemic, which has introduced uncertainty into the semiconductor business. Customers are now asking about financial stability, cash balances and other supply chain concerns. In the short term, these concerns may lead to further IP supplier consolidation.

I also think the COVID crisis will result in important permanent changes. I think we will see the current “deglobalization” trend transform into what I would call “regionalism”: the world will be divided into the US and its associated countries, China and its associated countries, and the EU and its associated countries. This will make it very important for companies to “regionalize” products and support to look American to the Americans, Chinese to the Chinese and European to the Europeans. Medium-sized nations will not have the scale, by themselves, to invest in major technology initiatives such as semiconductor fabs, regional-scale transportation projects or major space programs, so they will have to work on a regional scale to fund these. This regionalization will provide opportunities and risks for all international companies.

Also Read:

CEO Interview: Anna Fontanelli of Monozukuri

CEO Interview: Isabelle Geday of Magillem

CEO Interview: Ted Tewksbury of Eta Compute
