Marvell’s Five-Year Agreement with Amazon’s AWS Showcases Increased Partnerships Between Suppliers and Hyperscalers

As We Approach $100B/Hyperscaler in CAPEX, Supply Chain Will Continue to Adjust

As Hyperscalers continue to push the envelope of what is technically feasible, we will see more announcements like the one earlier this month between Amazon and Marvell. The need to deliver leading-edge technology at a faster pace will drive significant collaboration between Hyperscalers and the semiconductor and system companies that supply them.

A Supply Chain Win for Everybody

Closer collaboration will reduce time-to-market and ease the supply chain challenges of scaling in several ways. First, semiconductor companies with better demand visibility across multiple chip designs can apply that expertise to improve yields in the fab and strengthen their fab relationships. Second, with Hyperscalers buying in 100K to 1M+ unit volumes, the ability to secure wafers and final assembly capacity, and to reuse IP across designs, simplifies much of the supply chain; scale is a win for both the customer and the supplier. Third, long-term partnerships help drive better roadmap decisions. Fourth, using this announcement as an example, many parts of a chip design can be pre-validated in earlier ASICs: the AI ASIC SerDes can be leveraged in the DSPs, AECs, retimers, and switch ASICs.

Large Markets

The markets addressed by this announcement are significant. In our five-year forecasts, AI ASICs rapidly approach $50B a year, the systems market for DC switching is north of $45B, retimers and AECs are each $1B+, and the DSP business across intra-datacenter and DCI drives $10B+ in transceiver revenue. Adding those together, this announcement will touch a market of over $100B per year in systems revenue by 2028. That is significant and shows how much larger the Hyperscalers and the overall data center market are getting.
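As a rough tally, a minimal sketch of the arithmetic behind the $100B figure is below, using only the segment forecasts quoted in this section and treating each "+" value as a floor:

```python
# Rough tally of the 2028 segment forecasts cited above (approximate, $B per year).
# Figures are the ones quoted in this section; "+" values are treated as floors.
segments = {
    "AI ASICs": 50,                  # "rapidly approach $50B a year"
    "DC switching systems": 45,      # "north of $45B"
    "Retimers": 1,                   # "$1B+"
    "AECs": 1,                       # "$1B+"
    "DSP-based transceivers": 10,    # "$10B+ in transceiver revenue"
}

total = sum(segments.values())
print(f"Combined systems revenue by 2028: >${total}B/year")  # >$107B/year
```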

AI and Scale

AI operates at a different scale than the traditional data center in terms of required power, number of ASICs, and bandwidth. As we look at future AI deployments for training and inference, we will continue to see new architectures emerge to reach these new levels of scale. Today, we see this in the hunt for vast amounts of power and in rapid innovation in scale-up and scale-out networks. Shortly after, we will see more exotic forms of cooling and significant deployments of photonics for those interconnects.

Looking Towards 2025

As we look towards 2025, we expect more collaboration between semiconductor companies and the large Hyperscalers. We also expect some of the T2 Cloud providers to adopt a similar collaborative approach as they look to differentiate their cloud offerings. This collaboration will lead to faster adoption of newer technologies (such as 1.6T) in 2H’25.