A Tale of Three Networks: AI Networking Infrastructure Surge Spans Multiple Protocols and Technologies

Scale-Up (Ethernet, NVLink, PCIe, UALink), Scale-Out (Ethernet, InfiniBand), and Front-End (Ethernet) Contribute to $50B+ in Incremental Networking Opportunity This Decade

AI’s extreme performance demands are driving a divergence in networking architectures, with distinct protocols, technologies, and topologies across components. We expect companies to take on multiple of these challenges and develop broad portfolios by leveraging existing IP building blocks. Just as companies like Marvell approached custom silicon with the Hyperscalers, we expect Hyperscalers to drive requirements and commonality as the only way forward. For example, UALink uses Ethernet SerDes to reach higher speeds sooner, and co-packaged optics (CPO) will change how networking tiles get deployed on XPUs just as much as it influences an Ethernet switch.

Scale-Up (Ethernet, NVLink, PCIe, UALink)

Scale-Up networking went from an HPC niche to a billion-dollar market in a single quarter and is on track to be a stand-alone $25B market by the end of the decade in switch hardware alone. The opportunity is even larger when the I/O and IP on the XPU cards are included. Scale-Up is the highest-performance, lowest-latency segment. Within the copper domain, some implementations even extend liquid cooling into the midplane/backplane copper. This market is the most diverse in protocols and technologies and will see a massive inflection point when deployments become multi-rack. We expect it to stay copper for as long as possible, but to become a larger opportunity for CPO once the CPO crossover arrives. Over the near term, NVLink will dominate this segment.

Scale-Out (Ethernet, InfiniBand)

Ethernet surpassed InfiniBand in 4Q’24 or 1Q’25, depending on how one accounts for optics. From here on, Ethernet will drive all of the growth in the market as 800 Gbps goes through its exponential growth phase. We expect Scale-Out to be the test bed for CPO, as it is an enclosed fabric with consistent link distances and no outside interoperability requirements. Optical modules will likely continue to dominate this segment, but CPO will make up a growing share of the market as we exit the decade.

Front-End (Ethernet)

The Front-End network is the most challenging market to track, as the line between AI connections and existing infrastructure blurs across product offerings and network locations. Even so, 1.6 Tbps is right around the corner here. We expect the market to remain dominated by optical modules, but with innovations such as transmit-retimed-only (TRO) modules, which retime only the transmit path and can reduce power by 30-40%. This will put 1.6 Tbps modules comfortably below 15 watts.
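The power claim lends itself to a quick back-of-the-envelope check. The sketch below is illustrative only: the 30-40% savings and the sub-15-watt target come from the paragraph above, while the ~20 W baseline for a fully retimed 1.6 Tbps module is an assumption made purely for the arithmetic.

# Illustrative sketch of the TRO power math (assumptions noted inline).
baseline_watts = 20.0                 # assumed power of a fully retimed 1.6 Tbps module
for savings in (0.30, 0.40):          # TRO power-reduction range cited above
    tro_watts = baseline_watts * (1.0 - savings)
    print(f"{savings:.0%} savings -> {tro_watts:.1f} W per module")
# Prints 14.0 W at 30% savings and 12.0 W at 40% savings, both below 15 W.

Under that assumed baseline, even the low end of the cited savings range lands below the 15-watt mark; a higher baseline would rely on the upper end of the range.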

On the data center interconnect (DCI) front, planned ZR, ZR+, and Coherent Lite implementations continue to grow as facilities and buildings are connected. While AI and the chase for power are strong drivers for DCI, the market is also at an inflection point, with a record-setting data center footprint that needs to be connected. After all, the more than 5,000 US data centers do not operate in isolation.

Future Convergence

Today, and for most of this decade, these networks will innovate separately as the rush for tokens outweighs operational efficiency. At some point, however, a significant portion of these networks will begin to converge. While we expect that convergence sometime next decade, Hyperscalers and the supply chain will clearly head in that direction and put the building blocks in place to make sure convergence does not catch them by surprise. It is safe to say this happens in the Tbps camp; we will not have to wait for Pbps for AI fabrics to converge.