Scale-up, Scale-out (Backend), and Frontend All Achieve Record Revenue with 2025 Expected to Exceed $10B
Networks need to be designed specifically for AI workloads to achieve maximum performance; traditional networking is not enough. 2024 showed this with the rapid growth of InfiniBand, significant enhancements and purpose-built Ethernet products, and the start of scale-up network deployments. Each network element is designed for a specific task within the AI cluster. 650 Group expects the drive towards maximum performance to continue in 2025 with many technology breakthroughs. We view both training and inference as needing purpose-built networks to fully maximize the user/application experience and monetize the GPUs/XPUs. Suboptimal networking can waste billions of dollars in processor cycles or require expensive restarts.
Scale-up
2024 was the first time we saw scale-up networks escape the server enclosure. NVIDIA’s NVL72 delivered rack-level scale-up for the first time outside of specialized supercomputers. Scale-up is a key technology that will grow rapidly in 2025. Based on our 4Q24 Networking AI report, we project scale-up networking to more than double in 2025, exceeding $10B. We project NVLink to be the most common technology, but we also forecast adoption of UALink, PCIe, and Ethernet.

Scale-out (Backend)
Scale-out infrastructure is rapidly moving towards Ethernet and is the key driver of 800G growth in 2025. While InfiniBand remains a key technology, Ethernet will be the dominant technology for scale-out by the end of 2025, and we will start to see the early 1.6T ramp. For 2025, scale-out networking will grow by over 100% and exceed $8B in revenue (excluding optics).
For Ethernet in 2024, the top three vendors for scale-out backend were Celestica, NVIDIA, and Arista (Figure 1). NVIDIA’s growth in scale-out Ethernet made it the fastest-growing vendor in Data Center Switching for 2024.
Frontend
Frontend networks connect the AI cluster to the rest of the data center and play a critical role in feeding data to training models and connecting users for inference. While frontend networks look more like a traditional data center fabric from the x86 world, they are still bandwidth-intensive and going through a rapid speed migration. For 2025, frontend networking will grow by over 100% and exceed $5B in revenue.
For Ethernet in 2024, the top four vendors for frontend were Arista, Celestica, NVIDIA, and Cisco.
New Market for Networking
All the above market sizes and vendor shares are incremental. Networking for AI is turning into a massive market in its own right and is driving the overall data center switching market to new heights.
Where the market goes in the next 12-18 months
The next few quarters will be exciting for the Networking for AI space. Ethernet will become the predominant technology for AI networking, surpassing InfiniBand as 800G ramps and 1.6T takes form. The 800G cycle for AI will set records for revenue and ports. We will begin to see co-packaged optics (CPO) in production instead of just announcements (let’s not forget that copper also plays an important role), with many vendors highlighting their plans between now and OFC. We will also begin to see technology that enables the convergence of networks for AI.